# Documentation Index
Fetch the complete documentation index at: https://docs.whim.run/llms.txt
Use this file to discover all available pages before exploring further.
## Providers at a glance
CCR + OpenRouter is not yet available during the alpha period. Claude and
Codex are the two supported providers.
| | Claude Subscription (Recommended) | Codex Subscription | CCR + OpenRouter (Coming Soon) |
|---|---|---|---|
| Runtime | Native Claude CLI | Native Codex CLI | Cloud Code Runtime (CCR) |
| Auth | Your Anthropic subscription | Your ChatGPT account | Managed by Whim |
| Recommended model | Claude Opus 4.6 (1M) | GPT 5.5 | Claude Sonnet 4.5 |
| Available models | Claude family | GPT 5.x family | 20+ (Claude, GPT, Gemini, Grok, DeepSeek, more) |
| Setup required | OAuth token | ChatGPT login | None when launched |
| CU cost | Container time only | Container time only | Container time + API tokens |
| Fast mode | Yes (all Claude models) | Yes (native GPT models) | No |
| Reasoning effort | Yes (Low/Medium/High/Max) | Yes (Low/Medium/High/xhigh) | No |
## CCR + OpenRouter
Coming soon. CCR + OpenRouter is currently disabled during alpha, so it is not
selectable for workspaces or tasks.
## Claude Subscription
Use your Anthropic subscription to run the native Claude Code CLI. Token costs go through your subscription, and Whim charges CU only for container time. Best for: the primary Whim experience during alpha, native Claude Code behavior, and maximum capability with Opus 4.6.

## Codex Subscription
Use your ChatGPT account to run the native Codex CLI. Token costs go through your OpenAI subscription, and Whim charges CU only for container time. Best for: existing OpenAI subscribers, GPT 5.5, and Codex-tuned GPT models.

## Choosing a provider
### Recommended during alpha
Claude Subscription - the broadest current support in Whim, including
Opus 4.6 and Opus 4.6 (1M).
### Lowest CU cost for heavy usage
Claude Subscription or Codex Subscription - you only pay CU for
container runtime. Token costs go through your existing subscription.
### GPT workflows
Codex Subscription - native GPT 5.x support, including GPT 5.5 and
Codex-tuned GPT models.
### Multi-vendor model access
CCR + OpenRouter - coming soon. This is where Google, xAI, DeepSeek,
MiniMax, Qwen, and Moonshot models will land.
## Available models
### Claude (Anthropic)
Available now via Claude Subscription. Selected Claude models also gain CCR support when CCR + OpenRouter launches.

| Model | Model ID | Availability | Strengths | Best for |
|---|---|---|---|---|
| Claude Opus 4.6 | claude-opus-4.6 | Claude now; CCR later | Highest capability, deep reasoning | Complex architecture, difficult bugs, nuanced refactors |
| Claude Opus 4.6 (1M) | claude-opus-4.6-1m | Claude only | Extended 1M token context | Complex architecture with large codebases |
| Claude Sonnet 4.5 | claude-sonnet-4.5 | Claude now; CCR later | Strong balance of speed and quality | General-purpose coding, most tasks |
| Claude Sonnet 4.5 (1M) | claude-sonnet-4.5-1m | Claude now; CCR later | Extended 1M token context | Large codebases, cross-file analysis |
| Claude Haiku 4.5 | claude-haiku-4.5 | Claude now; CCR later | Fastest Claude model | Quick edits, simple tasks, rapid iteration |
Claude Opus 4.6 (1M) is available only through Claude Subscription. It is not
exposed through OpenRouter.
### GPT (OpenAI)
Codex-supported GPT models are available now. OpenRouter-only GPT models will
become available when CCR + OpenRouter launches.
| Model | Model ID | Availability | Best for |
|---|---|---|---|
| GPT 5.5 | gpt-5.5 | Codex now | Latest GPT tasks via native CLI |
| GPT 5.4 | gpt-5.4 | Codex now | Previous flagship GPT via Codex |
| GPT 5.3 | gpt-5.3 | OpenRouter when CCR launches | General-purpose GPT coding |
| GPT 5.3 Codex | gpt-5.3-codex | Codex now; OpenRouter later | Code-optimized GPT 5.3 |
| GPT 5.3 Codex Spark (Preview) | gpt-5.3-codex-spark | Codex now; OpenRouter later | Fast, lightweight code tasks |
| GPT 5.2 | gpt-5.2 | OpenRouter when CCR launches | Budget-friendly GPT |
| GPT 5.2 Codex | gpt-5.2-codex | Codex now; OpenRouter later | Budget-friendly code-optimized GPT |
| GPT 5.1 Codex Mini | gpt-5.1-codex-mini | Codex now; OpenRouter later | Fastest and cheapest GPT option |
| GPT OSS 120B | gpt-oss-120b | OpenRouter when CCR launches | Open-source GPT variant |
### Google

Google models are wired up for CCR + OpenRouter and will become selectable when CCR launches.
| Model | Model ID | Availability | Strengths | Best for |
|---|---|---|---|---|
| Gemini 3 Pro | gemini-3-pro-preview | OpenRouter when CCR launches | Complex reasoning, large context | Large-context analysis and difficult reasoning |
| Gemini 3 Flash | gemini-3-flash-preview | OpenRouter when CCR launches | Fast, cost-effective | Quick tasks, rapid iteration |
| Gemini 2.5 Flash | gemini-2.5-flash | OpenRouter when CCR launches | Fast, cost-effective | Quick edits, low-latency tasks |
### xAI
xAI models become available when CCR + OpenRouter launches.
| Model | Model ID | Availability | Best for |
|---|---|---|---|
| Grok Code Fast | grok-code-fast-1 | OpenRouter when CCR launches | Rapid code generation |
### DeepSeek
DeepSeek models become available when CCR + OpenRouter launches.
| Model | Model ID | Availability | Best for |
|---|---|---|---|
| DeepSeek V3.2 | deepseek-v3.2 | OpenRouter when CCR launches | Cost-effective coding |
### MiniMax
MiniMax models become available when CCR + OpenRouter launches.
| Model | Model ID | Availability | Best for |
|---|---|---|---|
| MiniMax M2.5 | minimax-m2.5 | OpenRouter when CCR launches | General coding tasks |
### Qwen
Qwen models become available when CCR + OpenRouter launches.
| Model | Model ID | Availability | Best for |
|---|---|---|---|
| Qwen3 Coder Next | qwen3-coder-next | OpenRouter when CCR launches | Code-focused tasks |
### Moonshot
Moonshot models become available when CCR + OpenRouter launches.
| Model | Model ID | Availability | Best for |
|---|---|---|---|
| Kimi K2.5 | kimi-k2.5 | OpenRouter when CCR launches | General coding tasks |
## Setting your default model
Defaults determine which model runs when you create new tasks:

- Workspace default - applies to all members. Set by admins in Settings > Workspace > Defaults.
- User default - overrides the workspace default. Set in Settings > My Defaults.
CCR + OpenRouter will use the same default-model flow when it launches, but it
is not currently selectable during alpha.
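The precedence described above can be sketched as a small helper. This is an illustrative sketch only, not Whim's actual API: the function name and signature are assumptions for the example.

```python
from typing import Optional

def resolve_default_model(workspace_default: str,
                          user_default: Optional[str] = None,
                          per_task_override: Optional[str] = None) -> str:
    """Return the model a new task would use, per the documented precedence:
    per-task override > user default > workspace default."""
    if per_task_override:
        return per_task_override
    if user_default:
        return user_default
    return workspace_default

# Workspace admin set Sonnet; this user prefers Opus for their own tasks.
print(resolve_default_model("claude-sonnet-4.5",
                            user_default="claude-opus-4.6"))  # -> claude-opus-4.6
```

A per-task override (see below) would win over both defaults, but only for that one task.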
### Per-task override
Override the default model when creating any task by selecting a different model in the composer toolbar. The override only affects that task.

## CU cost by provider
During alpha, the supported providers are Claude Subscription and Codex Subscription, so CU covers container runtime only (1 CU per 30 minutes). Token costs go through your subscription.

CCR + OpenRouter is coming soon. When it launches, CU will have two components: container runtime and API tokens, with token cost varying by model.
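The container-time math above is simple enough to sketch. One assumption not stated in this page: the example rounds partial half-hours up to the next 30-minute block; check your billing details for the actual rounding rule.

```python
import math

# Documented rate: 1 CU per 30 minutes of container runtime.
CU_PER_BLOCK = 1
BLOCK_MINUTES = 30

def container_cu(runtime_minutes: float) -> int:
    """CU charged for container runtime alone (Claude/Codex subscriptions,
    where token costs go through your existing subscription).
    Assumes partial blocks round up -- an illustration, not a billing spec."""
    return CU_PER_BLOCK * math.ceil(runtime_minutes / BLOCK_MINUTES)

print(container_cu(45))  # a 45-minute task spans two 30-minute blocks -> 2
```

Under CCR + OpenRouter, a per-model token charge would be added on top of this container-time figure.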
## Model selection tips
### Start with the default, then adjust
Claude Opus 4.6 (1M) and GPT 5.5 are the current defaults. Switch if you
need more speed, lower cost, or a smaller context window.
### Flagship models for complex tasks
Claude Opus 4.6, Claude Opus 4.6 (1M), and GPT 5.5 for deep reasoning -
complex debugging, large refactors, architectural decisions.
### Lighter models for simple tasks
Claude Haiku 4.5 and GPT 5.1 Codex Mini for simple edits, boilerplate, and
quick fixes. Gemini 3 Flash and Gemini 2.5 Flash join this lane when CCR
launches.
### Extended context for large codebases
Claude Opus 4.6 (1M) or Claude Sonnet 4.5 (1M) when the agent needs to
reason across many files simultaneously.
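The tips above can be condensed into a simple lookup. The model IDs come from the tables on this page; the category names and the mapping itself are this page's editorial groupings, not a Whim feature.

```python
# Illustrative only: condenses the selection tips into a lookup table.
MODEL_BY_NEED = {
    "complex-reasoning": "claude-opus-4.6",     # flagship: hard bugs, architecture
    "large-codebase":    "claude-opus-4.6-1m",  # extended 1M token context
    "general":           "claude-sonnet-4.5",   # balanced speed and quality
    "quick-edit":        "claude-haiku-4.5",    # fastest Claude model
    "cheap-gpt":         "gpt-5.1-codex-mini",  # fastest and cheapest GPT option
}

def suggest_model(need: str) -> str:
    # Fall back to the balanced general-purpose pick for anything unlisted.
    return MODEL_BY_NEED.get(need, "claude-sonnet-4.5")

print(suggest_model("quick-edit"))  # -> claude-haiku-4.5
```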

