Providers at a glance
Whim supports three AI providers, each running natively in task containers. Switch providers per workspace or per task — every provider has access to the same dev environment, terminal, and git tools.

| | CCR + OpenRouter | Claude Subscription | Codex Subscription |
|---|---|---|---|
| Runtime | Cloud Code Runtime (CCR) | Native Claude CLI | Native Codex CLI |
| Auth | Managed by Whim | Your Anthropic subscription | Your ChatGPT account |
| Default model | Claude Sonnet 4.5 | Claude Opus 4.6 | GPT 5.4 |
| Available models | 20+ (Claude, GPT, Gemini, Grok, DeepSeek, more) | Claude family | GPT 5.x family |
| Setup required | None | OAuth token | ChatGPT login |
| CU cost | Container time + API tokens | Container time only | Container time only |
| Fast mode | No | Yes (all models) | Yes (GPT 5.4) |
CCR + OpenRouter
Default provider for all new workspaces. No setup required.
Claude Subscription
Use your Anthropic subscription to run the native Claude Code CLI. Token costs go through your subscription — Whim only charges CU for container time. Best for: heavy usage where your subscription is more cost-effective, native Claude Code experience, maximum capability with Opus 4.6.

Codex Subscription
Use your ChatGPT account to run the native Codex CLI. Token costs go through your OpenAI subscription — Whim only charges CU for container time. Best for: users with existing OpenAI subscriptions, GPT 5.4 capabilities.

Choosing a provider
Easiest setup
CCR + OpenRouter — default, no configuration, access to every model.
Lowest CU cost for heavy usage
Claude Subscription or Codex Subscription — you only pay CU for container runtime. Token costs go through your existing subscription.
Try models from different vendors
CCR + OpenRouter — the only provider with models from Anthropic, OpenAI, Google, xAI, DeepSeek, and others.
Most capable model
Claude Subscription with Opus 4.6, or Codex Subscription with GPT 5.4.
Available models
Claude (Anthropic)
Available via CCR + OpenRouter and Claude Subscription.

| Model | Model ID | Strengths | Best for |
|---|---|---|---|
| Claude Opus 4.6 | claude-opus-4.6 | Highest capability, deep reasoning | Complex architecture, difficult bugs, nuanced refactors |
| Claude Sonnet 4.5 | claude-sonnet-4.5 | Strong balance of speed and quality | General-purpose coding, most tasks |
| Claude Sonnet 4.5 (1M) | claude-sonnet-4.5-1m | Extended 1M token context | Large codebases, cross-file analysis |
| Claude Haiku 4.5 | claude-haiku-4.5 | Fastest Claude model | Quick edits, simple tasks, rapid iteration |
GPT (OpenAI)
| Model | Model ID | Provider | Best for |
|---|---|---|---|
| GPT 5.4 | gpt-5.4 | Codex only | Flagship GPT tasks via native CLI |
| GPT 5.3 | gpt-5.3 | OpenRouter | General-purpose GPT coding |
| GPT 5.3 Codex | gpt-5.3-codex | OpenRouter, Codex | Code-optimized GPT 5.3 |
| GPT 5.3 Codex Spark (Preview) | gpt-5.3-codex-spark | OpenRouter, Codex | Fast, lightweight code tasks |
| GPT 5.2 | gpt-5.2 | OpenRouter | Budget-friendly GPT |
| GPT 5.2 Codex | gpt-5.2-codex | OpenRouter, Codex | Budget-friendly code-optimized GPT |
| GPT 5.1 Codex Mini | gpt-5.1-codex-mini | OpenRouter, Codex | Fastest/cheapest GPT option |
| GPT OSS 120B | gpt-oss-120b | OpenRouter | Open-source GPT variant |
Gemini (Google)
| Model | Model ID | Provider | Best for |
|---|---|---|---|
| Gemini 3 Pro | gemini-3-pro-preview | OpenRouter | Complex reasoning, large context |
| Gemini 2.5 Flash | gemini-2.5-flash | OpenRouter | Fast, cost-effective tasks |
Other models
| Model | Model ID | Provider | Best for |
|---|---|---|---|
| Grok Code Fast | grok-code-fast-1 | OpenRouter | Rapid code generation |
| DeepSeek V3.2 | deepseek-v3.2 | OpenRouter | Cost-effective coding |
| MiniMax M2.5 | minimax-m2.5 | OpenRouter | General coding tasks |
| Qwen3 Coder Next | qwen3-coder-next | OpenRouter | Code-focused tasks |
| Kimi K2.5 | kimi-k2.5 | OpenRouter | General coding tasks |
Setting your default model
Defaults determine which model runs when you create new tasks:

- Workspace default — applies to all members. Set by admins in Settings > Workspace > Defaults.
- User default — overrides workspace default. Set in Settings > My Defaults.
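The precedence order above (and the per-task override described in the next section) can be sketched as a simple resolution function. This is illustrative only; the function and parameter names are hypothetical, not Whim's actual API.

```python
# Hypothetical sketch of Whim's model-default resolution order:
# per-task override > user default > workspace default.
def resolve_model(workspace_default, user_default=None, task_override=None):
    """Return the model ID that a new task would run with."""
    if task_override is not None:
        return task_override
    if user_default is not None:
        return user_default
    return workspace_default

print(resolve_model("claude-sonnet-4.5"))                     # claude-sonnet-4.5
print(resolve_model("claude-sonnet-4.5", "claude-opus-4.6"))  # claude-opus-4.6
```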
Per-task override
Override the default model when creating any task by selecting a different model in the composer toolbar. The override only affects that task.

CU cost by provider
CCR + OpenRouter — CU has two components: container runtime (1 CU per 30 minutes) and API tokens (varies by model). More capable models cost more tokens per CU.

Claude / Codex Subscription — CU covers container runtime only (1 CU per 30 minutes). Token costs go through your subscription, making these more CU-efficient for token-heavy tasks.

Model selection tips
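When weighing model choice against cost, the CU arithmetic described in this section can be sketched as follows. The 1 CU per 30 minutes figure comes from this document; the token CU value and the round-up behavior are assumptions for illustration only.

```python
import math

CU_PER_BLOCK = 1     # from the docs: 1 CU per 30-minute block of container runtime
BLOCK_MINUTES = 30

def container_cu(minutes):
    """Runtime CU, rounded up to whole 30-minute blocks (rounding is an assumption)."""
    return CU_PER_BLOCK * math.ceil(minutes / BLOCK_MINUTES)

def ccr_cu(minutes, token_cu):
    """CCR + OpenRouter: runtime CU plus a model-dependent token CU component."""
    return container_cu(minutes) + token_cu

# A 45-minute task: subscription providers pay runtime only.
print(container_cu(45))        # 2 CU
print(ccr_cu(45, token_cu=3))  # 5 CU, with 3 CU of (hypothetical) token cost
```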
Start with the default, then adjust
The default model for each provider is a good general-purpose choice. Switch if you need more speed or capability.
Flagship models for complex tasks
Claude Opus 4.6 and GPT 5.4 for deep reasoning — complex debugging, large refactors, architectural decisions.
Lighter models for simple tasks
Claude Haiku 4.5, Gemini 2.5 Flash, and GPT 5.1 Codex Mini for simple edits, boilerplate, and quick fixes.
Extended context for large codebases
Claude Sonnet 4.5 (1M context) when the agent needs to reason across many files simultaneously.
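The tips above amount to a rough task-to-model mapping. As a hypothetical sketch (the model IDs come from the tables in this document; the mapping itself is illustrative, not a Whim feature):

```python
# Illustrative lookup condensing the model selection tips; not part of Whim.
TIP_MODEL = {
    "quick-edit": "claude-haiku-4.5",         # lighter models for simple tasks
    "general": "claude-sonnet-4.5",           # default, good general-purpose choice
    "complex": "claude-opus-4.6",             # flagship for deep reasoning
    "large-codebase": "claude-sonnet-4.5-1m", # extended 1M context
}

def suggest_model(task_kind):
    """Fall back to the general-purpose default for unrecognized task kinds."""
    return TIP_MODEL.get(task_kind, "claude-sonnet-4.5")

print(suggest_model("complex"))  # claude-opus-4.6
```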

