Providers at a glance

Whim supports three AI providers, each running natively in task containers. Switch providers per workspace or per task — every provider has access to the same dev environment, terminal, and git tools.
| | CCR + OpenRouter | Claude Subscription | Codex Subscription |
|---|---|---|---|
| Runtime | Cloud Code Runtime (CCR) | Native Claude CLI | Native Codex CLI |
| Auth | Managed by Whim | Your Anthropic subscription | Your ChatGPT account |
| Default model | Claude Sonnet 4.5 | Claude Opus 4.6 | GPT 5.4 |
| Available models | 20+ (Claude, GPT, Gemini, Grok, DeepSeek, more) | Claude family | GPT 5.x family |
| Setup required | None | OAuth token | ChatGPT login |
| CU cost | Container time + API tokens | Container time only | Container time only |
| Fast mode | No | Yes (all models) | Yes (GPT 5.4) |

CCR + OpenRouter

Default provider for all new workspaces. No setup required.
CCR connects to models through OpenRouter, a multi-model API gateway, giving the widest range of models with zero configuration.
Best for: trying different models, using non-Claude/non-GPT models, getting started quickly.
How it works: Whim manages the OpenRouter connection. API token costs are tracked as CU alongside container runtime. Pick any model when creating a task, or set a workspace default.

Claude Subscription

Use your Anthropic subscription to run the native Claude Code CLI. Token costs go through your subscription; Whim only charges CU for container time.
Best for: heavy usage where your subscription is more cost-effective, the native Claude Code experience, and maximum capability with Opus 4.6.

Codex Subscription

Use your ChatGPT account to run the native Codex CLI. Token costs go through your OpenAI subscription; Whim only charges CU for container time.
Best for: users with existing OpenAI subscriptions, GPT 5.4 capabilities.

Choosing a provider

  • Simplest start: CCR + OpenRouter. The default — no configuration, access to every model.
  • Lowest CU cost: Claude Subscription or Codex Subscription. You only pay CU for container runtime; token costs go through your existing subscription.
  • Widest model range: CCR + OpenRouter. The only provider with models from Anthropic, OpenAI, Google, xAI, DeepSeek, and others.
  • Maximum capability: Claude Subscription with Opus 4.6, or Codex Subscription with GPT 5.4.

Available models

Claude (Anthropic)

Available via CCR + OpenRouter and Claude Subscription.
| Model | Model ID | Strengths | Best for |
|---|---|---|---|
| Claude Opus 4.6 | claude-opus-4.6 | Highest capability, deep reasoning | Complex architecture, difficult bugs, nuanced refactors |
| Claude Sonnet 4.5 | claude-sonnet-4.5 | Strong balance of speed and quality | General-purpose coding, most tasks |
| Claude Sonnet 4.5 (1M) | claude-sonnet-4.5-1m | Extended 1M token context | Large codebases, cross-file analysis |
| Claude Haiku 4.5 | claude-haiku-4.5 | Fastest Claude model | Quick edits, simple tasks, rapid iteration |

GPT (OpenAI)

| Model | Model ID | Provider | Best for |
|---|---|---|---|
| GPT 5.4 | gpt-5.4 | Codex only | Flagship GPT tasks via native CLI |
| GPT 5.3 | gpt-5.3 | OpenRouter | General-purpose GPT coding |
| GPT 5.3 Codex | gpt-5.3-codex | OpenRouter, Codex | Code-optimized GPT 5.3 |
| GPT 5.3 Codex Spark (Preview) | gpt-5.3-codex-spark | OpenRouter, Codex | Fast, lightweight code tasks |
| GPT 5.2 | gpt-5.2 | OpenRouter | Budget-friendly GPT |
| GPT 5.2 Codex | gpt-5.2-codex | OpenRouter, Codex | Budget-friendly code-optimized GPT |
| GPT 5.1 Codex Mini | gpt-5.1-codex-mini | OpenRouter, Codex | Fastest/cheapest GPT option |
| GPT OSS 120B | gpt-oss-120b | OpenRouter | Open-source GPT variant |

Google

| Model | Model ID | Provider | Best for |
|---|---|---|---|
| Gemini 3 Pro | gemini-3-pro-preview | OpenRouter | Complex reasoning, large context |
| Gemini 2.5 Flash | gemini-2.5-flash | OpenRouter | Fast, cost-effective tasks |

Other models

| Model | Model ID | Provider | Best for |
|---|---|---|---|
| Grok Code Fast | grok-code-fast-1 | OpenRouter | Rapid code generation |
| DeepSeek V3.2 | deepseek-v3.2 | OpenRouter | Cost-effective coding |
| MiniMax M2.5 | minimax-m2.5 | OpenRouter | General coding tasks |
| Qwen3 Coder Next | qwen3-coder-next | OpenRouter | Code-focused tasks |
| Kimi K2.5 | kimi-k2.5 | OpenRouter | General coding tasks |

Setting your default model

Defaults determine which model runs when you create new tasks:
  • Workspace default — applies to all members. Set by admins in Settings > Workspace > Defaults.
  • User default — overrides workspace default. Set in Settings > My Defaults.
Each provider has its own default model. When you switch providers, the model switches to that provider’s default unless you’ve set a specific override.

Per-task override

Override the default model when creating any task by selecting a different model in the composer toolbar. The override only affects that task.
Use per-task overrides for a more capable model on complex work, or a lighter model for simple tasks.
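The precedence chain described above (per-task override, then user default, then workspace default, then the provider's built-in default) can be sketched as a small resolution function. This is an illustrative sketch, not Whim's actual API; the function and parameter names are hypothetical.

```python
from typing import Optional

def resolve_model(provider_default: str,
                  workspace_default: Optional[str] = None,
                  user_default: Optional[str] = None,
                  task_override: Optional[str] = None) -> str:
    """Pick the model for a new task: the most specific setting wins.

    Precedence (highest first): per-task override, user default,
    workspace default, then the provider's built-in default.
    """
    for choice in (task_override, user_default, workspace_default):
        if choice is not None:
            return choice
    return provider_default

# Workspace defaults to Sonnet, the user prefers Haiku,
# and one task explicitly opts into Opus:
resolve_model("claude-sonnet-4.5",
              workspace_default="claude-sonnet-4.5",
              user_default="claude-haiku-4.5",
              task_override="claude-opus-4.6")   # -> "claude-opus-4.6"

# No override on the next task: the user default applies.
resolve_model("claude-sonnet-4.5",
              workspace_default="claude-sonnet-4.5",
              user_default="claude-haiku-4.5")   # -> "claude-haiku-4.5"
```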

CU cost by provider

  • CCR + OpenRouter — CU has two components: container runtime (1 CU per 30 minutes) and API tokens (varies by model). More capable models cost more tokens per CU.
  • Claude / Codex Subscription — CU covers container runtime only (1 CU per 30 minutes). Token costs go through your subscription, making these more CU-efficient for token-heavy tasks.
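The two pricing models above can be sketched as a small estimator. This is an illustrative model, not Whim's billing code: the 1 CU per 30 minutes rate comes from this page, while rounding runtime up to whole 30-minute blocks and expressing token cost directly in CU are assumptions made for the example.

```python
import math

BLOCK_MINUTES = 30   # 1 CU per 30-minute container block (from this page)
CU_PER_BLOCK = 1

def estimate_task_cu(container_minutes: float,
                     token_cu: float = 0.0,
                     subscription: bool = False) -> float:
    """Estimate total CU for one task.

    container_minutes: total container runtime for the task
    token_cu: API-token cost already expressed in CU (CCR + OpenRouter
              only; the tokens-to-CU conversion varies by model)
    subscription: True for Claude/Codex Subscription, where tokens are
              billed to your own subscription rather than as CU
    """
    # Assumption: runtime is billed in whole 30-minute blocks, rounded up.
    blocks = math.ceil(container_minutes / BLOCK_MINUTES)
    runtime_cu = blocks * CU_PER_BLOCK
    return runtime_cu if subscription else runtime_cu + token_cu

# 75 minutes of runtime: 3 blocks = 3 CU of container time.
estimate_task_cu(75, token_cu=2.5)        # CCR + OpenRouter -> 5.5 CU
estimate_task_cu(75, subscription=True)   # Subscription     -> 3 CU
```

Under these assumptions, a token-heavy task shifts most of its cost into `token_cu` on CCR + OpenRouter, which is why the subscription providers come out more CU-efficient for that workload.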

Model selection tips

The default model for each provider is a good general-purpose choice. Switch if you need more speed or capability:
  • Claude Opus 4.6 and GPT 5.4 for deep reasoning — complex debugging, large refactors, architectural decisions.
  • Claude Haiku 4.5, Gemini 2.5 Flash, and GPT 5.1 Codex Mini for simple edits, boilerplate, and quick fixes.
  • Claude Sonnet 4.5 (1M context) when the agent needs to reason across many files simultaneously.