

Overview

A task is an AI agent session running in its own isolated container. Write a prompt, pick a provider and model, and Whim spins up a full dev environment where the agent works on your codebase.

Writing your prompt

The task composer sits at the bottom of the sidebar. Type or paste instructions — it supports Markdown formatting (headings, lists, code blocks, links).
Good prompts are specific about the desired outcome. Instead of “fix the bug”, try “Fix the 404 error on the /api/users endpoint when the user ID contains a hyphen.”
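A well-scoped prompt might use Markdown structure like this (the endpoint and steps are illustrative):

```markdown
## Fix 404 on /api/users with hyphenated IDs

Requests like `GET /api/users/abc-123` return 404.

- Reproduce with a user ID containing a hyphen
- Check the route pattern in the users endpoint handler
- Add a regression test covering hyphenated IDs
```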
Press Tab to toggle between creating a task (runs immediately) and a todo (saved to your backlog). Press Cmd/Ctrl + Enter to submit. Your draft is automatically saved per-workspace, so you won’t lose it if you navigate away.

Selecting a provider and model

Click the provider button (e.g. “Claude”) in the composer toolbar to choose between:
| Provider | Runtime | Auth |
| --- | --- | --- |
| Claude | Native Claude Code CLI | Your Anthropic subscription |
| Codex | Native Codex CLI | Your ChatGPT account |
| CCR | Cloud Code Runtime via OpenRouter | Coming soon for alpha users |
CCR/OpenRouter support is coming soon for alpha users. Today, tasks can run with Claude or Codex.
After selecting a provider, click the model button to pick a specific model. Your workspace default is pre-selected. See Providers & Models for the full model list and comparison.

Adding file attachments

Drag and drop files onto the composer, paste from your clipboard, or click the attachment button to browse. Attached files appear as thumbnails below the prompt. Limits:
  • 10 files per task, 25 MB per file
  • Supported: images (PNG, JPG, GIF, WebP, SVG), PDFs, text, code files (TS, JS, Python, Go, Rust, etc.), config (JSON, YAML, TOML), data (CSV, TSV)
  • Executables (.exe, .dmg, .dll, etc.) are blocked
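The limits above amount to a simple pre-upload check. A minimal sketch — the extension lists and function are illustrative, not Whim's actual implementation (the supported-type lists above are abbreviated with "etc.", so this only covers the named formats):

```python
MAX_FILES = 10
MAX_FILE_SIZE = 25 * 1024 * 1024  # 25 MB per file

ALLOWED_EXTENSIONS = {
    ".png", ".jpg", ".gif", ".webp", ".svg",   # images
    ".pdf", ".txt", ".md",                     # documents and text
    ".ts", ".js", ".py", ".go", ".rs",         # code
    ".json", ".yaml", ".toml", ".csv", ".tsv", # config and data
}
BLOCKED_EXTENSIONS = {".exe", ".dmg", ".dll"}  # executables are rejected outright

def validate_attachments(files: list[tuple[str, int]]) -> list[str]:
    """Return a list of problems for (name, size_in_bytes) pairs."""
    problems = []
    if len(files) > MAX_FILES:
        problems.append(f"too many files: {len(files)} > {MAX_FILES}")
    for name, size in files:
        ext = "." + name.rsplit(".", 1)[-1].lower() if "." in name else ""
        if ext in BLOCKED_EXTENSIONS:
            problems.append(f"{name}: executable files are blocked")
        elif ext not in ALLOWED_EXTENSIONS:
            problems.append(f"{name}: unsupported file type")
        if size > MAX_FILE_SIZE:
            problems.append(f"{name}: exceeds 25 MB limit")
    return problems
```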

Choosing a base branch

Click the branch selector to pick which remote base branch the task starts from. The workspace default branch (usually main, meaning origin/main) is pre-selected.
  • The task gets its own isolated branch created from your chosen remote base branch
  • If your workspace has tracked branch images, tasks launched from those branches use pre-built environments for faster startup
The branch selector shows image status indicators when workspace images are available. An exact match means the image perfectly reflects the branch’s current state. A best-fit match means a compatible image is available but may be slightly out of date.
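One way to think about these indicators: compare the commit a pre-built image was captured at against the branch's history. This is a hedged sketch of the idea, not Whim's actual matching logic:

```python
def image_match(image_commit: str, branch_commits: list[str]) -> str:
    """Classify a branch image against a branch's commit history.

    branch_commits is ordered newest-first; index 0 is the branch head.
    """
    if not branch_commits:
        return "none"
    if image_commit == branch_commits[0]:
        return "exact"      # image reflects the branch's current state
    if image_commit in branch_commits:
        return "best-fit"   # compatible, but slightly out of date
    return "none"
```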

Task options

Mode

Toggle between Simple and Orchestrator mode:
  • Simple — the agent works directly on the task (default)
  • Orchestrator — the agent plans and delegates work to child tasks
See Orchestrator Mode for details.

Fast mode and reasoning

For Claude tasks, toggle fast mode to prioritize speed over reasoning depth. For Codex tasks, set the reasoning effort level. See Provider Setup for details.

Parent context

When creating a subtask under an existing task, toggle parent context to link the new task as a child. Child tasks inherit the parent’s branch context.

Multi-try

Multi-try launches the same prompt as multiple independent tasks (up to 4), each in its own container. Because AI agents are non-deterministic, each attempt may take a different approach — letting you compare and pick the best result. To use multi-try:
  1. Write your prompt in the composer
  2. Click the multi-try button in the toolbar to cycle through attempt counts (1 → 2 → 3 → 4 → 1)
  3. Launch the task
All attempts share the same prompt, provider, model, and remote base branch but get their own isolated branches and containers. Multi-try tasks appear as a group in your workspace — compare them side by side, then complete the ones you don’t want and keep the winner.
Multi-try is especially useful for problems with multiple valid solutions — UI implementations, algorithm choices, or architectural decisions.
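The button's count cycling (1 → 2 → 3 → 4 → 1) is plain modular arithmetic:

```python
MAX_ATTEMPTS = 4  # multi-try supports up to 4 parallel attempts

def next_attempt_count(current: int) -> int:
    """Advance the multi-try counter by one click, wrapping 4 back to 1."""
    return current % MAX_ATTEMPTS + 1
```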

Batch creation

Batches let you create multiple distinct tasks or todos in a single operation. Each item gets its own prompt and settings, but they share a common relationship.

When to use batches

  • Decomposing work — break a large initiative into specific child tasks
  • Parallel workstreams — launch several independent features at once
  • Backlog seeding — create a set of todos for a sprint or project

Batch settings

All items in a batch share:
  • Kind — running tasks or backlog todos
  • Relationship — children of a parent task or top-level items
  • Parent — an optional explicit parent for all items
Each item can have its own title, prompt, priority, effort, tags, and assignee.
Batches are limited to 10 items per operation.
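The shared-versus-per-item split above can be pictured as a small data model. This is an illustrative sketch only; the field names are assumptions, not Whim's API:

```python
from dataclasses import dataclass, field
from typing import Optional

MAX_BATCH_ITEMS = 10  # batches are limited to 10 items per operation

@dataclass
class BatchItem:
    # Per-item settings: each item gets its own prompt and metadata
    title: str
    prompt: str
    priority: str = "normal"
    effort: Optional[str] = None
    tags: list[str] = field(default_factory=list)
    assignee: Optional[str] = None

@dataclass
class Batch:
    # Settings shared by every item in the batch
    kind: str                       # "task" (runs now) or "todo" (backlog)
    parent: Optional[str] = None    # optional explicit parent for all items
    items: list[BatchItem] = field(default_factory=list)

    def validate(self) -> None:
        if len(self.items) > MAX_BATCH_ITEMS:
            raise ValueError(f"batches are limited to {MAX_BATCH_ITEMS} items")
```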

Launching and monitoring

Click Send or press Cmd/Ctrl + Enter to launch. Whim will:
  1. Create an isolated Git branch from your selected remote base branch
  2. Spin up a container with your workspace’s dev environment
  3. Start the AI agent with your prompt and any attached files
  4. Stream progress to the terminal in real time
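The four launch steps can be sketched as a pipeline. Every helper below is a stand-in to show the sequence; none of this is Whim's actual API:

```python
def create_branch(base: str) -> str:
    # 1. Create an isolated Git branch from the remote base (stub)
    return f"whim/task-from-{base.replace('/', '-')}"

def start_container(branch: str) -> dict:
    # 2. Spin up a container with the workspace's dev environment (stub)
    return {"branch": branch, "status": "ready"}

def run_agent(container: dict, prompt: str, attachments: list[str]):
    # 3–4. Start the agent and stream status events (stub events)
    yield "Working: reading files"
    yield "Working: writing code"
    yield "Idle"

def launch_task(prompt: str, base_branch: str, attachments: list[str]) -> list[str]:
    """Illustrative outline of the launch sequence described above."""
    branch = create_branch(base=f"origin/{base_branch}")
    container = start_container(branch)
    return list(run_agent(container, prompt, attachments))
```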
Once launched, you’ll see startup progress, agent activity (reading files, writing code, running commands), and status indicators (Working, Idle, Needs Input). You can interact with the terminal, send follow-up prompts, or pause the task at any time.
You cannot launch a task if your provider is not connected, you’re offline, or your compute unit balance is exhausted.