Think Mode (Reasoning)

The Think checkbox in the chat panel activates extended reasoning on models that support it. The AI produces an explicit chain of thought before writing code — better quality on hard problems, higher token cost, longer response time.

When to use it

Turn Think ON for:

  • Algorithmic problems ("implement a custom skip list")
  • Complex state machines (multi-step wizards, async flows with branches)
  • Unfamiliar library integration where the AI might guess an API
  • Debugging a tricky crash (pair with a stack trace dump)
  • Multi-file refactors where decisions in one place affect another

Turn Think OFF for:

  • Small edits ("add a label next to the field")
  • UI tweaks ("make the button dark green")
  • Well-established patterns the AI has seen thousands of times
  • Quick iterations where speed matters more than quality

Which models support it

| Backend | Model | Think supported? |
|---|---|---|
| Claude | Haiku 4.5, Sonnet 4.5, Opus 4.5 | Yes (extended thinking) |
| OpenAI (via OpenRouter) | o1, o1-mini, o1-preview | Yes (native) |
| DeepSeek | DeepSeek-R1 | Yes |
| DeepSeek | V3 / V3.2 | No |
| Gemini | 2.5 Pro (thinking), 3.0 | Yes |
| OpenRouter | varies per model | Check capability chips |

When you tick Think on a model that doesn't support it, the checkbox greys out and generation proceeds as if Think were off; no error is raised.
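The graceful fallback can be sketched as follows. Everything here is hypothetical: the model identifiers, the `supportsThinking` helper, and the request-resolution function are illustrations, not Whittl's real internals.

```typescript
// Illustrative set of reasoning-capable models (names are assumptions).
const THINKING_MODELS: Set<string> = new Set([
  "claude-haiku-4.5", "claude-sonnet-4.5", "claude-opus-4.5",
  "o1", "o1-mini", "o1-preview", "deepseek-r1",
]);

function supportsThinking(model: string): boolean {
  return THINKING_MODELS.has(model);
}

// If the model can't reason, the Think flag is silently dropped —
// the generation proceeds normally, with no error.
function resolveThink(model: string, thinkRequested: boolean): boolean {
  return thinkRequested && supportsThinking(model);
}
```

The key design point is that the flag degrades silently rather than failing the request, which matches the greyed-out checkbox behavior described above.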

What it costs

Reasoning tokens count toward your bill, and they can be substantial:

  • Typical non-reasoning generation: 5k-10k output tokens
  • Typical reasoning generation: 15k-30k output tokens (reasoning chain + final answer)

On Claude Sonnet that's roughly $0.05 vs $0.15 per generation. Worth it for hard problems; wasteful for trivial ones.
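The arithmetic behind those figures is just tokens times price. A minimal sketch, where the $10 per million output tokens used in the test is an illustrative price consistent with the estimates above, not a published rate:

```typescript
// Cost in USD = output tokens / 1,000,000 * price per million output tokens.
function generationCostUSD(outputTokens: number, pricePerMTok: number): number {
  return (outputTokens / 1_000_000) * pricePerMTok;
}
```

At that illustrative price, a 5k-token non-reasoning generation costs about $0.05 and a 15k-token reasoning generation about $0.15, tripling the bill for the same request.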

Think + Expand combined

Both checkboxes stack: Expand first rewrites your prompt into a detailed spec, Think then reasons about that spec, and the AI finally writes the code.

This combination is the highest-quality mode Whittl has. Use it on the hard 5% of generations, not the routine 95%.
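The stacked pipeline can be sketched as below. All names are hypothetical placeholders standing in for Whittl's actual Expand and generation machinery:

```typescript
// A generation backend: takes a spec and a think flag, returns code/text.
type Generate = (spec: string, think: boolean) => string;

// Hypothetical pipeline: Expand runs first, producing a detailed spec;
// Think then applies to the expanded spec, not the raw prompt.
function run(prompt: string, expand: boolean, think: boolean, gen: Generate): string {
  // Placeholder expansion step standing in for the real Expand feature.
  const spec = expand ? `DETAILED SPEC FOR: ${prompt}` : prompt;
  return gen(spec, think);
}
```

The ordering matters: because Expand runs before Think, the reasoning chain is spent on the detailed spec rather than the original one-line prompt, which is where the quality gain comes from.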

How the reasoning is displayed

By default, Whittl hides the reasoning chain (it's noisy and most users don't want to read it), showing only a single collapsed "Thinking..." node in chat. Click it to expand the full reasoning trace.

Preferences → Generation → Always show reasoning traces changes the default to always-expanded if you prefer to read along.

Where the setting lives

  • Per-request toggle: the Think checkbox in the chat panel
  • Default setting: Preferences → Generation → Chain of thought / Thinking mode (OFF by default)

Most users leave the default off and toggle on per-request when they're about to ask for something hard.
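The precedence between the per-request checkbox and the preference default might look like this; the function and parameter names are assumptions for illustration:

```typescript
// Per-request toggle wins when the user touched it; otherwise fall back
// to the Preferences → Generation default (OFF unless changed).
function effectiveThink(requestToggle: boolean | undefined, prefDefault = false): boolean {
  return requestToggle ?? prefDefault;
}
```

Note the use of nullish coalescing rather than `||`: an explicit per-request `false` must override a preference default of `true`, not fall through to it.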

What Think doesn't do

  • Doesn't improve non-reasoning models. If you tick Think on DeepSeek V3.2, the checkbox does nothing. Only reasoning-capable models actually reason.
  • Doesn't replace Agent Mode. Agent Mode is about how many rounds the AI gets; Think is about how much the AI reasons within a single round. They compose — Agent Mode + Think is "many rounds of careful reasoning."
  • Doesn't speed things up. Think is slower per-response, always. If latency matters more than quality, leave it off.

What's next