Iterating on Generated Code¶
First generations are almost never finished products. The real workflow is: generate → run → decide what's wrong → prompt for the fix → run again. This page is the practical guide to that loop.
Two kinds of changes¶
Whittl routes every modification through one of two paths based on what you're asking:
Surgical edit (fast, cheap)¶
- When it's used: small, targeted changes. "Change the button color." "Rename this function." "Fix the error on line 47."
- How it works: the AI reads the current code, calls an `edit_code` tool targeting just the lines that need to change, and commits the diff.
- Cost: typically $0.005–$0.05 per modification on Claude; $0.001–$0.01 on cheaper models.
- Speed: 5–30 seconds.
Full regeneration (structural refactors)¶
- When it's used: big structural changes. "Split this into multiple files." "Convert from tkinter to PySide6." "Completely redesign the UI."
- How it works: the AI rewrites the file(s) from scratch, incorporating your new requirements plus the existing functionality.
- Cost: 5–15× more than a surgical edit depending on file size.
- Speed: 30 seconds to 2 minutes.
Whittl's planner decides which path each prompt takes. You'll see:
- `[PLANNER] Action: modify_code` in the log = surgical edit
- `[REFACTOR] Structural refactor detected — skipping surgical/diff fast paths` = full regeneration
You don't need to think about this explicitly. Phrase the prompt naturally; the planner routes correctly in 95%+ of cases.
Writing good follow-up prompts¶
The best follow-ups are specific about what's wrong AND specific about what should happen instead. Compare:
Vague follow-up (poor results)
"Make it better."
The AI has no idea what "better" means. It guesses. You get something arbitrary.
Specific follow-up (good results)
"The settings dialog is too tall — constrain it to at most 500px and let content scroll if needed."
The AI knows exactly what to change (dialog height), what the constraint is (500px max), and what the escape hatch should be (scrolling).
Other useful patterns:
- "When X happens, Y should happen instead" describes a user-visible behavior change clearly.
- "Add X near Y" gives the AI a locator for the change.
- "Like the one in file Z" tells the AI to mirror an existing pattern rather than invent a new one.
The iteration loop¶
Typical session with ~5 iterations on a modest project:
- Generate the first version. Don't over-specify. Let the AI propose.
- Run it (Test Run / F5). Look at what's wrong.
- Fix structural issues first. Missing features, wrong layout, broken interactions.
- Then polish details. Colors, spacing, specific text.
- Then edge cases. Empty state handling, validation messages, keyboard shortcuts.
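As a concrete example of the edge-case pass, a follow-up like "show a friendly message when the list is empty" typically lands as a small guard clause. A framework-agnostic sketch (the function name and message are invented for illustration):

```python
def render_task_list(tasks):
    """Render tasks as text. The empty-state branch is exactly the kind
    of detail that tends to arrive in iteration 4-5, not iteration 1."""
    if not tasks:
        # Empty-state handling added in a later polish pass
        return "No tasks yet. Press Ctrl+N to add one."
    return "\n".join(f"[{'x' if done else ' '}] {title}" for title, done in tasks)

print(render_task_list([]))
print(render_task_list([("Write docs", True), ("Ship", False)]))
```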
Each prompt is typically 1-3 sentences. Don't write paragraphs — the AI's context window is finite and longer prompts bury the actual change.
When surgical edits work¶
The surgical path is best for 80% of follow-ups:
- "Change the accent color to navy."
- "Make the Save button auto-disable when the form is invalid."
- "Add a tooltip to the settings icon explaining what it does."
- "Fix the crash on line 147."
- "Rename `processData()` to `processRawData()` everywhere."
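Conceptually, a rename like the last example is a pure text-level edit. This toy stand-in is not Whittl's actual implementation, just an illustration of what a surgical rename amounts to (a real tool would be syntax-aware rather than regex-based):

```python
import re

def rename_identifier(source: str, old: str, new: str) -> str:
    """Toy sketch of a surgical rename: replace whole-word occurrences
    of one identifier, leaving every other line untouched."""
    return re.sub(rf"\b{re.escape(old)}\b", new, source)

code = "result = processData(raw)\n# processData strips headers\n"
print(rename_identifier(code, "processData", "processRawData"))
```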
When to ask for regeneration explicitly¶
Sometimes you want a full rewrite of a specific file even though the change itself looks small. Force it:
- "Regenerate main.py from scratch. The current version has accumulated cruft I don't want to preserve."
- "Rewrite the settings dialog — I don't like the structure."
Whittl passes regeneration requests through the full-regen path even when they'd normally qualify for surgical editing.
Running multiple iterations efficiently¶
Keep the chat, don't start over¶
The AI's context window holds your conversation history. Each new prompt builds on the previous ones. If you created the project in the same session and make 10 follow-up prompts, the AI still remembers the original spec and every intermediate change.
Don't start a new project for every iteration. Don't clear the chat unless something has gone off the rails.
Switch backends mid-session for cost¶
Set up a cheap model (Haiku, Qwen free) and a premium model (Sonnet, Gemini Pro). Use cheap for iteration, premium for the one hard problem per hour. Session history carries across the switch — the premium model sees everything the cheap model did.
Fold multiple small changes into one prompt¶
Instead of:
- "Make the button red."
- "Also add a hover effect."
- "Also make it disabled when clicked."
Write: "Make the Submit button red with a hover effect, and disable it after the first click."
One generation instead of three. Same result.
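The folded prompt still describes three independent, verifiable behaviors, which is why quality holds up. A framework-agnostic sketch of the resulting button state (class and method names are hypothetical; a real GUI toolkit manages hover and disabled state itself):

```python
class SubmitButton:
    """Minimal state model for 'red, hover effect, disabled after first click'."""
    def __init__(self):
        self.color = "red"     # change 1: accent color
        self.hovered = False
        self.enabled = True

    def hover(self, inside: bool):
        self.hovered = inside  # change 2: hover effect toggles on enter/leave

    def click(self) -> bool:
        if not self.enabled:
            return False       # further clicks are ignored
        self.enabled = False   # change 3: disable after the first click
        return True

btn = SubmitButton()
assert btn.click() is True   # first click goes through
assert btn.click() is False  # second click is ignored
```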
Don't fold unrelated changes¶
The opposite extreme fails. If you ask for:
- "Change the theme to dark, add a settings menu, implement keyboard shortcuts, and add undo history."
the AI treats it as one massive task. Quality drops because the model tries to cover four features at once instead of one at a time. Split across 2-4 prompts.
The History panel¶
Every generation is saved to a version history. Click the History button in the preview panel bar to see past versions.
What you see:
- Timestamp for each generation
- The prompt that produced it
- A one-line summary of what changed
- A Restore button to roll back
Restore rolls the code back to that version without deleting newer ones. You can restore forward again to get back to where you were.
Restore is a safety net, not a crutch
If you're frequently restoring, your follow-up prompts are too ambitious or too vague. Break them down into smaller pieces; each one is less likely to need a rollback.
Hand-editing generated code¶
You can edit files directly in Whittl's editor or in any external editor (VS Code, PyCharm, vim). The files on disk are canonical.
Caveats:
- If you hand-edit in Whittl, save before the next generation. The AI sees the saved version, not your unsaved changes.
- External edits work fine, but Whittl's in-app editor won't reflect them until you click Refresh or refocus the window.
- Do not edit while a generation is in progress. You'll create conflicts.
Best practice: let the AI do the first pass, then hand-edit detail-level things the AI keeps getting wrong (particular nested margins, very specific copy). For most other changes, keep using the AI — you'll move faster and the code stays consistent with its own style.
Inline AI help (right-click)¶
For single-block fixes or explanations you don't want to bounce to chat for:
- Highlight code → right-click → Explain Selection — AI explains the selection in chat, code untouched. Good for "what does this regex do?" moments.
- Highlight code → right-click → Fix Selection with AI — AI rewrites JUST the selected range. No full-file regeneration, no cross-file edits.
- Explain button in the editor toolbar — toggles whole-file explanations as inline `#:` comments. Cached, so toggling off and back on is instant.
See Explain Mode & Inline AI Help for full details.
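As an illustration, toggling Explain on a short file might annotate it roughly like this. The comment text is invented; actual output depends on the model:

```python
#: Reads a CSV-like string and sums its second column.
def total_amount(rows: str) -> float:
    #: Split into lines and skip the header row.
    lines = rows.strip().splitlines()[1:]
    #: The amount is the second comma-separated field on each line.
    return sum(float(line.split(",")[1]) for line in lines)

print(total_amount("name,amount\ncoffee,3.50\nbagel,2.25"))
```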
When to clear chat and start fresh¶
Occasionally a session goes sideways: the AI makes a bad structural choice early, you try to iterate out of it, and every subsequent prompt gets tangled in the bad choice. Signs this has happened:
- The same problem keeps appearing despite fixes
- The AI makes contradictory changes across prompts
- Generation quality has visibly dropped from earlier in the session
Fix: click the trash icon in the chat panel → Clear chat. This clears the AI's conversation history but keeps your code. Then write a new top-level prompt:
"Refactor this so the settings logic isn't scattered across main.py and window.py. Move it to a new core/settings.py module."
The AI reads the current code fresh and tackles it without the baggage of earlier bad guesses.
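A fresh prompt like the one above would typically yield a module along these lines. This is a minimal sketch of a hypothetical `core/settings.py`; the field names are invented:

```python
# Hypothetical core/settings.py: settings logic consolidated in one module.
import json
from dataclasses import dataclass, asdict, field

@dataclass
class Settings:
    theme: str = "light"
    font_size: int = 12
    recent_files: list = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self))

    @classmethod
    def from_json(cls, text: str) -> "Settings":
        return cls(**json.loads(text))

s = Settings(theme="dark")
assert Settings.from_json(s.to_json()) == s  # settings round-trip cleanly
```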
When to rollback vs. iterate¶
| Situation | Action |
|---|---|
| Code compiles, runs, but isn't quite right | Iterate — next prompt fixes it |
| Code crashes with a traceback | Iterate — auto-fix will try first, you can pitch in |
| Code works but you made a bad earlier decision | Restore to the last good version, re-prompt |
| The AI keeps making the same mistake | Clear chat + restore + re-prompt with different framing |
What's next¶
- Debugging a Crash — specifically when the "Run" button produces an error
- Auto-fix Rules — what's happening automatically when you iterate
- Skills System — if you keep writing the same follow-up, write a skill instead