Skills System¶
Whittl's skill system is a library of markdown files stored in ~/.whittl/skills/. Before each generation, relevant skills are injected into the AI's system prompt, giving it guidance drawn from past sessions, framework-specific patterns, and explicit rules you've taught it.
What makes Whittl's skills different from other tools' equivalents:
- Self-improving. Auto-fix patterns that correct real AI mistakes automatically become new skills. Your library grows without manual effort.
- First-class GUI. Edit, add, or disable skills through a dialog inside Whittl, not by hand-authoring YAML in a text editor.
- User-curatable. You decide what's in or out. Nothing is forced.
- Visible at injection time. Whittl logs which skills landed in which prompt so you can debug exactly what the AI saw.
Where skills live¶
~/.whittl/skills/
├── _auto_learned.md <- patterns captured from successful auto-fixes
├── code-quality.md <- general good-code guidance
├── coding-standards.md <- formatting, style rules
├── customtkinter-rules.md <- CTk-specific gotchas
├── flet-common-mistakes.md <- Flet framework quirks
├── flet-mobile.md <- mobile-specific Flet rules
├── flet-rules.md <- general Flet rules
├── pyside6-patterns.md <- PySide6 best practices
└── threading-patterns.md <- cross-thread safety in Qt
These ship with Whittl as starters. New skills you create through the GUI land here too.
The skills dialog¶
Open it from Edit → Preferences → Skills (or the cog icon → Skills tab).
The dialog shows every skill file with a checkbox indicating whether it's active in the current session, a summary of the skill's intent, and edit/add/remove buttons.
Activating and deactivating¶
Unchecking a skill disables it for generation. Useful when:
- You're working on a CustomTkinter app and don't want PySide6 rules injected
- You're experimenting with a new approach and want to bypass skill guidance temporarily
- A skill's guidance is outdated or wrong and you haven't had time to fix it
Re-checking reactivates. Changes take effect on the next generation.
Editing¶
Click a skill to open the markdown editor. Edit freely; save. Whittl re-parses the file on next generation.
The format is loose — Whittl doesn't enforce strict structure. Most skill files have:
- A brief summary at the top
- Headers like `## WRONG` and `## RIGHT` with code examples
- Short bullets with rationale
The closer you keep your skill to how the AI should read it (as a set of rules), the more effectively the AI applies it.
Creating a new skill¶
Click Add New. A blank editor opens. Write your rule in markdown:
# Large QListWidget rows need QListWidget.setUniformItemSizes(True)
When a QListWidget contains 1000+ items with similar heights, setting
`setUniformItemSizes(True)` dramatically improves scroll performance.
## WRONG
list_widget = QListWidget()
list_widget.addItems(large_item_list)
## RIGHT
list_widget = QListWidget()
list_widget.setUniformItemSizes(True) # critical for large lists
list_widget.addItems(large_item_list)
Save. The skill now gets injected into future prompts where relevant.
Deleting¶
Click Remove. The file is moved to a .trash/ subfolder (not permanently deleted — you can restore manually if needed).
Frontmatter triggers + lazy loading (v2.4+)¶
v2.4 introduced content-aware skill injection. Skills can declare YAML frontmatter that gates injection on what the active project actually contains and controls how much of the skill lands in the prompt. The goal: a typical "fix this typo" request shouldn't cost 2 KB of skill tokens that aren't relevant.
The format¶
---
triggers:
  imports: [cv2, opencv-python]
  framework: [pyside6]
eager: false
summary: "OpenCV + numpy patterns for image processing"
---
# cv2 + numpy Patterns
...rules...
Every field is optional. Skills without frontmatter behave exactly like in v2.3 — always-on, full content injected. No migration is required for an existing skill library.
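The split itself is mechanical: if the file starts with a `---` fence, everything up to the closing `---` is frontmatter, otherwise the whole file is body. A minimal sketch (the helper name `split_frontmatter` is hypothetical; Whittl's actual parser is internal):

```python
import re

# Frontmatter = a leading "---" fence, its contents, and a closing "---".
FRONTMATTER_RE = re.compile(r"\A---\s*\n(.*?)\n---\s*\n", re.DOTALL)

def split_frontmatter(text):
    """Return (frontmatter, body). Skills without frontmatter get
    (None, text) and keep the v2.3 always-on behavior."""
    m = FRONTMATTER_RE.match(text)
    if m is None:
        return None, text   # no fence: legacy always-on skill
    return m.group(1), text[m.end():]
```

A skill file with no leading fence round-trips untouched, which is why no migration is needed.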
Triggers — gate injection on project content¶
triggers decides whether a skill is even a candidate for injection. The skill matcher checks the active project's source files at generation time:
| Field | What it matches against | Semantics |
|---|---|---|
| `imports` | Top-level `import X` and `from X import ...` statements across project files | OR within the list (any listed import matches) |
| `framework` | The active target (`pyside6`, `flet`, `customtkinter`, `python`) | OR within the list |
When both imports and framework are declared, BOTH must match. So this skill:

---
triggers:
  imports: [cv2]
  framework: [pyside6]
---

…only injects for a PySide6 project that actually uses cv2. Pure desktop or pure cv2 isn't enough — you need both.
Empty trigger lists count as no constraint (imports: [] means "don't gate on imports"). Trigger matching is case-insensitive.
Top-level imports only
Whittl's import matcher uses regex against ^(?:from|import). Imports nested inside a function or class are intentionally skipped — they're too noisy for trigger purposes. If your skill should fire when a project uses X, make sure X is imported at module level somewhere.
eager — full content vs lazy index¶
For tool-use backends (Claude, GPT, Gemini, OpenRouter), Whittl uses a lazy loading path that only injects skill content the AI actually needs:
- Eager skills inject in full at every relevant generation (the v2.3 behavior).
- Lazy skills appear in a SKILL INDEX at the end of the system prompt — just title + 1-line summary. The AI calls `load_skill(name)` to pull the body when relevant.
The default policy:
| Skill type | Default mode |
|---|---|
| No frontmatter | Eager (legacy back-compat) |
| Frontmatter without `eager` field | Lazy |
| Frontmatter with `eager: true` | Eager (forced) |
| `_auto_learned.md` | Always Eager (special-cased) |
Set `eager: true` to force a skill into the always-injected pool — useful for foundational rules you want the AI to see no matter what.
Combining `triggers` with `eager: true` means "only when relevant, but in full when it is":
---
triggers:
  imports: [requests]
eager: true
summary: "HTTP error patterns the autofix can't catch"
---
This drops out entirely when the project doesn't import requests, but injects in full when it does.
summary — what shows up in the index¶
For lazy skills, summary is what the AI sees in the SKILL INDEX. Keep it specific:
✅ Good summaries (concrete, signal what's covered):
summary: "OpenCV + numpy: BGR/RGB, dtype clipping, QImage stride, GPU round-trip"
summary: "PySide6: Qt enum scoping, theme/stylesheet conflicts, signal/slot setup"
summary: "Flet 0.28.3 critical rules: page.update(), threading, dialog APIs"
❌ Bad summaries (the AI can't tell when to load):
summary: "Misc helpful patterns"
summary: "Tips and tricks"
When summary is omitted, Whittl derives one from the first markdown heading in the file. It's better to write one explicitly.
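The heading-derived fallback is simple to picture. A sketch (the helper name and the fallback label are illustrative, not Whittl's actual implementation):

```python
def derived_summary(body, fallback="(no summary)"):
    """Fallback summary: the first markdown heading, minus '#' markers."""
    for line in body.splitlines():
        stripped = line.lstrip()
        if stripped.startswith("#"):
            return stripped.lstrip("#").strip()
    return fallback
```

A heading like `# cv2 + numpy Patterns` becomes the index line "cv2 + numpy Patterns", which tells the AI much less than a hand-written summary would.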
How tokens shake out¶
Concrete numbers for a typical 50-request session against a PySide6 image-editing project:
| Request type | v2.3 | v2.4+ | Savings |
|---|---|---|---|
| Trivial (typo, cosmetic) | ~2000 tokens of skills | ~125 tokens (index only) | 94% |
| Feature work, 1-2 skills relevant | ~2000 (capped) | ~875 | 56% |
| Heavy / multi-domain | ~2000 (silently truncated) | ~3375 (no truncation) | costs more — but no rule loss |
| Per-session avg | ~2000 | ~800 | ~60% |
The hidden quality win: v2.3's 8 KB cap silently truncated alphabetically-last skills when many were relevant. With lazy loading, the AI loads exactly what it needs in full — every relevant rule applies, no silent drops.
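The per-session average is just a weighted sum of the per-request costs above. A sketch; the 45/45/10 request mix below is an assumption chosen to land near the ~800 figure, not a measured distribution:

```python
# v2.4 per-request skill-token costs, taken from the table above.
COSTS_V24 = {"trivial": 125, "feature": 875, "heavy": 3375}

def session_average(mix, costs=COSTS_V24):
    """Weighted per-request skill-token cost for an assumed request mix."""
    return sum(costs[kind] * share for kind, share in mix.items())

# Assumed mix: 45% trivial, 45% feature work, 10% heavy/multi-domain.
avg = session_average({"trivial": 0.45, "feature": 0.45, "heavy": 0.10})
```

With that mix the average comes out to 787.5 tokens per request, i.e. roughly the ~800 shown in the table.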
Worked examples¶
A library skill that should only fire when the project actually uses the library:
---
triggers:
  imports: [pandas]
summary: "pandas patterns: SettingWithCopy warnings, dtype hygiene, performance gotchas"
---
# pandas Patterns
...
A framework-targeted skill that's only relevant for a specific target:
---
triggers:
  framework: [flet]
summary: "Flet mobile gotchas: tabs locking on Android, page.window vs page.width, with_opacity bg"
---
# Flet Mobile Patterns
...
A foundational always-on skill (rare — most should be lazy) simply declares `eager: true` with no `triggers` block.
A skill scoped to a specific stack (cv2 + PySide6 image work, like RedLight):
---
triggers:
  imports: [cv2, opencv-python]
  framework: [pyside6]
summary: "cv2 + Qt image handoff: QImage stride, BGR/RGB, GPU round-trip"
---
# cv2 + PySide6 Patterns
...
What gets created when you click "Add New"¶
The starter template now includes a commented-out frontmatter block that teaches the format:
<!-- Optional: uncomment and edit to make this skill
content-aware (only inject when relevant) and
lazy-loaded (AI pulls it on demand via load_skill).
Skills WITHOUT this block always inject — the
legacy v2.3 behavior.
---
triggers:
  imports: [package_one, package_two]
  framework: [pyside6]  # or flet, customtkinter
eager: false  # true = always inject in full
summary: "One-line description for the skill index"
---
-->
# my-new-skill
Uncomment the YAML block to wire your skill into content-aware injection. Leave it commented to keep legacy always-on behavior.
Triggers and the ecosystem¶
Skills you author with frontmatter are portable to Claude Code and OpenCode — Anthropic's SKILL.md format uses the same YAML conventions. The reverse is also true: any SKILL.md from those tools whose frontmatter fields Whittl recognizes will trigger correctly in Whittl.
Auto-learned skills¶
~/.whittl/skills/_auto_learned.md is special. Whittl writes to it automatically whenever an auto-fix rule successfully corrects AI-generated code. Each entry captures:
- Timestamp and backend that produced the original mistake
- A description of the pattern the fix applied
Example entries (from a real _auto_learned.md file):
### 2026-03-31 15:27 (gemini)
- **Pattern:** Replaced hallucinated Flet control names with correct ones
### 2026-03-31 15:27 (gemini)
- **Pattern:** Fixed ft.icons/ft.colors → ft.Icons/ft.Colors (required for mobile)
### 2026-04-18 12:30 (openrouter)
- **Pattern:** ui/styles.py: Added 23 missing imports (QAbstractItemView, QComboBox, QDoubleSpinBox...)
### 2026-04-20 14:13 (openrouter)
- **Pattern:** core/library.py: Removed QVariant conversion methods (not needed in Python)
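Each entry is a timestamped heading plus a pattern bullet, appended to the end of the file. A sketch of the writer under those assumptions (`record_auto_fix` is a hypothetical name; the real writer lives inside Whittl's auto-fix engine):

```python
from datetime import datetime
from pathlib import Path

def record_auto_fix(skills_dir, backend, pattern):
    """Append one auto-learned entry in the format shown above."""
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    entry = f"\n### {stamp} ({backend})\n- **Pattern:** {pattern}\n"
    path = Path(skills_dir) / "_auto_learned.md"
    with open(path, "a", encoding="utf-8") as fh:   # append, never overwrite
        fh.write(entry)
```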
Over a typical month of usage, expect 30–60 entries to accumulate organically. Each one represents a real mistake a model made and auto-fix caught — and now the AI sees that pattern on every future generation.
Auto-learned is the single best reason Whittl gets better over time
Most other AI coding tools are static — today's output quality is the same as day one's. Whittl's auto-learned file compounds. After three months of regular use, your installation has absorbed patterns specific to your backends and your kinds of projects. That's a unique-to-you knowledge layer.
When skills get injected¶
Skills are injected at the top of the system prompt before each generation. Whittl's skill matcher decides which skills are eligible based on:
- Target framework. PySide6 skills inject for PySide6 projects, Flet skills for Flet projects, etc. Comes from the filename prefix (`pyside6-*`) or an explicit `triggers.framework`.
- Project content (v2.4+). Skills with `triggers.imports` only inject when the project actually imports a listed library.
- Always-on skills. Skills without frontmatter, plus skills with `eager: true`, plus `_auto_learned.md`, inject for every relevant generation.
For tool-use backends, eligible-but-lazy skills land in a SKILL INDEX at the end of the system prompt. The AI calls `load_skill(name)` to pull bodies on demand.
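The index/loader pair can be pictured as two small functions. A sketch with hypothetical signatures (Whittl's actual tool wiring is backend-specific):

```python
def build_skill_index(lazy_skills):
    """lazy_skills maps name -> {"summary": str, "body": str}.
    Returns the text appended to the end of the system prompt."""
    lines = ["SKILL INDEX (call load_skill(name) for any entry you need):"]
    for name in sorted(lazy_skills):
        lines.append(f"- {name}: {lazy_skills[name]['summary']}")
    return "\n".join(lines)

def load_skill(lazy_skills, name):
    """Tool handler: return a skill's full body on demand."""
    skill = lazy_skills.get(name)
    return skill["body"] if skill else f"No skill named {name!r} in the index."
```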
You'll see a log line in the verbose output:
[SKILLS] Injected 1842 chars (1500 whittl, 342 claude-user) into system prompt: ['# Code Quality Rules', '# Auto-Learned Patterns']
The whittl / claude-user / claude-project breakdown (v2.4+) shows which discovery path each skill came from — useful when you're debugging surprise token usage from a Claude Code skill that started auto-injecting.
If you don't see a skill you expected, check:
- Is it disabled in the Skills dialog?
- Does its `triggers.imports` match anything actually imported in this project?
- Did its `triggers.framework` match the current target?
- Is it lazy (frontmatter present, no `eager: true`)? In that case it's in the index, waiting for the AI to call `load_skill`.
Effectiveness¶
Skills are most effective when they:
- Describe specific, recurring mistakes. "Don't use `QApplication.quit()` in a signal handler" is actionable. "Write clean code" is not.
- Pair WRONG and RIGHT code examples. Models learn much faster from side-by-side patterns than from prose.
- Scope narrowly. A skill titled "Flet GridView performance" that's 80 lines of text competes with other skills for the AI's attention. Split it into 2–3 shorter skills if needed.
- Match patterns the AI actually gets wrong. Look in `_auto_learned.md` to see what your backends consistently mess up — those are the highest-value targets for hand-authored skills.
Claude Code / OpenCode compatibility (v2.4+)¶
Anthropic's SKILL.md format (YAML frontmatter + markdown body) is becoming the cross-tool standard — Claude Code, OpenCode, and Whittl all read it. v2.4 extends the skill discovery path to include three locations in priority order:
1. `~/.whittl/skills/` (Whittl's own — wins on filename collisions to protect vetted defaults)
2. `~/.claude/skills/` (user-global Claude Code skills)
3. `./.claude/skills/` (project-local, checked into version control alongside your code)
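Filename-collision precedence is a first-writer-wins scan over the directories in priority order. A sketch (the function name is hypothetical; only the precedence rule is from the docs above):

```python
from pathlib import Path

def discover_skills(search_dirs):
    """Scan directories in priority order. The first directory to claim
    a filename wins, so earlier (higher-priority) paths shadow
    same-named skills from later ones."""
    chosen = {}
    for directory in search_dirs:
        directory = Path(directory)
        if not directory.is_dir():
            continue                       # missing paths are skipped
        for path in sorted(directory.glob("*.md")):
            chosen.setdefault(path.name, path)   # keep the first claimant
    return chosen
```

With `~/.whittl/skills/` first in the list, a `flet-rules.md` shipped with Whittl shadows any `flet-rules.md` found in a Claude Code location.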
Skills authored for one tool work in any of them unchanged. You don't maintain a separate library per tool.
A new Preferences toggle (AI Generation → Include Claude Code skill paths) defaults ON. Disable it to revert to v2.3-style Whittl-only discovery if you want token cost bounded to Whittl's own skills only. The injection log shows a per-source breakdown so you can spot foreign skills inflating the prompt.
Troubleshooting¶
Skills dialog shows my skill but it doesn't seem to apply
Check the [SKILLS] log line in verbose mode. If your skill isn't listed there, it wasn't matched for that generation. Reasons:
- Skill name doesn't match the framework (PySide6 skills don't inject for Flet projects)
- Skill is unchecked (disabled) in the dialog
- Skill file has a parse error (Whittl logs parse errors at startup)
_auto_learned.md is huge and the system prompt is getting too long
After a year of heavy use this file can grow to thousands of lines. Whittl truncates it at the head (keeping the most recent entries) if it exceeds a threshold, but you can also manually curate: review old entries and delete ones that no longer apply.
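Head truncation that keeps the newest entries can be sketched as follows; the function name and the 8000-character default are assumptions for illustration, not Whittl's actual threshold:

```python
import re

def truncate_auto_learned(text, max_chars=8000):
    """Drop the oldest '### ...' entries from the head of the file until
    it fits; the newest entries (appended at the tail) are kept."""
    if len(text) <= max_chars:
        return text
    # Entries begin at '### ' headings at the start of a line.
    starts = [m.start() for m in re.finditer(r"^### ", text, re.MULTILINE)]
    entries = [text[s:e] for s, e in zip(starts, starts[1:] + [len(text)])]
    kept, total = [], 0
    for entry in reversed(entries):        # walk newest-first
        if total + len(entry) > max_chars:
            break                          # oldest entries fall off the head
        kept.append(entry)
        total += len(entry)
    return "".join(reversed(kept))
```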
I accidentally deleted an important skill
Check ~/.whittl/skills/.trash/ — Whittl keeps deleted skills there as a recovery net.
What's next¶
- Auto-fix Rules — the rule engine that feeds `_auto_learned.md`
- Agent Mode — how skills interact with Agent Mode's extended tool loop
- Settings — the Skills dialog in context