A Mental Model for Every Concept
Too many concepts
Rules, MCP Servers, Modes, Hooks, Tools, Sub-agents, Commands, Skills.
Let's break them down, one by one.
Rules: "The Constitution"
Instructions that are included in every conversation
Codebase conventions, business logic, common pitfalls
Multiple rule files get merged into one block of context
Stored in version control, right next to your code
"What should the agent always know, regardless of the task."
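Concretely, a rules file is usually just a markdown file checked into the repo. The filename and contents below are hypothetical — each tool has its own convention — but the shape is typical:

```markdown
<!-- rules.md — hypothetical rules file, stored next to your code -->
## Conventions
- Use TypeScript strict mode; no `any`.
- All database access goes through the repository layer.

## Pitfalls
- Never run migrations against the production database.
- The staging API rate-limits aggressively; batch requests.
```

Every conversation starts with this text already in context, which is why the tips below say to keep it short.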
Tools: "Building Blocks"
The basic functions the agent can call
Read files, write files, run shell commands, search code
Each tool = one small action the model can take
This is what turns a chatbot into an agent
"The building blocks. Everything else is made from these."
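The tool loop can be sketched in a few lines. This is a minimal illustration, not any particular agent's API: each tool is a named function, and the agent dispatches whichever call the model chooses. A real agent would get `tool_name` and `args` from an LLM response; here they are hard-coded.

```python
# Minimal sketch of the tool-calling loop at the core of a coding agent.
from pathlib import Path
import subprocess

# Each tool = one small, named action the model can request.
TOOLS = {
    "read_file": lambda path: Path(path).read_text(),
    "write_file": lambda path, text: Path(path).write_text(text),
    "run_shell": lambda cmd: subprocess.run(
        cmd, shell=True, capture_output=True, text=True
    ).stdout,
}

def agent_step(tool_name: str, **args):
    """Dispatch one model-chosen tool call and return the result."""
    return TOOLS[tool_name](**args)

# Example: the model decided to run a shell command.
print(agent_step("run_shell", cmd="echo hello"))
```

Everything later in this list — commands, skills, MCP servers, sub-agents — ultimately bottoms out in calls like these.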
Commands: "Shortcuts"
Ready-made prompts you can trigger on demand
Invoked via slash prefix: /review, /test, /deploy
The human presses the button, not the LLM
Stored in git, shared across your team
"A macro for your AI agent. One trigger, a whole workflow."
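In several agent tools, a command is just a prompt file committed to the repo. The path and contents below are a hypothetical example of the pattern:

```markdown
<!-- .agent/commands/review.md — hypothetical command file -->
Review the staged diff. Check for:
1. Bugs and missing error handling
2. Violations of our codebase conventions
3. Missing tests for changed logic
```

Typing /review pastes this whole workflow into the conversation — the human presses the button, the file supplies the prompt.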
MCP Servers: "The Universal Adapter"
Standardized protocol to connect agents to external systems
Handles login and authentication for you
Gives your agent access to Slack, Jira, GitHub, Postgres, etc.
Each server = a set of new tools the agent can use
"USB ports for your agent. Plug in any external service."
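Wiring up an MCP server is usually a small config entry. The exact file location and fields vary by client, so treat this as a sketch of the common shape:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_TOKEN": "..." }
    }
  }
}
```

Once registered, the server's tools show up alongside the agent's built-in ones — the agent can't tell the difference.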
Skills: "Bundled Expertise"
Multi-step workflows built from tools + prompts
Can include scripts, executables, and MCP connections
Only loaded when needed, zero cost when not in use
Easy to package, share across teams and projects
"A tool lets the agent write a file. A skill lets it build a whole feature."
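A skill is often packaged as a folder with a short manifest describing when to use it. The layout below is a hypothetical sketch of the pattern:

```markdown
<!-- skills/new-endpoint/SKILL.md — hypothetical skill layout -->
---
name: new-endpoint
description: Scaffold a REST endpoint with handler, validation, and tests
---
1. Run the scaffolding script with the endpoint name
2. Fill in the handler following our codebase conventions
3. Run the test suite and fix any failures
```

Because only the name and description are loaded up front, a large library of skills costs almost nothing until one is actually invoked.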
Modes: "Different Hats"
Change how the agent behaves for a given task
Swap out prompts, tools, and guardrails
Give it more or less freedom depending on the job
Easy to switch from the editor UI
"Same agent, different personality for the job at hand."
Sub-agents: "The Team"
Spin up smaller AI workers for specific subtasks
Each one gets its own role, scope, and limited tools
Can run in parallel for complex refactors
Parent agent coordinates and combines the results
"A manager who delegates tasks to specialized workers."
Hooks: "Event Listeners"
User-defined scripts that run at specific lifecycle events
Pre-run: fetch context, lint, validate input
Post-run: auto-format, run tests, notify, deploy
100% deterministic in a non-deterministic system
"Git hooks, but for your AI agent."
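A hook configuration tends to look something like this — the field names here are hypothetical, since every tool defines its own events and format:

```json
{
  "hooks": {
    "pre_run": "./scripts/fetch_context.sh",
    "post_edit": "prettier --write {file} && npm test"
  }
}
```

The key property: these commands run every time the event fires, whether or not the model "remembers" to do them.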
For any concept, just ask:
Who decides? Does the LLM choose to use it, or does a human or script trigger it?
When is it active? Always on, on demand, or on a specific event?
Is it deterministic? Same input, same output, every time?
But what if they all fit into just 3 buckets?
Every concept fits into one of these.
1. Who the agent is. Always there, shapes every response.
2. What the agent can do. Loaded when needed, picked by the LLM.
3. Hard-coded triggers that bypass LLM decision-making entirely.
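The grouping can be written down literally. The bucket assignments below are my reading of the taglines in each section above (always-on context, LLM-chosen capabilities, human- or event-triggered automation):

```python
# Mapping each of the 8 concepts to one of the three buckets.
BUCKETS = {
    "who the agent is": ["Rules", "Modes"],
    "what the agent can do": ["Tools", "MCP Servers", "Skills", "Sub-agents"],
    "hard-coded triggers": ["Commands", "Hooks"],
}

# Every concept from the start of the article appears exactly once.
ALL_CONCEPTS = [c for group in BUCKETS.values() for c in group]
assert len(ALL_CONCEPTS) == len(set(ALL_CONCEPTS)) == 8
```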
Keep rules short. Every token costs attention.
When the agent makes a mistake, add a rule for it.
Use modes to switch between planning vs. coding.
Start by converting your most-used shell commands into skills.
Look for things you repeat: scaffolding, linting, deploys.
Only add more when the agent clearly can't do without it.
Use hooks for things that must happen every time.
Commands are shortcuts, not intelligence. Keep them simple.
If it's critical, make it a hook. Don't hope the LLM remembers.
AI coding agents have many concepts. You only need to remember 3 groups.
Start simple. Add complexity only when the agent clearly fails without it.