Different models for different kinds of thinking
Musketeer is a coordination and handoff discipline for working with multiple AI models. It exists because forcing a single model to do everything is a mistake.
One model holds your conversation and forms intent. Another executes bounded tasks. A third observes and reports. Each does what it is good at. The human decides when to hand off.
The trio
Originator
Conversational. Holds the long-running context. Forms intent. Refines constraints. Prepares handoffs.
Executor
Bounded. Receives clear instructions. Produces artifacts. Does what it is told, within constraints.
Cross-Examiner
Observational. Reviews output against intent. Reports what it sees. Does not fix, only observes.
What this is about
Musketeer is not about automation. It is about clarity before action. Accountability between models. Minimizing wasted tokens. Choosing the right model for the right kind of thinking.
The site explains a way of working learned through daily practice, not benchmarks. If you have felt that something is wrong with how you use AI but could not name it, this may be the articulation you need.
- Why this exists - the crack that practitioners feel
- Cost and cognitive load - why tokens and attention matter
- How it feels in practice - the lived workflow
- Relationship to other tools - boundaries and integration
The CLI
Musketeer includes a CLI tool that supports this way of working. It helps you structure handoffs, track intent, and maintain clarity across models.
The CLI is the mechanical implementation; this site defines the philosophy. Start with the philosophy, and the tooling will make sense afterward.