001 · Architecture
Why agents need a Canon before they need a prompt.
Every team that buys its first agent asks the same question: what's the prompt? They treat the prompt as the instruction layer, the thing the agent reads to know what to do. That's the wrong mental model, and it quietly breaks every deployment that scales past three workflows.
The prompt is a trigger. The Canon is the instruction. The distinction matters because it determines whether the system compounds or collapses the first time you add a second agent to the pod.
What a Canon actually is.
A Canon is the structured, versioned, governed body of operating principles, domain knowledge, business context, and decision boundaries that every agent in the system reads from before it does anything else. Prompts call the Canon. They don't replace it.
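The split can be made concrete in a few lines. This is a minimal sketch, not a real framework: the `CANON` structure, the section ids, and `build_prompt` are all illustrative names. The point is only the shape: the prompt carries the trigger, the Canon supplies the instruction.

```python
# Hypothetical Canon store: versioned, with named sections.
CANON = {
    "version": "1.0",
    "sections": {
        "principles": "Always cite the source record for any customer-facing number.",
        "boundaries": "Never issue refunds above $500 without human approval.",
    },
}

def build_prompt(trigger: str, section_ids: list[str]) -> str:
    """The prompt is the trigger; the referenced Canon sections are the instruction."""
    context = "\n".join(
        f"[canon v{CANON['version']} / {sid}] {CANON['sections'][sid]}"
        for sid in section_ids
    )
    return f"{context}\n\nTask: {trigger}"

print(build_prompt("Draft a refund reply for ticket #4821", ["principles", "boundaries"]))
```

Notice that the prompt never restates a principle. It names the sections it needs, and the Canon is the single place those sections live.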
If you've deployed agents in production for any length of time, you've already felt the absence of one. It shows up as drift. Two agents give contradictory answers on the same question. A third agent forgets a constraint the first one was following. The team starts pasting the same context into every prompt, which is the tell.
The moment you paste the same paragraph into two different prompts, you have discovered the need for a Canon. The only question is whether you'll build one intentionally or accumulate one accidentally.
Why prompts-first breaks.
Prompts scale poorly because they live inside the workflow that calls them. Change the workflow, rewrite the prompt. Add an agent, and you write a new prompt that has to carry all the same operating principles as the first, or the two agents disagree. Multiply by twenty workflows and you have a maintenance problem that eats your build capacity.
A Canon inverts the relationship. The operating principles live once, in one place. Prompts reference them. Change a principle in the Canon and every agent, every workflow, every decision inherits the change on the next run.
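The inheritance is easy to demonstrate. In this hypothetical sketch (the agent names and the `tone` principle are invented for illustration), each agent's prompt is built from the Canon at run time, so a single edit reaches every agent on its next run:

```python
# One Canon, many agents. Everything here is illustrative scaffolding.
canon = {"tone": "Plain language. No jargon."}

def agent_prompt(agent_name: str, task: str) -> str:
    # The prompt references the Canon at build time, so a Canon edit
    # reaches every agent on its next run -- no prompt rewrites.
    return f"[{agent_name}] {canon['tone']}\nTask: {task}"

before = agent_prompt("support-bot", "Explain the invoice")
canon["tone"] = "Plain language. Cite the policy section."  # one edit, one place
after = agent_prompt("support-bot", "Explain the invoice")
```

The same edit would reach a `billing-bot`, a `triage-bot`, and every workflow in between, because none of them carry their own copy of the principle.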
The PE-readiness problem.
There's a second reason this matters, and it's the one that actually sells the architecture to boards. Canons are auditable. Prompts are not. When a board member asks why the system made a particular decision, the answer "because the prompt said so" is not an answer. The answer "because Canon v2.4 section 3 defined the decision boundary, and here is the trace" is an answer.
This is not a theoretical concern. PE-backed deployments live or die on governance. The firms funding AI transformation mandates in 2026 have been burned by enough opaque systems to know that unauditable automation is a liability on the balance sheet. A Canon-first architecture is not a technical preference. It is a fiduciary one.
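What an auditable answer looks like in practice is a structured trace that names the Canon version and section behind each decision. A minimal sketch, with assumed field names and an invented example decision:

```python
import json

def record_decision(action: str, canon_version: str, section: str, rationale: str) -> str:
    """Return a trace entry a reviewer (or a board) can replay later.
    The schema here is an assumption, not a standard."""
    return json.dumps({
        "action": action,
        "canon_version": canon_version,
        "canon_section": section,
        "rationale": rationale,
    })

trace = record_decision(
    action="refund_escalated_to_human",
    canon_version="2.4",
    section="3",
    rationale="Amount exceeded the $500 boundary defined in Canon v2.4 section 3.",
)
```

"Because the prompt said so" leaves nothing to inspect. A trace like this points at a versioned artifact that existed before the decision was made.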
Where to start.
Before you write another prompt, open a document. Call it Canon. Write down the three to five operating principles that govern every decision your agents will make. Write down the facts about the business they need to know. Write down the decision boundaries they cannot cross. Commit it to a repository. Version it.
Then rewrite your prompts to read from it. Not inline. By reference. You will feel the difference the first time you change a single principle and watch every agent in the pod update at once. That is what compounding looks like.
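The whole loop, end to end, fits in a short sketch. The file name, the JSON layout, and the two principles below are assumptions chosen for illustration; the structure is what matters: the Canon lives in one versioned file, and prompts read from it by reference instead of pasting it inline.

```python
import json
import pathlib
import tempfile

# A stand-in for the committed, versioned Canon file in your repository.
canon_path = pathlib.Path(tempfile.mkdtemp()) / "canon.json"
canon_path.write_text(json.dumps({
    "version": "0.1",
    "principles": ["Escalate anything ambiguous.", "Never invent account data."],
}))

def prompt_from_canon(task: str) -> str:
    # By reference, not inline: the prompt re-reads the Canon on every build,
    # so editing canon.json updates every prompt on the next run.
    canon = json.loads(canon_path.read_text())
    header = "\n".join(f"- {p}" for p in canon["principles"])
    return f"Canon v{canon['version']}:\n{header}\n\nTask: {task}"
```

Change a principle in `canon.json` and every prompt built by `prompt_from_canon` inherits it immediately, with no prompt edited at all.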