Thinking in Promises: Designing Systems for Cooperation

Mark Burgess

28 highlights
bigideas-concepts agentic-concepts 2026-roadmap-reflection agentic-philosophy-traces agentic-design-patterns agentic-ambient-agents agentic-insight software-design

Highlights & Annotations

Burgess opens with a disarmingly ambitious premise: “Imagine a set of principles that could help you understand how parts combine to become a whole, and how each part sees the whole from its own perspective. If such principles were any good, it shouldn’t matter whether we’re talking about humans in a team, birds in a flock, computers in a data center, or cogs in a Swiss watch.” This is not mere analogy-hunting. Promise Theory aspires to be a universal grammar of cooperation — one language that makes no ontological distinction between a person volunteering for a task and a microservice declaring an API endpoint. The implication is radical: the mechanisms that make a DevOps pipeline reliable are not merely similar to the mechanisms that make a human organization functional — they are, at some formal level, the same. If this is true, then the walls we build between “management theory” and “systems engineering” are not just unhelpful but actively misleading. Every coordination problem, from marriage to microservices, reduces to the same question: what has each part promised, and to whom?

#1 The Universal Solvent: A Single Grammar for Humans and Machines [Principle]

Ref. 811E-A

“The goal of Promise Theory is to reveal the behaviour of a whole from the sum of its parts, taking the viewpoint of the parts rather than the whole. In other words, it is a bottom-up constructionist view of the world.” Most organizational and engineering methodologies begin at the top — with requirements, architecture diagrams, org charts — and then decompose downward. Burgess inverts this entirely. The bottom-up stance is not a stylistic preference but an epistemic necessity: the parts always know more about their own capacities than any central planner can. This mirrors the insight from complexity science that emergent behavior cannot be predicted from top-down specification alone. When we design systems top-down, we are essentially writing fiction about what the parts will do. When we design bottom-up from promises, we are documenting what the parts can actually deliver. The difference is the difference between a wish and a warranty.

#2 The Copernican Inversion: Bottom-Up as the Only Honest Direction [Principle]

Ref. 06E9-B

Burgess contrasts imperative instructions (“Wash the floor with agent X. Mop and brush the bowls.”) with promise formulations (“I promise that the floor will be clean and dry after hourly checks.”). The shift seems cosmetic until you notice what changes: the imperative version prescribes process; the promise version declares outcome. As Burgess writes, commands “tell something how to behave instead of what we want (i.e., they document the process rather than the outcome), and they force you to follow an entire recipe of steps before you can even know what the intention was.” This is the restroom as epistemological parable. When you specify process, the worker becomes a machine executing steps; when you specify outcome, the worker becomes an agent exercising judgment. The promise formulation implicitly grants autonomy — and in doing so, it also assigns responsibility.

#3 The Restroom Revelation: Outcomes over Algorithms [Revelation]

Ref. 16AA-C
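
Burgess's contrast can be sketched in a few lines of Python. This is only an illustration of the distinction, not anything from the book; the function and key names are invented:

```python
# Imperative formulation: a recipe of steps. Intent is implicit; you
# must follow the whole procedure before you know what it was for.
def clean_floor_imperative(floor: dict) -> None:
    floor["scrubbed_with"] = "agent X"   # prescribes the how
    floor["clean"] = True
    floor["dry"] = True

# Promise formulation: a declared outcome that any observer can assess.
# How the floor gets into this state is the promiser's own business.
DESIRED = {"clean": True, "dry": True}

def promise_kept(floor: dict) -> bool:
    return all(floor.get(k) == v for k, v in DESIRED.items())
```

Note that the promise side says nothing about process: a different cleaner, or a different script, can keep the same promise a different way, and `promise_kept` still tells you whether the intention was met.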

“Commands fan out into unpredictable outcomes from definite beginnings, and we go from a certain state to an uncertain one. Promises converge towards a definite outcome from unpredictable beginnings, leading to improved certainty.” This is perhaps the chapter’s most visually powerful idea. A command is a trajectory launched from a known starting point into an unknown future — a cannon shot. A promise is a gravitational attractor pulling from an unknown present toward a declared destination — a homing signal. In distributed systems, the starting state is almost always unknown or stale by the time a command arrives. The promise model acknowledges this: it does not care where you are, only where you are going.

#4 The Arrow of Certainty: Why Promises Converge and Commands Diverge [Wisdom]

Ref. 9C7C-D
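
The convergence idea has a direct software analogue in convergent, desired-state configuration of the kind Burgess pioneered in CFEngine. A minimal sketch, with illustrative state keys:

```python
# A convergent operator in miniature: whatever the (unknown) starting
# state, a repair pass moves the system toward the promised outcome,
# and is a no-op once the promise is already kept.
PROMISED = {"service": "running", "config_version": 42}

def converge(state: dict) -> dict:
    """Repair only what deviates from the promised state (idempotent)."""
    for key, want in PROMISED.items():
        if state.get(key) != want:
            state[key] = want   # the repair action for this deviation
    return state

# Three unpredictable beginnings, one definite outcome:
for start in ({}, {"service": "crashed"}, dict(PROMISED)):
    assert converge(dict(start)) == PROMISED
```

The command model has no analogue of this loop: a command presumes a known start, while `converge` does not care where you are, only where you are going.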

“Promises are local, whereas obligations are distributed (nonlocal).” Burgess draws an explicit parallel to physics: the reason promises are more fundamental than obligations is the same reason local interactions are more fundamental than action-at-a-distance in modern physics. A promise is made by the agent that controls the relevant behavior — causation and declaration are co-located. An obligation, by contrast, originates externally and must traverse the gap between the obliger and the obliged. That gap is where information loss, misunderstanding, and enforcement failure live.

#5 The Locality Principle: Promises as Applied Physics [Principle]

Ref. 3A95-E

“An agent is autonomous if it controls its own destiny (i.e., outcomes are a result of its own directives, and no one else’s).” Burgess argues that dividing the world into autonomous parts gives us “a head start on causality.” This is a profound reframing: autonomy is not a political ideal or a luxury of trust — it is an engineering necessity. When something goes wrong in a system of autonomous promise-keepers, you know the fault lies within the agent that broke its promise. Autonomy is not the absence of coordination; it is the precondition for diagnosable coordination. The autonomy dividend is debuggability.

#6 The Autonomy Dividend: Self-Control as the Foundation of Reliability [Wisdom]

Ref. B87B-F

“Every possible observer, privy to relevant information, always gets to make an independent assessment of whether a promise was kept or not.” Burgess illustrates this with Alice, Bob, and Carol: Alice promises Bob she paid him; Carol witnessed the transfer but Bob hasn’t checked his account. Each observer holds a different fragment of evidence and therefore reaches a different (equally valid) assessment. This is not relativism — it is perspectivism grounded in information theory. The implications for system design are immediate: monitoring, alerting, and auditing are not neutral reflections of reality but observer-specific assessments based on the information available to each monitoring agent.

#7 The Observer’s Throne: Assessment as Independent Judgment [Principle]

Ref. 84C1-G
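
The Alice, Bob, and Carol example can be modeled directly: every observer runs the same assessment over a different evidence set. A sketch, with invented evidence labels:

```python
from enum import Enum

class Assessment(Enum):
    KEPT = "kept"
    NOT_KEPT = "not kept"
    UNKNOWN = "unknown"

def assess(evidence: set) -> Assessment:
    """Each observer judges only from the evidence it actually holds."""
    if "saw_transfer" in evidence or "balance_updated" in evidence:
        return Assessment.KEPT
    if "balance_unchanged" in evidence:
        return Assessment.NOT_KEPT
    return Assessment.UNKNOWN

# Alice promises Bob she paid him:
alice = assess({"saw_transfer"})   # she initiated the transfer
carol = assess({"saw_transfer"})   # she witnessed it
bob   = assess(set())              # he hasn't checked his account yet
```

Three equally valid assessments of one promise, differing only in the information available to each observer: exactly the situation of a fleet of monitoring agents watching one service.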

Burgess observes that because “each autonomous agent has its own independent view,” agents “form expectations independently” and can “make judgements without waiting to verify outcomes. This is how we use promises in tandem with trust.” Promises do not eliminate uncertainty — they structure it. When an agent makes a promise, other agents can form expectations and proceed without waiting to verify. This is the very mechanism of trust: acting on an unverified promise because the cost of waiting for certainty exceeds the risk of proceeding on expectation. The key insight is that trust is not a feeling but a computational strategy — a way of reducing coordination overhead by substituting local expectation for global verification.

#8 The Expectation Engine: How Promises Enable Trust Without Verification [Wisdom]

Ref. 3EA0-H

“If you imagine a cookbook, each page usually starts with a promise of what the outcome will look like (in the form of a seductive picture), and then includes a suggested recipe. It does not merely throw a recipe at you, forcing you through the steps to discover the outcome on trust. It sets your expectations first. In computer programming, and in management, we are not always so helpful.” This is a deceptively simple observation that indicts vast quantities of technical documentation, project plans, and software specifications. How many Jira tickets describe a sequence of steps without ever stating what the world should look like when they are done? The cookbook principle says: always lead with the promise (the photo of the finished dish), then offer the recipe as one possible path.

#9 The Cookbook Principle: Setting Expectations Before Prescribing Steps [Strategy]

Ref. 60C5-I

Burgess quotes the service-desk wisdom: “Don’t tell me what you are doing, tell me what you are trying to achieve! What you are actually doing might not be at all related to what you are trying to achieve.” Promises fit naturally with the idea of services precisely because a service is a declared capability, not a described process. When a team member says “I’m refactoring the authentication module,” that is a description of activity. When they say “I promise the login flow will handle SSO within two weeks,” that is a promise. The first is opaque to assessment; the second is transparent. The shift from activity-reporting to promise-making is a shift from information-hiding to information-sharing.

#10 The Service Epiphany: Don’t Tell Me What You’re Doing [Strategy]

Ref. 9CEF-J

Burgess illustrates with the “tomaetoe/tomahtoe” parable: an American mother and English father each impose their pronunciation on a child. “Because the source of intent is not the child, there is nothing the child can do to resolve the conflict; the problem lies outside of her domain of control.” Obligations are nonlocal — they originate outside the agent that must comply — and this nonlocality makes conflicts structurally unresolvable by the obligated party. In promise-land, the child simply promises to say one thing to Mum and another to Dad, resolving the conflict locally. Every time you impose a requirement on a component from the outside, you are potentially creating a conflict that the component cannot solve.

#11 The Nonlocality Trap: Why Obligations Create Unresolvable Conflicts [Pattern]

Ref. F409-K

“Even a computer follows instructions only because it was constructed voluntarily to do so. If we change that promise by pulling out its input wires, it no longer does.” Burgess dismantles the illusion that any system — human or mechanical — can be forced into compliance. What appears as forced compliance is actually voluntary cooperation with an external signal. The computer obeys because its circuits were designed to obey; the soldier obeys because she has internalized a promise to follow orders. Remove the underlying promise (pull the wire, break the oath), and the compliance vanishes. Systems that depend on force are systems that depend on an invisible, undocumented promise to accept force.

#12 The Illusion of Force: Even Obedience Is Voluntary [Revelation]

Ref. 4F44-L

“A requirement is an obligation from a place of high-level generalization onto a place of more expert execution. There is an immediate information gap or disconnect between requirer and requiree. The important information about the likely outcome is at the wrong end of that gap.” This is a devastating critique of traditional requirements engineering. The person who writes the requirement knows the least about what is actually possible; the person who must fulfill it knows the most. Requirements flow downhill from ignorance to expertise, while the information needed to evaluate feasibility flows uphill from expertise to ignorance. The promise perspective is not just more honest — it is more informed, because it originates where the information actually lives.

#13 The Information Gap: Requirements as Fiction Written at the Wrong Address [Revelation]

Ref. C41C-M

“Promise Theory is also a kind of atomic theory. It encourages us to break problems down into a table of elements (basic promises), from which any substantial outcome can be put together like a chemistry of intentions.” Burgess draws a parallel between promises and chemical elements: just as you cannot invent a new element by wishing, you cannot invent new capabilities by imposing requirements. “Imagine designing a plane that requires a metal with twice the strength of steel but half the weight of aluminium. You can try writing to Santa Claus to get it for Christmas, but the laws of physics sort of get in the way.” The atomic metaphor disciplines ambition: instead of dreaming about what we want, we inventory what is actually promised by the components available to us.

#14 The Atomic Metaphor: Promises as the Periodic Table of Intent [Principle]

Ref. 8899-N

“When you work from the bottom up, you have no choice but to know where things are because you will need to document every assumption with an explicit promise. Thus, a promise approach forces a discipline.” Top-down thinking allows you to make assumptions about capabilities that may not exist in the places you assume they do. Bottom-up promise-thinking forbids this: every assumption must be backed by a declared promise from a specific agent. This forced explicitness is not bureaucracy — it is epistemic hygiene. In distributed systems, the most common source of failure is undocumented assumptions. Promises make these invisible dependencies visible.

#15 The Discipline of Documentation: Promises Force Explicitness [Strategy]

Ref. 1433-O

“Thinking in promises also makes you think about contingency plans. What if your first assumptions fail?” This observation, almost offhand in the text, captures a crucial secondary effect of the promise mindset. When you command someone to do something, you implicitly assume success — the command model has no natural place for failure. When you promise something, you immediately confront the question of what happens when the promise is broken. Every promise carries its shadow: the possibility of non-fulfillment. This built-in awareness of fragility is what makes promise-based systems more resilient than command-based systems.

#16 The Contingency Imperative: Promises Make You Plan for Failure [Strategy]

Ref. D274-P

“Semantics are about how we interpret something: what does it mean, what is its function, what significance do we attach to it? Semantics are subjective (i.e., in the eye of the beholder); hence one agent might assess a promise to be kept, while another assesses it to be not kept, based on the same dynamical data.” This distinction is the philosophical backbone of the chapter. Two monitoring systems can observe the same 99.5% uptime metric and reach opposite conclusions — one seeing a promise kept, the other seeing it broken — because they attach different meaning to the same measurement. No system design is complete without specifying not just what will be measured but what those measurements mean to each observer.

#17 The Semantics-Dynamics Divide: Meaning Is Always in the Eye of the Beholder [Principle]

Ref. 3529-Q
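
The uptime example renders naturally as code: one shared measurement, two observer-specific meanings. A sketch, assuming each monitor's semantics reduces to an SLO threshold:

```python
# Same dynamical data, different semantics: each observer attaches its
# own meaning (here, an SLO threshold) to the same measurement.
uptime = 0.995  # the shared, objective measurement

def assessed_as_kept(measurement: float, slo: float) -> bool:
    """One observer's semantics for the availability promise."""
    return measurement >= slo

internal_monitor = assessed_as_kept(uptime, slo=0.99)    # promise kept
customer_monitor = assessed_as_kept(uptime, slo=0.999)   # promise broken
```

The dynamics (`uptime`) are identical; only the semantics (`slo`) differ, and the assessments come out opposite. Alerting disagreements between teams are often exactly this, left implicit.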

“We call this voluntary cooperation. For humans, the economics are social, professional, and economic.” Burgess insists that all cooperation is ultimately voluntary, even in hierarchical command structures. Military obedience is not forced compliance but “a consensus to voluntarily follow orders.” If you want a system (human or machine) to cooperate reliably, you must understand and maintain the incentive structure that makes cooperation worthwhile. Pulling out the incentive is equivalent to pulling out the wire. The manager who says “just make it happen” is not invoking a mechanism of force; she is invoking an undocumented promise from the employee to comply — a promise that can be withdrawn at any time.

#18 The Voluntary Foundation: Cooperation Cannot Be Coerced Into Existence [Wisdom]

Ref. 6C3E-R

“Promise Theory is rather good at resolving conflicts because an agent can only conflict with itself, hence all the information to resolve the conflicts is located in the same place.” This is perhaps the most elegant structural consequence of promise-based thinking. In obligation-based systems, conflicts arise between agents, and resolution requires a mediator with access to both perspectives. In promise-based systems, the only possible conflict is an agent making contradictory promises to itself, and since all the information is local to that agent, resolution is always tractable. When you find yourself mediating between conflicting requirements, the promise response is to push the conflict down to the agent that must actually deliver.

#19 The Conflict Localization Theorem: Self-Conflict as the Only Kind That Matters [Principle]

Ref. BA24-S

“The promise position is an extreme position, one that you might object to on some grounds of convention. It is because it is an extreme position that it is useful. If we assume this, we can reconstruct any other shade of compliance with outside influence by documenting it as a promise. But once we’ve opened the door to doubt, there is no going back. That’s why this is the only rational choice for building a theory that has any predictive power.” Burgess defends the radical stance not as ideology but as methodological necessity. By starting from the extreme assumption that no agent can be forced to do anything, and that all cooperation is voluntary, you create a model that can represent every possible degree of compliance as a special case. The extreme position is not a political statement — it is the logical foundation that gives the theory its generality.

#20 The Extreme Position: Why Radicalism Is the Only Rational Starting Point [Wisdom]

Ref. D065-T

Burgess’s commands vs. promises distinction is the workflow vs. agentic distinction, translated from distributed systems into LLM architecture. A workflow is exactly what Burgess calls an imperative recipe: “Wash the floor with agent X. Mop and brush the bowls.” It prescribes process, and you must follow the entire sequence before you know if the intent was achieved. The workflow designer sits at the top, writing requirements (obligations) that flow downward onto execution nodes that may or may not be able to fulfill them. An agentic system is closer to Burgess’s promise model: you give the agent an outcome (“I promise the floor will be clean”), and the agent — possessing local knowledge of its own capabilities, tools, and current state — autonomously decides how to achieve it. Commands map to workflow steps; promises map to agent tool-use decisions. Top-down requirements map to orchestrators prescribing steps; bottom-up promises map to agents reasoning about their own capabilities.

#21 The Core Isomorphism: Commands vs Promises IS Workflows vs Agents [Principle]

Ref. B96B-U

Burgess’s most visually powerful idea — that commands diverge from known starts into unknown outcomes, while promises converge from unknown starts toward known outcomes — maps directly onto the brittleness problem of workflows. A workflow launches from a known starting state along a predetermined path. But if any step encounters unexpected state — a malformed API response, an ambiguous user input, a tool returning something the flowchart didn’t anticipate — the trajectory diverges into unhandled territory. This is exactly why agentic systems may fail gracefully where workflows fail catastrophically. The workflow is a cannon shot. An agentic system doesn’t care where it starts. It has a goal (the promise) and iteratively corrects toward it. If one tool call fails, it reasons about the failure and tries an alternative. The agent’s observe-plan-act loop is precisely the mechanism by which promises self-repair. Convergence is not a metaphor — it is the architectural property that makes agents robust.

#22 The Arrow of Certainty Applied: Why Workflows Are Cannon Shots and Agents Are Homing Signals [Wisdom]

Ref. FA34-V
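
The convergent observe-plan-act loop might be sketched as follows; the tools, error type, and goal check are hypothetical stand-ins, not any framework's API:

```python
# The agent holds the goal (the promise) locally and iterates toward
# it, trying an alternative action when one fails.
def run_agent(goal_met, tools, max_steps=10):
    state = {}
    for _ in range(max_steps):
        if goal_met(state):            # observe: have we converged?
            return state
        for tool in tools:             # plan: pick a viable action
            try:
                state = tool(state)    # act
                break
            except RuntimeError:
                continue               # tool failed: reason, try another
    raise TimeoutError("promise not kept within step budget")

def flaky_search(state):
    raise RuntimeError("malformed API response")

def backup_search(state):
    return {**state, "answer": 42}

result = run_agent(lambda s: "answer" in s, [flaky_search, backup_search])
```

A workflow encoding `flaky_search` as step 4 of 7 would have diverged at the malformed response; the loop converges because failure handling and the goal live in the same place.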

Burgess writes: “Promises are local, whereas obligations are distributed (nonlocal).” This is the architectural reason agentic systems are more robust than workflows for open-ended tasks. In a workflow, the intelligence about what should happen is nonlocal — it lives in the orchestrator (Temporal, Step Functions, the DAG designer), not in the execution nodes. When something goes wrong at step 4 of 7, the execution node doesn’t know why it was called or what the overall goal is. It cannot self-correct because the intent is elsewhere. This is Burgess’s nonlocality trap: “the problem lies outside of her domain of control.” In an agentic system, the intelligence is local to the agent. The LLM holds the goal, the context, the reasoning, and the tool-use capability in a single place. When something goes wrong, the agent has all the information needed to reason about the failure and attempt repair. This is the autonomy dividend: debuggability and self-repair emerge from co-locating intent and execution.

#23 The Locality Principle Is Everything: Why Agents Self-Repair and Workflows Cannot [Principle]

Ref. 708B-W

Burgess’s critique — “A requirement is an obligation from a place of high-level generalization onto a place of more expert execution. The important information about the likely outcome is at the wrong end of that gap” — is a devastating indictment of workflow design for complex tasks. The person designing the workflow is making decisions about execution order, error handling, branching logic in advance, without access to the runtime context the agent will have. They are the requirer writing requirements onto the requiree (the execution engine), and the information about what will actually work is at the wrong end of the gap. An agentic system collapses this gap entirely: the entity deciding what to do next (the LLM) is the same entity that has access to the current state, the tool outputs, and the goal. The decider and the doer are co-located. This is not just a convenience — it is an information-theoretic advantage that no amount of workflow branching logic can replicate.

#24 The Information Gap: Workflow Designers Write Fiction About Runtime [Revelation]

Ref. 1152-X

Burgess says something crucial: “The promise position is an extreme position… If we assume this, we can reconstruct any other shade of compliance with outside influence by documenting it as a promise.” This suggests that the workflow-vs-agent debate is a false binary. A workflow is a system where agents have pre-committed to a specific set of promises in a specific order — autonomy exercised at design time and then crystallized. An agentic system is a system where agents make promises at runtime, dynamically, based on current state — autonomy exercised continuously. The Anthropic hybrid pattern (workflows for predictable subtasks, agents for open-ended reasoning) maps perfectly: some promises are best made in advance (deterministic steps), others must be made in the moment (reasoning under uncertainty). The real question is not “workflow or agent?” but “where should autonomy live and when should it be exercised?”

#25 The False Binary: Workflows Are Crystallized Promises, Agents Are Dynamic Ones [Wisdom]

Ref. BA60-Y
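
A hybrid along these lines might look like the sketch below: two steps pre-committed as a fixed sequence (crystallized promises) and one stand-in for a runtime decider (a dynamic promise). All names are illustrative:

```python
# Two crystallized promises: deterministic, cheap, safe to pre-commit
# at design time.
def fetch(doc: dict) -> dict:
    return {**doc, "text": " Raw Input "}

def normalize(doc: dict) -> dict:
    return {**doc, "text": doc["text"].strip().lower()}

# One dynamic promise: a stand-in for an LLM deciding at runtime how
# to keep the promise "doc will carry a summary".
def agentic_summarize(doc: dict) -> dict:
    return {**doc, "summary": doc["text"][:20]}

PIPELINE = [fetch, normalize, agentic_summarize]  # where autonomy lives

doc = {}
for step in PIPELINE:
    doc = step(doc)
```

The design question Burgess poses is visible in `PIPELINE` itself: each slot is a decision about whether that promise should be made in advance or in the moment.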

Burgess’s insight that even military obedience is “a consensus to voluntarily follow orders” has a direct analog in agentic design: when an agent uses a tool, it is voluntarily invoking it. The system prompt and tool descriptions are not commands to the agent — they are advertisements of available promises. The tool says “I promise to return search results if you give me a query.” The agent decides whether to accept that promise. This framing reveals something important about tool design for agents: tools should be designed as promise-makers, not as command-receivers. A well-designed tool declares its capabilities and limitations (its promises); a poorly designed tool expects to be called in a specific way (it demands an imposition). The difference determines how gracefully the agent can reason about when and whether to use it. Tool descriptions are promise advertisements. System prompts are invitations, not mandates.

#26 Tools as Promise-Makers: Voluntary Cooperation in Agent Tool Design [Strategy]

Ref. 0837-Z
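
A tool advertised as a promise-maker rather than a command-receiver might be sketched like this; the schema fields are invented for illustration and do not follow any particular framework's API:

```python
# The tool advertises what it promises (and its limits); the agent
# voluntarily decides whether to accept that promise for its goal.
SEARCH_TOOL = {
    "name": "web_search",
    "promises": "returns up to 10 search results for a text query",
    "limitations": "results may be stale; cannot reach pages behind logins",
    "accepts": {"query": "str"},
}

def agent_should_use(tool: dict, need: str) -> bool:
    """The agent assesses the advertised promise against its need."""
    return need in tool["promises"]

use_it = agent_should_use(SEARCH_TOOL, "search results")
```

Declaring `limitations` alongside `promises` is the point: the agent can reason about when the tool's promise will not be kept, instead of discovering that by failure.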

Burgess’s conflict localization theorem — “an agent can only conflict with itself, hence all the information to resolve the conflicts is located in the same place” — becomes critical in multi-agent systems (CrewAI, AutoGen, LangGraph). In a workflow orchestrating multiple LLMs, conflicts between agents must be resolved by the orchestrator — a nonlocal mediator. If Agent A’s output contradicts Agent B’s expectations, someone external must arbitrate. In a promise-based multi-agent system, each agent makes promises about its own outputs and accepts or rejects others’ promises. Conflicts are localized: if an agent receives contradictory inputs, it resolves them locally by choosing which promises to honor, based on its own assessment. This maps to the real architectural challenge: do you use a centralized orchestrator (workflow pattern, nonlocal conflict resolution) or let agents negotiate directly (promise pattern, local conflict resolution)?

#27 The Conflict Localization Theorem for Multi-Agent Systems [Principle]

Ref. BAEB-A
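
Local conflict resolution can be sketched in a few lines: the receiving agent ranks contradictory promised inputs by its own trust assessment, with no external mediator. Weights and claims here are illustrative:

```python
# Contradictory promised inputs arrive at one agent; all the
# information needed to resolve the conflict is local to it.
incoming = [
    {"from": "agent_a", "claim": "deploy",   "trust": 0.9},
    {"from": "agent_b", "claim": "rollback", "trust": 0.6},
]

def resolve_locally(promises: list) -> str:
    """Honor the promise this agent assesses as most trustworthy."""
    return max(promises, key=lambda p: p["trust"])["claim"]

decision = resolve_locally(incoming)  # no orchestrator consulted
```

The orchestrator-based alternative would ship both claims to a third party that lacks the receiving agent's context; here the conflict never leaves the agent's own domain of control.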

The deepest lesson from Promise Theory for agentic design: the reason agents work better than workflows for open-ended tasks is not that they are smarter — it is that they co-locate intent, information, and control at the point of execution. For tasks where the execution environment is predictable and the steps are well-understood, pre-committing to a workflow (crystallized promises) is efficient. For tasks where the environment is uncertain, the information needed to succeed is only available at runtime, and failure modes are unpredictable, you need runtime autonomy (dynamic promises). This is not an AI insight. It is a physics insight about locality. Burgess figured it out from distributed computing in 2004, twenty years before the LLM agent debate. The question was never about intelligence — it was about where information lives relative to where decisions are made.

#28 The Physics of Agency: Co-Location of Intent, Information, and Control [Wisdom]

Ref. 9680-B