
The Perception Gap

Daniel Jones

11 highlights
agentic-philosophy-traces 2026-roadmap-reflection review bigideas-concepts agentic-product-philosophy agentic-coding agentic-design-patterns agentic-concepts

Highlights & Annotations

This is the central insight of Daniel Jones’s practitioner account of rolling out agentic coding across large enterprises. Jones runs Reync, a consultancy in Northern Europe that helps organizations with what he calls “AI native transformation”—a deliberately broad mandate covering agentic coding for developers, AI-powered products, and agent-driven workflows for non-technical functions. His perspective is shaped not by building tools but by deploying them into the messy reality of organizations with thousands of developers, entrenched workflows, and real feature pressure.

Ref. 75D8-A

The DORA finding makes perfect sense, Jones argues, when you look at it through the lens of the theory of constraints. If you take one part of a system—the code-writing part—and speed it up by an order of magnitude, you don’t get an order of magnitude improvement in throughput. You get a bottleneck somewhere else. The bottleneck might be in your testing infrastructure, your deployment pipeline, your product specification process, or your ability to review and validate AI-generated output. If those upstream and downstream systems aren’t mature enough to handle the increased flow, the result isn’t faster delivery. It’s a pile-up.

Ref. 84E6-B
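The constraint arithmetic in this highlight can be sketched with hypothetical numbers (the stage names and rates below are invented for illustration, not taken from Jones): a pipeline's throughput is the minimum of its stage rates, so a 10x speedup at one stage leaves delivery unchanged if that stage was not the constraint.

```python
# A minimal theory-of-constraints sketch with hypothetical stage rates.
# Throughput of a serial pipeline is set by its slowest stage.

def pipeline_throughput(stage_rates):
    """Features per day the whole pipeline can sustain: the minimum stage rate."""
    return min(stage_rates.values())

stages = {
    "specify": 6.0,          # features/day (invented numbers)
    "write_code": 4.0,
    "review": 5.0,
    "test_and_deploy": 3.0,  # the actual constraint
}

before = pipeline_throughput(stages)
stages["write_code"] *= 10   # agentic coding: 10x the code-writing stage
after = pipeline_throughput(stages)

print(before, after)  # throughput is unchanged: testing was the bottleneck all along
```

The order-of-magnitude speedup at `write_code` buys nothing; worse, work now piles up in front of review and testing, which is the "pile-up" the highlight describes.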

What makes Jones’s account valuable is not that he’s discovered something theoretically novel. The theory of constraints has been understood for decades. What he offers is the practitioner’s map of where exactly the constraints appear, why organizations don’t see them coming, and what the preconditions are for agentic coding to actually deliver on its promise. The answer, it turns out, has less to do with which tools you choose and more to do with whether your agents can perceive the consequences of their actions.

Ref. 4D2A-C

Jones’s training approach makes this concrete. After exercises with coding agents, he draws up the component architecture on a whiteboard: where the tool definitions live, how they get to the model, what the agent loop actually does. This cements the understanding that when something goes wrong, the diagnosis depends on knowing which layer failed. A hallucination is a model problem. A failure to use the right tool is potentially an agent configuration problem. A failure to detect broken code is a perception problem—and that one is on you.

Ref. 047A-D
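The whiteboard decomposition in this highlight can be sketched as a toy agent loop (every name and stub below is invented for illustration): tool definitions are one layer, the model's tool choice another, and the loop that feeds results back a third. The `run_tests` tool is the perception layer; remove it and the agent literally cannot see broken code.

```python
# Hypothetical sketch of the whiteboard layers: tool definitions, the model
# call, and the agent loop that feeds results back. All names are invented.

state = {"fixed": False}  # stands in for the repository's actual state

def run_tests():
    """Perception layer: without this tool, broken code is invisible to the agent."""
    return {"passed": state["fixed"],
            "failures": [] if state["fixed"] else ["test_login"]}

def edit_file():
    """Action layer (stubbed): pretend the edit repairs the failure."""
    state["fixed"] = True
    return {"ok": True}

TOOLS = {"run_tests": run_tests, "edit_file": edit_file}  # agent-configuration layer

def call_model(observation):
    """Model layer (stubbed): choose the next tool from what it can perceive.
    A wrong choice here would be a model problem, not a loop problem."""
    return "edit_file" if observation.get("failures") else "run_tests"

def agent_loop(steps=3):
    """Agent-loop layer: each tool result becomes the next observation."""
    observation = {}
    for _ in range(steps):
        observation = TOOLS[call_model(observation)]()
    return observation

print(agent_loop())  # tests fail -> edit -> tests pass, because failures are perceivable
```

Diagnosing a failure means asking which layer broke: a bad tool choice with good observations points at the model; a missing tool points at configuration; an empty `TOOLS["run_tests"]` equivalent is the perception problem the highlight says is on you.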

Perception as the Organizing Principle

Ref. 3996-E

The Agent-Model Distinction

Ref. 6F2B-G

The Amplification Trap

Ref. 1825-H

Tests as Agent Perception

Ref. C22C-J

PATTERN: THE TEST OWNERSHIP SPLIT

Ref. AF07-K