The Cartesian Cut in Agentic AI
New research argues today's AI agents use a brittle 'Cartesian' architecture that externalizes control.
Researchers Tim Sainburg and Caleb Weinreb have published a conceptual paper, 'The Cartesian Cut in Agentic AI,' that diagnoses a fundamental architectural pattern in today's AI agents. They argue that systems built by coupling a large language model (LLM) like GPT-4 or Claude to an external runtime (e.g., for tool use or planning) create a 'Cartesian' split. This design externalizes control state and policies into the engineered system, turning the LLM's predictive capability into goal-oriented behavior. While this split enables bootstrapping, modularity, and easier governance, it introduces significant downsides: the interface becomes a bottleneck and the system grows sensitive to failures at this symbolic boundary.
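The pattern the authors describe — an external runtime that owns the goal, the loop, and tool dispatch, while the LLM only predicts the next step across a symbolic interface — can be sketched roughly as below. This is an illustrative toy, not the paper's code; the names (`llm_predict`, the JSON action format, the tool registry) are assumptions for the sake of the example:

```python
import json

# Illustrative stand-in for an LLM call. In a real system this would be an
# API request; here it just returns a canned symbolic action so the loop runs.
def llm_predict(prompt: str) -> str:
    if "42" in prompt:
        return json.dumps({"action": "finish", "result": "42"})
    return json.dumps({"action": "calculator", "input": "6 * 7"})

# The engineered runtime owns the tools, the control state, and the policy.
TOOLS = {"calculator": lambda expr: str(eval(expr))}

def cartesian_agent(goal: str, max_steps: int = 5) -> str:
    history = [f"Goal: {goal}"]           # control state lives out here,
    for _ in range(max_steps):            # not inside the model
        reply = llm_predict("\n".join(history))
        step = json.loads(reply)          # the brittle symbolic boundary:
                                          # one malformed reply breaks the loop
        if step["action"] == "finish":
            return step["result"]
        observation = TOOLS[step["action"]](step["input"])
        history.append(f"Observation: {observation}")
    raise RuntimeError("step budget exhausted")
```

Note where the fragility the authors flag shows up: `json.loads` and the tool lookup are exactly the symbolic interface at which a single malformed prediction derails the whole system.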
The paper contrasts this dominant 'Cartesian agent' pattern with two other paradigms: 'bounded services' (where an LLM performs a single, well-defined task) and the aspirational 'integrated agents.' The latter would more closely mimic biological systems, where prediction is embedded within layered feedback controllers calibrated by the consequences of action—a more robust but less modular approach. The authors outline how these three approaches trade off autonomy, robustness, and oversight. This framework provides a crucial vocabulary for engineers and researchers to discuss the core design choices that will define the next generation of agentic systems, moving beyond simply scaling model size.
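The biological contrast can be made concrete with a toy feedback loop — prediction sits inside the controller and is continuously recalibrated by the measured consequence of each action. This is an analogy of the "integrated" idea under simplifying assumptions, not anything proposed in the paper:

```python
# Toy integrated control loop: a controller whose internal model of the
# world is corrected by the observed effect of every action it takes.
def integrated_controller(target, env_step, steps=50):
    model_gain = 1.0    # internal prediction: effect per unit of action
    state = 0.0
    for _ in range(steps):
        error = target - state
        action = error / model_gain           # act using the current model
        new_state = env_step(state, action)   # consequence in the world
        observed = (new_state - state) / action if action else model_gain
        model_gain += 0.5 * (observed - model_gain)  # calibrate by outcome
        state = new_state
    return state, model_gain

# A world whose true gain (2.0) differs from the controller's initial model:
reached, learned_gain = integrated_controller(
    target=20.0, env_step=lambda s, a: s + 2.0 * a
)
```

Here there is no symbolic boundary to break: a wrong internal model degrades performance gracefully and is corrected by action outcomes, which is the robustness-for-modularity trade the article describes.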
- Defines 'Cartesian Agency' as the dominant but brittle pattern where an LLM's control is externalized to a runtime via a symbolic interface.
- Contrasts this with biological feedback systems, where prediction and control are integrated and calibrated by action.
- Proposes a framework of three agent types—Bounded Services, Cartesian Agents, Integrated Agents—trading off modularity, robustness, and autonomy.
Why It Matters
Provides a critical framework for designing more robust and capable AI agents, moving beyond simple LLM chaining.