More Is Different: Toward a Theory of Emergence in AI-Native Software Ecosystems
A new paper argues AI-native software needs ecosystem-level monitoring as primary governance.
Software engineering researcher Daniel Russo has published a provocative paper titled 'More Is Different: Toward a Theory of Emergence in AI-Native Software Ecosystems' that challenges fundamental assumptions about how we build and understand multi-agent AI systems. The paper identifies a critical gap: while individual AI agents may perform correctly in isolation, their interactions within an ecosystem can produce unpredictable failures that traditional software engineering theories cannot explain. Russo argues these systems must be studied as complex adaptive systems (CAS), in which properties like architectural entropy, cascade failures, and comprehension debt emerge from interactions rather than from any individual component.
The paper maps Holland's six CAS properties onto observable ecosystem dynamics and distinguishes AI-native systems from traditional architectures like microservices. To measure what Russo calls 'causal emergence,' the work defines micro-level state variables, coarse-graining functions, and a tractable measurement framework. Most significantly, it presents seven falsifiable propositions linking CAS theory to software evolution, challenging or extending Lehman's laws where agent-level assumptions fail.
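The paper's measurement framework is not reproduced here, but the general idea behind 'causal emergence' can be illustrated with a minimal sketch in the style of effective-information analysis: define micro-level state dynamics, apply a coarse-graining function that groups micro states into macro states, and check whether the macro description carries more causal information than the micro one. The transition matrix and partition below are hypothetical toy values, not taken from the paper.

```python
import numpy as np

def effective_information(T):
    """Effective information (bits) of a transition matrix,
    assuming a uniform intervention distribution over states."""
    n = T.shape[0]
    p_effect = T.mean(axis=0)  # effect distribution under uniform interventions
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(T > 0, T * np.log2(T / p_effect), 0.0)
    return terms.sum() / n

def coarse_grain(T, partition):
    """Macro transition matrix from a coarse-graining function:
    average micro rows within each macro state, sum columns per macro state."""
    m = len(partition)
    M = np.zeros((m, m))
    for a, group_a in enumerate(partition):
        row = T[group_a].mean(axis=0)
        for b, group_b in enumerate(partition):
            M[a, b] = row[group_b].sum()
    return M

# Toy micro dynamics: states 0-2 are interchangeable noise; state 3 is absorbing.
T_micro = np.array([
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
partition = [[0, 1, 2], [3]]  # coarse-graining: {0,1,2} -> A, {3} -> B
T_macro = coarse_grain(T_micro, partition)

ei_micro = effective_information(T_micro)
ei_macro = effective_information(T_macro)
# Causal emergence: the macro description is more causally informative
print(f"EI(micro) = {ei_micro:.3f}, EI(macro) = {ei_macro:.3f}")
```

In this toy case the coarse-grained system is deterministic, so its effective information exceeds the noisy micro level's, which is the signature of causal emergence that a measurement framework of this kind would look for.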
If validated by further research, these propositions would demand a radical shift in how we govern AI systems: moving from component-level testing to ecosystem-level monitoring as the primary governance mechanism. The paper forces the software engineering community to confront whether its core assumptions can survive the age of autonomous agents, potentially requiring entirely new theoretical foundations for building reliable AI-native software.
- Proposes treating AI-native ecosystems as complex adaptive systems (CAS) with emergent properties like architectural entropy
- Defines measurement framework for causal emergence with state variables and coarse-graining functions
- Presents seven falsifiable propositions challenging Lehman's laws and suggesting ecosystem-level monitoring as governance
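The paper does not define 'architectural entropy' in detail here, but one plausible operationalization, offered purely as an illustrative assumption, is the Shannon entropy of the distribution of agent-to-agent interactions: a system whose calls concentrate on a few stable pathways scores low, while one whose interactions spread unpredictably across many pathways scores high. The agent names and logs below are invented for the sketch.

```python
import math
from collections import Counter

def architectural_entropy(call_log):
    """Shannon entropy (bits) of the caller->callee edge distribution.
    Higher entropy means interactions are spread over more,
    less predictable pathways."""
    counts = Counter(call_log)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical interaction logs from a multi-agent ecosystem
ordered = [("planner", "coder")] * 9 + [("coder", "tester")]
diffuse = [("planner", "coder"), ("coder", "tester"), ("tester", "planner"),
           ("planner", "tester"), ("coder", "planner")]

print(architectural_entropy(ordered))  # low: one dominant pathway
print(architectural_entropy(diffuse))  # high: near-uniform interactions
```

Tracking a metric like this over time, rather than testing components one by one, is the kind of ecosystem-level monitoring signal the propositions point toward.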
Why It Matters
Could fundamentally change how we build and govern multi-agent AI systems, moving from component testing to ecosystem monitoring.