Operational Agency: A Permeable Legal Fiction for Tracing Culpability in AI Systems
New legal framework uses AI's goal-directedness and predictive processing as proxies for intent and foresight.
Researchers Anirban Mukherjee and Hannah Hanwen Chang have introduced a legal framework called 'Operational Agency' (OA) that aims to resolve the accountability paradox in AI systems. Published as preprint arXiv:2602.17932 and forthcoming in the SMU Science and Technology Law Review, the framework addresses how harms caused by highly autonomous AI systems, which lack legal personhood, can be traced to accountable humans under human-centric legal doctrines.
The core innovation is treating an AI system's 'Operational Agency' as a permeable legal fiction: an ex post evidentiary tool rather than a grant of personhood. The framework analyzes three observable characteristics: goal-directedness (a proxy for intent), predictive processing (a proxy for foresight), and safety architecture (a proxy for standard of care). These metrics are operationalized through an 'Operational Agency Graph' (OAG), a causal mapping tool that traces responsibility across the AI lifecycle, from developers and fine-tuners to deployers and end-users.
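To make the OAG concrete, here is a minimal Python sketch under loose assumptions. The paper specifies the OAG conceptually, not as code, so the class names (`OperationalAgencyGraph`, `ProxyScores`, `Contribution`), fields, and flat weighting scheme below are hypothetical illustrations of how the three proxies and the causal map might be represented, not the authors' implementation.

```python
from dataclasses import dataclass


@dataclass
class ProxyScores:
    """Hypothetical 0-1 scores for the three observable characteristics
    the framework reads as legal proxies."""
    goal_directedness: float      # proxy for intent
    predictive_processing: float  # proxy for foresight
    safety_architecture: float    # proxy for standard of care


@dataclass
class Contribution:
    """A directed edge in the graph: a human actor's hypothesized causal
    contribution to the harm event (weight is illustrative, 0-1)."""
    actor: str   # e.g. "developer", "fine-tuner", "deployer", "end-user"
    weight: float


class OperationalAgencyGraph:
    """Toy OAG: a harm node plus weighted edges from lifecycle actors.
    This models the paper's idea only schematically; the actual OAG is a
    richer causal map, not a flat weighting."""

    def __init__(self, system_scores: ProxyScores):
        self.system_scores = system_scores  # evidentiary proxies for the AI system
        self.edges: list[Contribution] = []

    def add_contribution(self, actor: str, weight: float) -> None:
        """Record a lifecycle actor's hypothesized causal contribution."""
        self.edges.append(Contribution(actor, weight))

    def apportion(self) -> dict[str, float]:
        """Normalize edge weights into responsibility shares that sum to 1,
        so blame always lands on human actors, never on the system itself."""
        total = sum(c.weight for c in self.edges)
        if total == 0:
            return {}
        shares: dict[str, float] = {}
        for c in self.edges:
            shares[c.actor] = shares.get(c.actor, 0.0) + c.weight / total
        return shares
```

The design choice worth noting is that the system node carries only evidentiary scores, never a responsibility share: consistent with the framework's refusal of AI personhood, all apportioned blame flows to human actors.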
The research draws on established legal doctrines, including corporate criminal liability, the innocent-agent doctrine, and vicarious liability, and demonstrates the framework's application across five diverse case studies spanning tort law (autonomous vehicle collisions), civil rights, constitutional law, and antitrust (algorithmic price-fixing). This gives courts a principled method for apportioning blame and offers legislatures and industry a conceptual foundation for regulation. The approach ensures that human accountability scales with technological autonomy without the legal complexities of AI personhood.
- Proposes 'Operational Agency' (OA) as a legal fiction to evaluate AI systems using goal-directedness, predictive processing, and safety architecture as proxies for intent, foresight, and care.
- Introduces 'Operational Agency Graph' (OAG) tool to map causal responsibility across developers, fine-tuners, deployers, and users in AI incidents.
- Tested across five real-world case studies, including autonomous vehicle collisions and algorithmic price-fixing, providing courts with an evidentiary method for complex AI liability cases (see the usage sketch after this list).
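As an illustration of the tort-law scenario, the following usage sketch applies the hypothetical `OperationalAgencyGraph` class above to an invented autonomous-vehicle collision. Every score, weight, and actor assignment here is made up for exposition and does not reflect the paper's actual case-study findings.

```python
# Hypothetical autonomous-vehicle collision: all scores and weights below
# are invented for illustration, not drawn from the paper's case studies.
oag = OperationalAgencyGraph(
    ProxyScores(
        goal_directedness=0.9,      # strongly goal-directed driving policy
        predictive_processing=0.8,  # forecasts pedestrian trajectories
        safety_architecture=0.4,    # weak fallback/override design
    )
)
oag.add_contribution("developer", 0.5)  # trained the perception stack
oag.add_contribution("deployer", 0.3)   # operated the fleet in this locale
oag.add_contribution("end-user", 0.2)   # ignored a takeover request

for actor, share in oag.apportion().items():
    print(f"{actor}: {share:.0%}")
# developer: 50%, deployer: 30%, end-user: 20%
```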
Why It Matters
Provides courts and regulators with a practical framework to assign legal responsibility for AI harms without granting AI systems personhood.