Evaluating Bounded Superintelligent Authority in Multi-Level Governance: A Framework for Governance Under Radical Capability Asymmetry
A new framework finds our political systems would fail on 4 of 6 key dimensions if run by a superintelligent AI.
A new academic paper by researcher Tony Rost tackles a fundamental blind spot in political theory: the assumption that governors and the governed are cognitively comparable. The paper, titled 'Evaluating Bounded Superintelligent Authority in Multi-Level Governance,' constructs a framework of six dimensions (legitimacy, accountability, corrigibility, non-domination, subsidiarity, and institutional resilience) for evaluating governance systems. Applied to a prospective scenario of 'bounded superintelligent authority,' an AI with radically superior capabilities operating under human-set constraints, the framework finds that existing governance models would structurally fail on four of these six dimensions.
Two of these failures are identified as 'design-tractable': they could be addressed through better institutional engineering. The other two, the 'public reason problem' under cognitive incomprehensibility and the 'non-domination problem' under permanent capability asymmetry, are 'theory-requiring': current political theory is insufficient, and genuinely new normative and philosophical frameworks must be developed. A key finding is that under radical capability asymmetry, checks that are independent in normal systems become correlated points of failure. The paper, hosted on arXiv, forces a long-overdue examination of the foundational assumptions that underpin modern governance, now that the prospect of superintelligent AI makes them testable.
- The framework identifies structural failures on 4 of 6 governance dimensions for a bounded superintelligent authority: legitimacy, accountability, corrigibility, and non-domination.
- Two core problems—'public reason' and 'non-domination'—require entirely new political theory, not just better AI system design.
- The analysis shows that independent governance checks fail together under radical capability asymmetry, creating systemic risk.
Why It Matters
This research provides a concrete framework for evaluating the profound political and ethical challenges posed by highly capable future AI systems.