Research & Papers

Some Simple Economics of AGI

A 112-page paper argues that the binding constraint on AGI progress is human verification bandwidth, not intelligence.

Deep Dive

Economists Christian Catalini, Xiang Hui, and Jane Wu have released a foundational 112-page working paper titled 'Some Simple Economics of AGI' on arXiv. The paper presents a stark economic model for the transition to Artificial General Intelligence (AGI), arguing that the primary engine of progress is shifting. For millennia, that engine was human cognition; now, as AI decouples cognition from biology, the marginal cost of measurable execution falls toward zero. This absorbs any labor whose output can be captured by metrics, including creative and analytical work, and creates a new binding constraint: human verification bandwidth. The core challenge is no longer generating intelligence but validating, auditing, and underwriting responsibility for the outputs of abundant, cheap AI agents.

The authors model this as the collision of two racing cost curves: a rapidly decaying 'Cost to Automate' and a stagnant, biologically limited 'Cost to Verify.' This structural asymmetry creates a 'Measurability Gap' between what AI agents can execute and what humans can afford to verify. The paper predicts this will drive a shift from skill-biased to measurability-biased technical change, where economic value migrates to 'verification-grade' ground truth, cryptographic provenance, and liability underwriting. It identifies unstable dynamics like the 'Missing Junior Loop' (collapse of apprenticeship) and the 'Codifier's Curse' (experts codifying their own obsolescence). The paper concludes with a practical playbook, urging a race to scale verification infrastructure alongside AI capabilities to avoid a 'Hollow Economy' and instead achieve an 'Augmented Economy' of unbounded discovery.
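The two-curve collision can be sketched numerically. The parameter values below (initial automation cost, decay rate, flat verification cost) are illustrative assumptions for exposition, not figures from the paper; the point is only the structural asymmetry between an exponentially falling curve and a flat one.

```python
import math

def cost_to_automate(t, a0=100.0, decay=0.5):
    """Cost to automate a task, falling exponentially over time (illustrative)."""
    return a0 * math.exp(-decay * t)

def cost_to_verify(t, v=10.0):
    """Cost to verify a task's output: flat, bounded by human attention (illustrative)."""
    return v

def crossover_time(a0=100.0, decay=0.5, v=10.0):
    """Time t* at which automating becomes cheaper than verifying:
    a0 * exp(-decay * t*) = v  =>  t* = ln(a0 / v) / decay."""
    return math.log(a0 / v) / decay

def measurability_gap(t):
    """Gap between what is cheap to execute and what humans can afford to verify.
    Zero at the crossover, then widening as automation costs keep falling."""
    return cost_to_verify(t) - cost_to_automate(t)
```

With these toy parameters the crossover arrives at t* = ln(10)/0.5 ≈ 4.6, after which the gap only widens, which is the paper's structural argument for why verification, not generation, becomes the scarce factor.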

Key Points
  • Models AGI transition via two cost curves: exponentially falling Cost to Automate vs. flat Cost to Verify, creating a 'Measurability Gap'.
  • Predicts economic rents will migrate to verification systems like cryptographic provenance and liability underwriting, not just AI generation.
  • Warns of unstable dynamics like the 'Codifier's Curse' and a potential 'Hollow Economy' if verification doesn't scale with AI capabilities.

Why It Matters

Provides a crucial economic framework for investors and policymakers, shifting focus from raw AI capability to the essential systems of oversight and verification.