AI Safety

Are We Doomed to an AI Race? Why Self-Interest Could Drive Countries Towards a Moratorium on Superintelligence

New research flips the AI race narrative on its head.

Deep Dive

The authors use game theory to argue that a moratorium on Artificial Superintelligence (ASI) can align with national self-interest. By modeling the trade-off between the benefits of technological supremacy and the costs of catastrophic loss of control, they show that once the perceived cost of losing control grows sufficiently high relative to the model's other parameters, imposing a moratorium becomes each state's self-interested choice. As global perception of ASI risk rises, such a coordinated halt becomes increasingly plausible.
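The threshold logic can be sketched in a few lines. This is an illustrative expected-value toy, not the authors' actual model: it assumes a state compares the expected benefit of racing (probability of winning times the payoff of supremacy) against the expected cost of catastrophe (probability of losing control times that cost), and prefers a moratorium when the latter dominates.

```python
# Illustrative sketch only -- the parameter names and the simple
# expected-value comparison are assumptions, not the paper's model.

def prefers_moratorium(p_win: float, benefit: float,
                       p_loss_of_control: float, cost: float) -> bool:
    """True when the expected catastrophic cost outweighs the
    expected supremacy benefit, making a halt self-interested."""
    return p_loss_of_control * cost > p_win * benefit

def cost_threshold(p_win: float, benefit: float,
                   p_loss_of_control: float) -> float:
    """Perceived loss-of-control cost above which a moratorium
    becomes the rational choice (the 'calculable threshold')."""
    return p_win * benefit / p_loss_of_control

# Example: 50% chance of winning a benefit of 100, 20% chance of
# losing control. The threshold cost is 0.5 * 100 / 0.2 = 250.
print(cost_threshold(0.5, 100.0, 0.2))            # 250.0
print(prefers_moratorium(0.5, 100.0, 0.2, 300.0))  # True: cost above threshold
print(prefers_moratorium(0.5, 100.0, 0.2, 200.0))  # False: cost below threshold
```

As the paragraph above notes, rising risk perception corresponds to a higher perceived `cost` (or `p_loss_of_control`), pushing states across this threshold.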

Key Points
  • Game theory model formalizes trade-off between ASI supremacy benefits and catastrophic loss-of-control costs.
  • Stable moratorium becomes rational when perceived risk of losing control exceeds a calculable threshold.
  • Empirical evidence shows global perception of ASI risk is rising, making cooperative pause increasingly plausible.

Why It Matters

Challenges the assumption that AI races are inevitable; offers a rational path to global cooperation on ASI safety.