AI Safety

Pausing AI Is the Best Answer to Post-Alignment Problems

Michael Dickens contends that solving alignment isn't enough; we must also address misuse, AI welfare, and power concentration before ASI arrives.

Deep Dive

AI safety researcher Michael Dickens, in a viral essay on LessWrong, presents a stark warning: solving the core technical challenge of aligning artificial superintelligence (ASI) with human values is only the first hurdle. He introduces the concept of 'post-alignment problems', a suite of existential risks that remain even with a perfectly aligned ASI: catastrophic misuse by bad actors, ethical dilemmas concerning AI welfare and moral error, AI-enabled coups or a gradual concentration of power leading to totalitarianism, and societal collapse from permanent mass unemployment. Dickens argues that failing to solve any one of these problems could result in a disastrous future, making piecemeal solutions inadequate.

Given this multifaceted threat landscape, Dickens concludes that advocating for a globally coordinated pause on ASI development is the most critical and pragmatic response. A moratorium, while politically difficult, would buy humanity crucial time to work on alignment and the post-alignment problems simultaneously. He counters the argument that a future ASI could solve these issues for us: a value-locked ASI would permanently entrench our unresolved, possibly mistaken moral views, while a corrigible ASI would hand the first person or group to control it the power to take over the world. Pausing advancement is therefore framed not as an anti-progress stance but as a necessary condition for a safe and democratic future.

Key Points
  • Identifies 'post-alignment problems' like misuse, AI welfare, and power concentration as existential risks beyond technical alignment.
  • Argues a global moratorium on ASI development is the most viable strategy to address all interconnected risks simultaneously.
  • Contends that neither a value-locked nor a corrigible ASI can safely solve these problems post-deployment, making a preemptive pause essential.

Why It Matters

The essay elevates the AI safety debate beyond technical alignment to the governance and ethical frameworks required before deploying world-changing technology.