AI Safety

On the political feasibility of stopping AI

A new analysis argues society may favor a complete halt over regulation.

Deep Dive

David Scott Krueger, writing on LessWrong, challenges the common assumption that regulating AI is more politically feasible than stopping it entirely. He argues that once society fully grasps the existential risks—like a 10% chance of human extinction within a decade—policies such as banning advanced AI chips will seem moderate rather than extreme. Krueger emphasizes that people will naturally gravitate toward simple, intuitive solutions over complex regulatory frameworks that require expert knowledge.

Krueger identifies three key reasons for this shift: widespread concern about AI's broader harms (job loss, mass surveillance, power concentration), the appeal of the "Keep It Simple, Stupid" principle (a clear halt is easier to trust and verify than intricate rules), and a deep-seated preference for humans remaining relevant. He predicts only a narrow window between public apathy and overwhelming demand for drastic action, suggesting that once alarm spreads, stopping AI could become the default political response.

Key Points
  • Krueger argues banning advanced AI chips may seem moderate once society recognizes a 10% extinction risk within 10 years.
  • He cites three drivers: concern about job loss, mass surveillance, and power concentration; a preference for simple solutions; and a desire for continued human relevance.
  • He predicts a narrow window between societal indifference and demand for a complete halt, bypassing complex regulation.

Why It Matters

This analysis reframes AI governance debates, suggesting a complete halt could be more politically viable than regulation.