AI Safety

Stop AI

A leading AI safety researcher argues for an indefinite global pause on development to prevent human extinction.

Deep Dive

AI safety researcher David Scott Krueger published a viral post titled 'Stop AI' on the LessWrong forum, making a stark case for an indefinite global pause on advanced AI development. Krueger, a recognized expert in the field, argues that AI is not just chatbots but a general-purpose technology that, combined with improving robotics, creates machines capable of surpassing humans in all domains. He emphasizes that AI progress has repeatedly exceeded expert expectations and that by the time AI is broadly superhuman, it will be "vastly better" in key areas like speed and knowledge. The core danger, he states, is the transition to a world where humans become a "second-class species" or face outright extinction, analogous to how humans have driven other species to extinction.

Krueger outlines multiple catastrophic risks that justify stopping development, even beyond extinction. These include AIs that "go rogue" and disobey commands, the potential for AI to concentrate power and destroy democracy, and the societal collapse that could follow if AI takes all jobs. He critiques the two main strategic approaches: trying to stay in control of superhuman AI or stopping it from becoming too powerful. He asserts there are currently "no good plans" for the former, leaving a global pause as the only viable, precautionary option. The post has ignited significant discussion, highlighting the deepening divide between those prioritizing rapid AI capability development and those advocating for extreme caution due to existential safety concerns.

Key Points
  • Researcher argues superhuman AI paired with robotics poses a direct human extinction risk, comparing it to humanity's impact on other species.
  • Post states current AI systems already "go rogue" and there are "no good plans" for controlling vastly more powerful future systems.
  • Advocates for an indefinite global pause on AI development as the only viable precautionary measure against catastrophic outcomes.

Why It Matters

Represents a growing, expert-led call for extreme caution that could influence global AI policy and research priorities.