You can only build safe ASI if ASI is globally banned
AI safety researcher argues that any path to safe ASI inevitably yields the knowledge needed to build the unsafe version first.
In a provocative essay published on the AI Alignment Forum, Connor Leahy, a leading voice in AI safety, makes a stark argument against the feasibility of developing safe Artificial Superintelligence (ASI) under current conditions. Leahy contends that any technical research agenda aimed at creating a 'controlled' or 'aligned' ASI would, by necessity, uncover the foundational knowledge required to build an 'unsafe' ASI first. He asserts that unsafe ASI is 'vastly easier to build than controlled ASI' and lies on the same technological path, creating an unavoidable security dilemma.
Leahy systematically dismisses common proposals like building purely 'tool AI' or 'non-agentic' systems as insufficient safeguards. He argues that the fundamental bottleneck isn't just the technical challenge of alignment, but the impossibility of executing such research without the knowledge leaking or being misused to create the very existential threat one seeks to avoid. The essay concludes that the only viable precondition for even attempting a safe ASI project is a 'global ASI ban and competent enforcement' already being in place, framing unilateral development as a 'pivotal action' tantamount to threatening the world.
The post has sparked significant debate within the AI safety community, highlighted by a top comment from user 'mishka' challenging Leahy's core assumption. The commenter argues that equating 'friendly ASI' with 'ASI one can control' is a contentious logical jump, and that a control-centric agenda might itself be a path to ruin. This exchange underscores the deep philosophical and strategic divides within the field regarding how to approach the superintelligence problem.
- Leahy's core argument: The research path to 'controlled' ASI inevitably reveals how to build 'unsafe' ASI, which is easier to create.
- Proposed prerequisite: A 'global ASI ban and competent enforcement' must already be in place before any safe ASI project can be considered.
- Community pushback: A highlighted comment challenges the essay's foundational assumption that 'friendly ASI' must equal 'controllable' ASI.
Why It Matters
This debate frames the strategic dilemma facing AI labs and governments: can safe superintelligence be built at all without first preventing anyone from building the unsafe version?