You can only build safe ASI if ASI is globally banned
A viral AI safety post argues that researching 'safe' superintelligence inevitably creates the dangerous kind first.
A provocative argument circulating in AI safety communities posits a stark prerequisite for developing safe artificial superintelligence (ASI): a global ban on ASI development must come first. The central thesis, often attributed to researcher and writer Quintin Pope, asserts that the technical path to creating a controllable, aligned ASI necessarily involves discovering how to build an unsafe, unaligned version. Because building an uncontrolled, powerful AI is considered vastly easier than solving the alignment problem, any research progress inherently creates dangerous knowledge and capabilities long before safety is assured.
This creates an insurmountable coordination problem. The author argues that no entity, be it a company, government, or research lab, can pursue a 'safe ASI' agenda without simultaneously enabling others to build unsafe ASI, whether through leaks, defectors, or simply the publication of foundational research that others can weaponize. The post dismisses proposed solutions, such as extreme institutional secrecy or a technically orthogonal research path, as practically impossible or insufficient.
The logical conclusion is that the only viable starting condition for safe ASI development is a successfully implemented and enforced global prohibition on ASI-capable research. Without this, any unilateral attempt to build superintelligent AI is framed as a 'pivotal act': an aggressive move that threatens global security by racing toward a capability that could be catastrophically misused. The argument shifts the debate from technical alignment puzzles to the immediate need for international governance and restraint, challenging the foundational assumptions of many existing AI safety research programs.
- The argument states that unsafe, unaligned ASI is technically easier to build than safe, controlled ASI, putting it earlier on any development path.
- It claims that researching safe ASI inevitably creates the knowledge and tools to build dangerous ASI, creating an unavoidable proliferation risk.
- The author concludes that a global ban with strong enforcement is the only viable prerequisite, making current unilateral safety research a potential security threat.
Why It Matters
This challenges the core premise of major AI labs' safety efforts and frames AI development as an urgent international governance crisis.