Models & Releases

Neil deGrasse Tyson calls for an international treaty to ban superintelligence: "That branch of AI is lethal. We've got to do something about that. Nobody should build it. And everyone needs to agree to that by treaty. Treaties are not perfect, but they are the best we have as humans."

The astrophysicist warns that superintelligent AI is 'lethal' and calls for a global ban enforced by treaty.

Deep Dive

Astrophysicist and science communicator Neil deGrasse Tyson has entered the AI safety debate with a stark warning, calling for an international treaty to ban the development of superintelligent AI. In a recent statement, Tyson declared, "That branch of AI is lethal. We've got to do something about that. Nobody should build it. And everyone needs to agree to that by treaty." His position frames superintelligence not as a technological milestone but as an existential threat requiring preemptive, coordinated global action. This aligns him with other prominent figures in the 'AI pause' or safety-first camp, though his solution is explicitly legal and diplomatic.

Tyson acknowledges the limitations of his proposed solution, stating, "Treaties are not perfect, but they are the best we have as humans." This reflects a pragmatic view of international governance, recognizing that a ban would be difficult to enforce but arguing it's the most viable mechanism for global coordination on an issue that transcends national borders. His call adds significant mainstream scientific credibility to concerns often voiced by AI researchers and tech executives. The push for a treaty contrasts with current industry-led voluntary safety commitments and national regulatory approaches, proposing a more binding and universal framework.

The challenge lies in defining 'superintelligence' precisely enough for treaty language and in achieving consensus among nations and corporations racing for AI supremacy. Critics of such bans argue they could stifle beneficial AI research and would be unenforceable in practice. However, Tyson's intervention elevates the discussion beyond technical circles, framing AI governance as a critical issue for all of humanity, akin to nuclear non-proliferation. His stance will likely fuel ongoing debate over whether advanced AI development should be governed by market forces, national regulation, or international law.

Key Points
  • Neil deGrasse Tyson explicitly labels superintelligent AI as 'lethal' and calls for a complete development ban.
  • He proposes an international treaty as the enforcement mechanism, citing it as humanity's best tool for global coordination.
  • This adds a major public science figure's voice to AI safety debates, pushing for binding legal frameworks over voluntary guidelines.

Why It Matters

It pressures policymakers to consider binding international law, not just voluntary codes, for governing existential AI risks.