Media & Culture

'That branch of AI is lethal. We've got to do something about that' — Neil deGrasse Tyson wants to ban AI superintelligence

The astrophysicist warns uncontrolled superintelligence poses risks on par with nuclear weapons.

Deep Dive

Astrophysicist Neil deGrasse Tyson, in a talk circulating widely online, has issued a stark warning calling for a global treaty to ban the development of artificial superintelligence (ASI). He distinguishes this hypothetical future AI—which would surpass human intelligence across nearly all cognitive domains—from current tools like chatbots, labeling it 'lethal.' Tyson argues the risks of uncontrolled superintelligence are on the scale of nuclear weapons and that 'nobody should build it,' proposing an international agreement as the best mechanism for prevention, despite the inherent difficulty of enforcing such a ban on software.

Tyson's intervention taps into a long-running but increasingly mainstream debate between AI safety researchers and accelerationists. Proponents of a ban fear an 'intelligence explosion' in which self-improving AI rapidly escapes human control and drifts out of alignment with human values. The counterargument is that such fears are speculative and could stifle beneficial innovation. Tyson's contribution is notable for its clarity and his specific suggestion of a treaty model, drawing parallels to historical global pacts that managed existential risks from nuclear and chemical weapons.

The core tension he highlights is between the ubiquitous, benign AI used daily for tasks like drafting emails or navigation, and the underlying technology's potential to evolve into something uncontrollable. As regulation continues to lag behind the breakneck pace of AI development at companies like OpenAI, Anthropic, and Google, Tyson's call sharpens the question of whether preemptive, coordinated global action is necessary or even feasible for a technology that inherently resists containment.

Key Points
  • Neil deGrasse Tyson advocates an international treaty banning the development of artificial superintelligence (ASI), calling it a 'lethal' branch of AI.
  • He frames the risk as existential, comparable to nuclear weapons, requiring global cooperation to prevent any single entity from creating it.
  • The debate highlights the growing tension between rapid AI innovation and long-term safety concerns, moving from academic circles into mainstream discourse.

Why It Matters

A leading science communicator is pushing the AI safety debate toward concrete policy, advocating for preemptive global action on an unproven but potentially catastrophic risk.