Neil deGrasse Tyson calls for an international treaty to ban superintelligence: "That branch of AI is lethal. We've got to do something about that. Nobody should build it. And everyone needs to agree to that by treaty. Treaties are not perfect, but they are the best we have as humans."
Astrophysicist warns superintelligent AI is 'lethal' and demands a global treaty to prevent its creation.
Astrophysicist and science communicator Neil deGrasse Tyson has entered the AI safety debate with a stark warning, calling for an international treaty to ban the development of superintelligent AI. In a recent statement, Tyson labeled this advanced branch of artificial intelligence as "lethal," arguing that its potential risks are too great to manage through national regulations or corporate ethics boards alone. He emphasized that "nobody should build it" and that a coordinated, global prohibition is necessary, acknowledging that while treaties are "not perfect," they represent "the best we have as humans" to prevent catastrophic outcomes.
Tyson's intervention adds significant mainstream scientific weight to a discussion often confined to AI researchers and tech ethicists. His call mirrors concerns from figures like Geoffrey Hinton and Yoshua Bengio about the existential risks posed by artificial general intelligence (AGI) or superintelligence—AI systems that could surpass human cognitive abilities. The proposal for a treaty suggests a move beyond voluntary safety pledges from companies like OpenAI, Anthropic, and Google DeepMind, advocating instead for a legally binding international framework akin to those governing nuclear non-proliferation or biological weapons.
The challenge lies in defining 'superintelligence' precisely enough for a treaty and in enforcing a global ban amid a competitive geopolitical and commercial landscape. Nations and corporations racing for AI supremacy may be reluctant to sign such an agreement. However, Tyson's public stance could galvanize broader political and public support for preemptive governance, shifting the conversation from how to build safe superintelligence to whether it should be built at all.
- Neil deGrasse Tyson publicly called for an international treaty to ban superintelligent AI development, labeling it a 'lethal' technology.
- He argues a global agreement, modeled on treaties for other existential risks, is humanity's best tool for prevention despite being imperfect.
- This adds a prominent mainstream science voice to AI safety debates, pushing for binding governance over voluntary corporate pledges.
Why It Matters
A leading scientist's treaty call elevates AI risk to a global policy issue, pressuring governments to consider preemptive bans.