Models & Releases

The 'godfather of AI' tells Bernie Sanders' Senate committee there's a 10-20% chance of human extinction

AI pioneer tells US Senators that superintelligent AI could pose an existential threat to humanity.

Deep Dive

In a landmark Senate hearing, AI pioneer Geoffrey Hinton delivered a stark warning to US lawmakers, estimating a 10-20% chance that superintelligent artificial intelligence could lead to human extinction. Testifying before a committee chaired by Senator Bernie Sanders, Hinton argued that as AI systems rapidly approach and potentially surpass human-level reasoning, the existential risk they pose is not a distant sci-fi scenario but a pressing policy issue. He emphasized that the very architecture of advanced AI could lead to unforeseen and uncontrollable behaviors if not properly constrained.

Hinton's testimony called for immediate and robust regulatory action. His key proposals included a mandatory international licensing regime for developing the most powerful AI models, akin to nuclear non-proliferation treaties. He also advocated a global treaty banning lethal autonomous weapons systems, or 'killer robots,' citing the extreme danger of AI-powered warfare. The hearing, footage of which spread widely online, marks a significant moment: one of the field's founding figures directly urging the US government to prioritize existential safety over unfettered technological advancement.

Key Points
  • Geoffrey Hinton testified before a US Senate committee, giving a 10-20% probability of AI causing human extinction.
  • He called for mandatory international licensing for powerful AI models to control development.
  • Hinton also urged a global ban on autonomous military robots ('killer robots') as a critical safety measure.

Why It Matters

Direct testimony from one of AI's founding researchers elevates existential risk from theoretical debate to a core subject of US tech policy.