AI Safety

Prominent Scientists, Faith Leaders, Policymakers and Artists Call for a Prohibition on Superintelligence, as Poll Shows Americans Don’t Want It

A politically diverse coalition, including AI pioneers and celebrities, demands a prohibition on superintelligence development until it is proven safe.

Deep Dive

The Future of Life Institute (FLI) has mobilized a remarkably broad and politically diverse coalition to demand a halt to the development of superintelligence—AI that outperforms humans in most cognitive tasks. Signatories include AI pioneers Yoshua Bengio and Geoffrey Hinton, former national security officials like Susan Rice, business figures Steve Wozniak and Richard Branson, and cultural icons from Stephen Fry to Prince Harry. The group's core argument is that the race toward superintelligence, which experts believe could arrive within a decade, is proceeding without proven methods for control or alignment, posing significant risks. They call for a prohibition until the technology is scientifically determined to be safe and has genuine public support.

This call to action is backed by a new national U.S. poll commissioned by FLI, which reveals a stark disconnect between corporate AI development and public sentiment. The data shows only 5% of Americans support the current status quo of unregulated AI advancement, while 73% want robust regulation. Crucially, 64% believe superintelligence should not be developed until there is scientific consensus on its safety and controllability. FLI President Max Tegmark summarized the findings, stating, '95% of Americans don’t want a race to superintelligence.' The initiative advocates for a pivot toward 'secure innovation' using controllable, narrow AI tools to solve problems in health, energy, and education, rather than pursuing the high-risk goal of artificial general intelligence.

Key Points
  • Coalition includes AI pioneers Yoshua Bengio & Geoffrey Hinton, national security figures, and celebrities like Stephen Fry, representing unprecedented political diversity.
  • A new U.S. poll finds only 5% of Americans support the current unregulated AI race, while 64% back a ban on superintelligence until its safety is scientifically established.
  • The group defines superintelligence as AI outperforming humans in most cognitive tasks, which experts warn could arrive within 10 years without reliable control methods.

Why It Matters

This marks a major, organized push to shift AI development from a corporate race toward a publicly accountable, safety-first framework.