Preventing extinction from ASI on a $50M yearly budget
A new non-profit claims a $50M annual budget could secure an international prohibition on ASI development.
ControlAI, a recently publicized non-profit organization, has announced its mission to avert the human extinction risks it attributes to Artificial Superintelligence (ASI). The group's primary strategy is not to build technical safeguards but to campaign for an international prohibition on the development of superintelligent AI systems. It argues that achieving this political outcome requires mobilizing a "sufficiently motivated, sufficiently powerful initial coalition of countries" to lead a global ban. ControlAI estimates that scaling its operations to a $50 million annual budget would give it a credible chance of securing this prohibition within the next few years, and that additional funding up to $500 million would continue to meaningfully improve its odds.
The organization's theory of change centers on influencing the executive branches of governments, which are responsible for international security negotiations. It plans to shape the "prevailing social currents" that guide policymakers through both informal channels, such as media narratives and conversations with advisors, and formal democratic channels, including legislative pressure and public opinion. Its workstreams aim to make an ASI ban a top national priority by building pervasive awareness and demand. This approach positions ControlAI as a distinctive actor in the AI safety landscape, focused squarely on macro-political strategy rather than technical AI alignment research.
- ControlAI's goal is an international prohibition on ASI development, estimating a $50M/year budget is needed for a serious chance of success.
- Their strategy focuses on influencing government executives by shaping public opinion, media cycles, and legislative pressure across multiple countries.
- The group argues that preventing ASI extinction risk is a political challenge requiring a coalition of nations, not just technical research.
Why It Matters
This marks a major shift in AI risk strategy, from technical alignment work to global political campaigning aimed at banning the technology preemptively.