The Spectre Haunting the "AI Safety" Community
ControlAI founder Gabriel Alfour argues persuasion is easy: lawmakers just need to hear about AGI extinction risks.
In a viral LessWrong post, ControlAI founder Gabriel Alfour outlines the strategy behind his Direct Institutional Plan (DIP), designed to address extinction risks from artificial superintelligence (ASI). He criticizes mainstream AI policy organizations for overemphasizing persuasion and respectability politics, arguing that the real bottleneck is simply informing lawmakers about the risks of AGI (artificial general intelligence) and ASI. Alfour reports success with a focused pipeline: Attention (via ads and emails), Information (briefings), and Action. In just over a year, ControlAI has briefed 150+ UK lawmakers and secured support from 112 of them for binding regulation of superintelligence. He contends that once politicians hear the facts, citing evidence such as the Center for AI Safety's statement on extinction risk, concern and support for action follow naturally, bypassing lengthy persuasion campaigns.
- ControlAI's DIP has briefed 150+ UK lawmakers, with 112 supporting binding superintelligence regulation.
- Alfour argues the bottleneck is attention and information, not persuasion: most officials simply haven't heard of AGI/ASI extinction risks.
- He critiques AI policy orgs for prioritizing social respectability over directly informing policymakers with concrete evidence.
Why It Matters
Shifts AI safety advocacy from theoretical debate to direct political engagement, demonstrating that regulatory traction is achievable.