AI Safety

ControlAI 2025 Impact Report

The non-profit's direct lobbying secured recognition from 110+ UK lawmakers that AI poses a national security threat.

Deep Dive

ControlAI, a non-profit focused on mitigating the existential risks of superintelligence, has released its 2025 Impact Report detailing a year of aggressive policy engagement. The organization, which operates on a model of direct, uncompromising briefings to lawmakers, has scaled rapidly: starting with cold outreach, it has now briefed 279 legislators and over 90 US congressional offices. Its most significant success has come in the UK, where its advocacy built a coalition of 110+ lawmakers who now formally recognize superintelligence as a national security threat.

This foundational awareness work has already translated into concrete political action. ControlAI's efforts led to two dedicated debates on superintelligence and extinction risk in the UK House of Lords. The organization also catalyzed a series of hearings in the Canadian Parliament, featuring testimony from AI safety experts such as Connor Leahy and Max Tegmark, as well as the CEOs of major AI companies. The report, covering progress from December 2024 to January 2026, shows recent acceleration, with briefings in Canada and Germany scaling from ~50 to 100+ lawmakers in just two months, despite minimal local staff.

Moving forward, ControlAI plans to expand its direct lobbying model from raising awareness to driving specific policy actions. Its stated goal is to establish a presence in all G7 countries and significantly accelerate work in the United States, systematically pushing for national and international measures to prevent the development of superintelligence, which it deems an existential threat.

Key Points
  • Briefed 279 lawmakers globally, including 90+ US congressional offices, in just over a year.
  • Built a coalition of 110+ UK lawmakers declaring superintelligence a national security threat, leading to two House of Lords debates.
  • Catalyzed Canadian parliamentary hearings with top AI experts and plans to expand the direct lobbying model to all G7 nations.

Why It Matters

Shows how focused advocacy is directly shaping high-level government discourse and policy on frontier AI risks.