AI Safety

Bridging the Gap on AI Safety Policy

Forecast-led policy advice aims to cut through government red tape.

Deep Dive

In February, the Swift Centre for Applied Forecasting launched a competition to bridge the gap between abstract AI safety research and government decision-making. The initiative provided forecasts across five AI scenarios, spanning agentic capabilities, workforce impact, and autonomous weapons, along with a four-page policy template. Participants submitted 29 policy entries, which were judged by UK and US experts in energy, national security, military affairs, and AI policy. Winning submissions, including entries on managing India's semiconductor market exposure and on UK defence doctrine for autonomous weapons, will be forwarded to relevant decision-makers.

The project addresses a systemic gap in AI safety policy: most work in the field amounts to literature reviews of technical risks and fails to influence policymakers. By targeting the highest-value stage of the decision-making chain, the point where officials draft options and recommendations, the competition aims to bypass the miscommunication and delays of earlier bureaucratic stages. Delivering tangible, finished products in the form government officials actually need makes real-world impact more likely.

Key Points
  • Competition paired five Swift Centre AI forecasts with a policy template, producing 29 submissions
  • Judging panel included UK and US experts in energy, national security, military affairs, and AI policy
  • Goal: bypass bureaucratic stages by delivering decision-ready policy advice directly to leaders

Why It Matters

Moves AI safety from abstract research toward actionable policy that can directly inform government decisions.