AI Safety

What is the Iliad Intensive?

A new in-person bootcamp focuses on foundational AI safety research and offers participants a $5,000 stipend.

Deep Dive

The AI alignment organization Iliad has launched the Iliad Intensive, a rigorous four-week educational program designed to train researchers in the mathematical foundations of AI safety. Running five days a week in London, the program offers a deep dive into 20 modules across five core clusters: Alignment, Learning, Interpretability, Agency, and Safety Guarantees. Unlike comparable bootcamps that emphasize engineering, the Intensive prioritizes theoretical understanding, with content covering topics such as Singular Learning Theory, Mechanistic Interpretability, and Agent Foundations. Participants are selected for mathematical expertise, typically with backgrounds in math, physics, or theoretical computer science, and receive a $5,000 stipend to cover travel and living expenses, along with office space and meals.

The program structure is highly immersive, featuring about 6.5 hours of daily learning through internal lectures, expert guest talks, paper reading sessions, and collaborative math and coding exercises. Iliad, which also runs a conference series and incubates research projects, has assembled a team of around 15 domain experts to create the curriculum. With plans for a possible expansion to the Bay Area, the Intensive represents a significant investment in field-building for AI alignment, aiming to equip researchers with the tools to tackle long-term safety challenges. The next cohort runs from June 6 to July 3, and the program underscores the growing institutional effort to cultivate specialized talent in this critical field.

Key Points
  • 4-week, in-person program in London with a $5,000 stipend for travel and living expenses
  • Curriculum of 20 modules across 5 clusters, focusing on math-heavy AI alignment theory over coding
  • Selects for participants with strong mathematical backgrounds (e.g., degrees in math, physics, or theoretical CS)

Why It Matters

It systematically builds specialized research talent for AI safety, addressing a critical bottleneck in managing advanced AI risks.