AI Safety

LASR Labs Summer 2026 applications are open!

London AI safety program boasts 90% alumni placement at OpenAI, UK AISI, and other top AI labs.

Deep Dive

LASR Labs has opened applications for its Summer 2026 AI safety research program, a 13-week intensive fellowship based in London. The program, run by the London Initiative for Safe AI (LISA), focuses on concrete threat models and action-relevant research to mitigate existential risks from advanced AI. Successful applicants receive an £11,000 stipend, workspace, and food/travel support while working full-time from July 20 to October 16. The initiative has a proven track record, with 90% of its alumni moving into technical AI safety roles at organizations including the UK AI Safety Institute (AISI), Apollo Research, and OpenAI's dangerous capabilities evaluations team.

The program begins with a 'Week 0' dedicated to research prioritization, where participants evaluate potential projects before being matched into teams. For the remaining 12 weeks, teams work with a dedicated supervisor to write and submit an academic-style paper, with past cohorts achieving a 50% acceptance rate at top conferences like NeurIPS. LASR provides comprehensive support including workshops on automated research workflows, talks from leading researchers, and career coaching. The program seeks applicants with strong technical ML engineering skills, research ability under uncertainty, and clear communication, aiming to bridge the gap between academic training and high-impact safety research careers.

Key Points
  • Offers an £11,000 stipend for a 13-week, full-time research program in London focused on mitigating AI existential risk.
  • Boasts a 90% alumni placement rate in AI safety roles at organizations such as OpenAI, UK AISI, and Apollo Research.
  • 50% of papers from the Spring 2025 cohort were accepted to NeurIPS, with one receiving an oral presentation.

Why It Matters

Directly trains the next generation of technical researchers to address critical AI safety challenges at frontier labs.