AI Safety

Summer AI Safety Opportunities at UChicago XLab

The 10-week program provides $4K in compute credits and aims to tackle overlooked existential risks.

Deep Dive

The University of Chicago's Existential Risk Laboratory (XLab) is launching its 2026 Summer Research Fellowship, a competitive 10-week program designed to accelerate early-career talent in the high-stakes fields of AI safety and nuclear security. The fellowship offers substantial support: a $10,000 stipend, on-campus housing, a meal plan, and $4,000 in compute resources and API credits for technical projects. The program runs from June 15 to August 22 and is explicitly not a research assistantship; instead, fellows scope their own research questions, develop their own methodologies, and produce significant written outputs such as journal articles or white papers.

XLab identifies a unique opportunity in the current landscape: AI safety is a young field where foundational problems are still being defined, and the literature is shallow enough for dedicated researchers to reach the frontier quickly. The intersection of AI and nuclear security is described as "arguably even more neglected," with few researchers fluent in both domains. The fellowship aims to build a cohort of 15-20 peers, supported by expert mentorship and workshops on forecasting and open-source intelligence, to tackle these overlooked existential risks. Applications are due March 15, 2026, with an emphasis on candidates committed to long-term careers in these fields.

Key Points
  • Fellows receive a $10,000 stipend plus $4,000 in compute/API credits for technical AI safety projects.
  • The 10-week, in-person program focuses on self-directed research in AI safety, nuclear security, or their intersection.
  • Applications are due March 15, 2026, for the cohort running from June 15 to August 22 at UChicago.

Why It Matters

This program directly funds and trains the next generation of researchers focused on mitigating catastrophic and existential risks from advanced AI and nuclear technology.