OpenAI Safety Fellowship Applications Close Today as Program Seeks AI Safety Researchers
Applications close May 3 for a 6-month program on AI alignment.
Applications for the OpenAI Safety Fellowship close today, May 3, 2026. This new program invites external researchers, engineers, and practitioners to conduct rigorous research on the safety and alignment of advanced AI systems. The fellowship runs from September 2026 to February 2027 and focuses on three core areas: safety evaluation, ethics, and scalable mitigations. Participants will develop methods to evaluate and mitigate risks from future powerful models, addressing gaps in current alignment techniques.
This initiative aims to broaden the pool of researchers actively contributing to AI safety, a field struggling to keep pace with rapid advances in model capabilities. By providing structured access to cutting-edge research and OpenAI's expertise, the fellowship intends to produce actionable findings that can inform both internal safety practices and the broader AI community. Today's deadline underscores the urgency of building a robust safety research ecosystem before more advanced systems emerge.
- Applications close today, May 3, 2026.
- Fellowship runs from September 2026 to February 2027.
- Focuses on safety evaluation, ethics, and scalable mitigations.
Why It Matters
Expands AI safety research capacity ahead of more powerful systems, addressing a critical talent gap.