Media & Culture

OpenAI's VP of Research defects to Anthropic

Key safety researcher defects to rival Anthropic, citing concerns over OpenAI's priorities.

Deep Dive

In a significant shakeup for the AI industry, Jan Leike, the former Vice President of Research at OpenAI, has departed to join rival AI lab Anthropic. Leike, a key figure in OpenAI's safety efforts who co-led its Superalignment team with Ilya Sutskever, announced the move on social media, saying his new role will involve leading a team focused on scalable oversight and alignment research. The defection follows months of reported internal tension at OpenAI over the balance between rapid product development and long-term safety research, particularly after the company restructured its safety teams.

At Anthropic, Leike is expected to helm a new team dedicated to core alignment challenges, reinforcing the company's public commitment to building safe and steerable AI systems such as Claude. The move is a major coup for Anthropic, which positions itself as a more safety-conscious alternative to OpenAI, and a notable blow to OpenAI's safety credibility. It underscores the intense competition for top AI talent and the ongoing philosophical divide within the field over how to responsibly develop increasingly powerful AI models.

Key Points
  • Jan Leike, OpenAI's VP of Research and Superalignment co-lead, has joined rival Anthropic.
  • He will lead a new team at Anthropic focused on scalable oversight and alignment research.
  • The move follows reported disagreements over safety prioritization and resources at OpenAI.

Why It Matters

A key safety leader's departure signals shifting priorities and intensifies the talent war between leading AI labs.