Models & Releases

The New Yorker published a major investigation into Sam Altman and OpenAI today, based on never-before-disclosed internal memos and more than 100 interviews.

The report draws on 70+ pages of Ilya Sutskever's memos and 200+ pages of Dario Amodei's private notes.

Deep Dive

A major New Yorker investigation by Ronan Farrow, based on 18 months of reporting and over 100 interviews, reveals explosive internal allegations against Sam Altman and OpenAI. The report draws on previously undisclosed documents, including approximately 70 pages of memos compiled by co-founder and former chief scientist Ilya Sutskever and over 200 pages of private notes from former VP of research Dario Amodei. These documents formed the basis for the board's decision to fire Altman in November 2023: Sutskever's memos alleged a "consistent pattern" of behavior, listing "Lying" as the first item. The report states that during a tense call after his firing, when pressed to acknowledge this pattern, Altman responded, "I can't change my personality," which a board member interpreted as an admission that he would not stop.

The investigation also uncovers significant internal disputes over AI safety and ethics. The superalignment team, publicly promised 20% of the company's compute to research AI existential risks, allegedly received only 1-2% on the oldest hardware before being dissolved. In a telling exchange, when reporters asked to interview researchers working on "existential safety," a company representative replied, "What do you mean by 'existential safety'? That's not, like, a thing." The piece further reveals that in OpenAI's early years, executives discussed instigating a bidding war for its AI between world powers such as China and Russia, a plan dropped only after employees threatened to quit. It also alleges that while Altman publicly claimed solidarity with Anthropic for refusing Pentagon weapons work, OpenAI had already been negotiating with the Defense Department for days and later announced a major military integration deal.

Key Points
  • Internal memos from Ilya Sutskever allege a "consistent pattern" of deception by Sam Altman, listing "Lying" as the first item, which contributed to his 2023 firing.
  • The superalignment team, tasked with addressing AI existential risk, received only 1-2% of its promised compute, on old hardware, before being dissolved; a company representative questioned whether "existential safety" is even "a thing."
  • In its early years, OpenAI executives discussed selling its AI to global powers such as China and Russia, and while Altman publicly backed Anthropic's refusal of Pentagon weapons work, OpenAI was privately negotiating a major military deal with the Defense Department.

Why It Matters

This investigation raises critical questions about transparency, safety prioritization, and ethical governance at the world's leading AI company.