Media & Culture

New Yorker: OpenAI execs once discussed selling AI to Russia/China, rep says “existential safety” isn’t “a thing”

An 18-month probe exposes internal memos where execs asked 'what if we sold it to Putin?'

Deep Dive

A major 18-month investigation by The New Yorker's Ronan Farrow and Andrew Marantz, based on never-before-disclosed internal memos, over 200 pages of a co-founder's private notes, and interviews with more than 100 people, reveals that in OpenAI's early years, executives discussed leveraging world powers against each other in a bidding war for its AI technology. A company policy adviser explicitly asked, "what if we sold it to Putin?" This strategic consideration of selling to geopolitical rivals like Russia and China starkly contrasts with the organization's founding mission of ensuring artificial general intelligence (AGI) benefits all of humanity.

Following Sam Altman's controversial reinstatement as CEO in late 2023, the board hired the law firm WilmerHale, known for investigating Enron and WorldCom, to review the allegations against him. According to people involved, however, no written report was ever produced; the findings were delivered only in oral briefings to two new board members, both of whom had been selected after close consultation with Altman himself. Separately, when reporters sought to interview researchers working on existential AI risks, a company representative responded dismissively: "What do you mean by 'existential safety'? That's not, like, a thing." The comment points to a possible internal cultural shift away from the organization's original safety-focused ethos.

Key Points
  • Executives discussed a bidding war strategy involving Russia and China, with a policy adviser asking "what if we sold it to Putin?"
  • The post-Altman-reinstatement review by WilmerHale produced no written report, with findings shared only orally with two new board members.
  • A company representative dismissed the concept of 'existential safety' to reporters, stating "That's not, like, a thing."

Why It Matters

The investigation reveals a profound disconnect between OpenAI's public safety commitments and its private strategic discussions, raising serious questions about the company's governance and trustworthiness.