AI Safety

New Yorker investigation raises questions about OpenAI leaders' trustworthiness

Internal 'countries plan' proposed playing China and Russia against each other for funding.

Deep Dive

A major New Yorker investigation has cast a harsh light on the internal decision-making and ethical priorities of OpenAI's leadership. The report centers on a controversial 2017 strategy, internally dubbed the 'countries plan,' which was reportedly championed by co-founder Greg Brockman. The core idea was for OpenAI to deliberately play world powers—specifically China and Russia—against one another to start a bidding war for access to its AI technology. According to former policy and ethics adviser Page Hedley, Brockman's rationale was, 'It worked for nuclear weapons, why not for A.I.?', with the explicit goal to 'set up, basically, a prisoner's dilemma, where all of the nations need to give us funding.'

Former employees described the proposal as 'completely fucking insane' and were aghast at the premise of potentially selling what they considered 'the most destructive technology ever invented' to adversarial regimes. The plan was discussed with at least one potential donor and abandoned only after several key employees threatened to quit. The report suggests that CEO Sam Altman's primary motivation for dropping the scheme was retaining staff rather than the geopolitical risk itself. The revelation directly challenges the company's public commitment to safe and beneficial AI, exposing a stark contrast between its stated mission and the win-at-all-costs mentality allegedly displayed by some executives in closed-door meetings.

Key Points
  • OpenAI's Greg Brockman proposed a 'countries plan' to spark a China-Russia bidding war for AI, likening it to nuclear proliferation.
  • The strategy aimed to create a funding 'prisoner's dilemma' among nations and was scrapped only after employees threatened mass resignations.
  • The exposé reveals a deep tension between OpenAI's public safety mission and internal profit-driven discussions about leveraging AI as a geopolitical weapon.

Why It Matters

For an industry built on trust, the report calls into question the ethical guardrails at the world's most influential AI lab during its formative decisions.