OpenAI and Google employees rush to Anthropic’s defense in DOD lawsuit

AI giants’ staff defend Anthropic after the Pentagon labels it a ‘supply-chain risk’ for refusing surveillance uses of its models.

Deep Dive

In a significant show of industry solidarity, more than 30 employees of leading AI labs OpenAI and Google DeepMind have filed a legal brief supporting Anthropic's lawsuit against the U.S. Department of Defense. The dispute centers on the Pentagon's decision last week to designate Anthropic a "supply-chain risk," a label typically reserved for foreign adversaries. The designation came after Anthropic refused to let the DOD use its Claude AI models for mass surveillance of Americans or to autonomously fire weapons. The DOD argued it should be able to use AI for any "lawful" purpose, free of contractor-imposed constraints.

The employee brief, signed by prominent figures such as Google DeepMind chief scientist Jeff Dean, contends the government's move was an "improper and arbitrary use of power." It warns that punishing a leading U.S. AI company for upholding ethical guardrails will damage U.S. industrial competitiveness and stifle open discussion of AI risks. The filing notes that the DOD could simply have canceled its contract with Anthropic and sought another provider, which it promptly did by signing a deal with OpenAI, a move many of OpenAI's own staff protested. In the absence of comprehensive public law, the employees argue, the contractual and technical restrictions AI developers impose are a critical safeguard against catastrophic misuse, making this a pivotal case for the future of responsible AI governance.

Key Points
  • 30+ OpenAI & Google DeepMind employees, including Jeff Dean, filed a legal brief supporting Anthropic's lawsuit.
  • The Pentagon labeled Anthropic a 'supply-chain risk' after the company refused to allow its AI to be used for mass surveillance or autonomous weapons.
  • The DOD signed a new deal with OpenAI immediately after the designation, sparking internal protest.

Why It Matters

The case will set a precedent for whether private AI companies can legally enforce ethical-use restrictions on government clients.