Media & Culture

Anthropic makes last-ditch effort to salvage deal with Pentagon after blowup

CEO Dario Amodei returns to talks after public feud; memo blames lack of Trump donations for collapse.

Deep Dive

Anthropic is in emergency talks with the Pentagon to salvage its defense contract after a bitter public feud that saw the Department of Defense threaten to label the AI startup a national security 'supply chain risk.' The collapse centers on Anthropic's refusal to grant the military unrestricted access to its Claude models for any 'lawful use,' with CEO Dario Amodei holding firm on two red lines: no mass surveillance of Americans and no lethal autonomous weapons. A newly leaked memo from Amodei to staff, reported by The Information and the Financial Times, adds a political dimension, suggesting the relationship soured because Anthropic "haven't donated to Trump" or "given dictator-style praise to Trump," unlike rival OpenAI, whose executives are Trump mega-donors.

The technical and contractual impasse has major consequences. Until last week, Claude was the only AI system cleared to handle classified information and was reportedly used in operations such as the US raid on Venezuela. The 'supply chain risk' designation would force other US tech firms to sever ties with Anthropic to keep their own defense contracts, potentially crippling the company. While Under-Secretary Emil Michael has publicly attacked Amodei, calling him a 'liar,' negotiations continue on a new contract that would permit military use of Claude under stricter terms. The outcome will set a critical precedent for how AI companies engage with the US government on ethically fraught military applications.

Key Points
  • Pentagon threatened to designate Anthropic a 'supply chain risk' after it refused terms for unrestricted AI use, a category that could freeze it out of the defense ecosystem.
  • CEO Dario Amodei's leaked memo blames the collapse on political factors, stating "we haven't donated to Trump," unlike OpenAI executives who are Trump mega-donors.
  • The core dispute is over two ethical red lines: Anthropic will not allow its Claude AI to be used for mass surveillance of Americans or for lethal autonomous weapons systems.

Why It Matters

The outcome sets a precedent for AI ethics in military contracts and will determine whether firms can refuse specific uses of their models without being blacklisted from the defense ecosystem.