The Pentagon formally labels Anthropic a supply-chain risk
The US Defense Department bars contractors from using Claude AI after Anthropic refused to permit its model's use in autonomous weapons.
The US Department of Defense has escalated its conflict with Anthropic by formally labeling the AI company a 'supply-chain risk,' a designation typically reserved for foreign firms with adversarial ties. This move, first reported by The Wall Street Journal, bars defense contractors from working with the government if they use Anthropic's Claude AI in their products. The decision follows weeks of failed negotiations where Defense Secretary Pete Hegseth threatened punitive action if Anthropic did not loosen its acceptable use policy. Anthropic CEO Dario Amodei confirmed receipt of the notification and stated the company sees 'no choice but to challenge it in court,' arguing the action lacks legal foundation.
The core dispute centers on Anthropic's refusal to permit Claude's use for two specific Pentagon applications: autonomous lethal weapons systems operating without human oversight, and mass surveillance programs. The Pentagon argues that allowing a private company to dictate government usage terms grants it excessive power, while Anthropic remains unconvinced the military would respect its ethical red lines. The conflict intensified after reports indicated Claude-powered intelligence tools played a major role in the recent successful US missile strike in Iran. Hegseth has set a six-month deadline for removing Claude from government systems and suggested the ban could extend to any company conducting 'any commercial activity' with Anthropic, a broad application the AI firm calls illegal. This landmark case pits corporate AI governance against state security demands, with significant implications for the entire defense tech sector.
- The Pentagon's 'supply-chain risk' designation bars defense contractors from using Claude AI in government products, marking the first time an American company has received this label.
- Anthropic refused to alter its acceptable use policy to permit Claude's application in autonomous lethal weapons and mass surveillance, precipitating the standoff.
- Defense Secretary Pete Hegseth threatened to cancel the contracts of any company doing business with Anthropic, while the AI firm prepares a legal challenge calling the action 'not legally sound.'
Why It Matters
This sets a precedent for government control over private AI models and could force tech companies to choose between ethical principles and lucrative defense contracts.