Media & Culture

Anthropic Hits Back After US Military Labels It a 'Supply Chain Risk'

Defense Secretary's order could bar military contractors from using Claude AI, sparking legal threats.

Deep Dive

The U.S. Department of Defense has designated AI company Anthropic a "supply chain risk," a move that could prohibit military contractors and partners from using its Claude AI models. The designation followed weeks of tense negotiations in which Anthropic refused to allow its technology to be used for "all lawful uses" without specific prohibitions on mass domestic surveillance and fully autonomous weapons. Defense Secretary Pete Hegseth announced the immediate restriction via social media, stating that no entity doing business with the military may conduct commercial activity with Anthropic. The company responded swiftly, vowing to challenge the designation in court and calling it a "dangerous precedent" for any American company negotiating with the government.

Anthropic argues that the designation, issued under 10 U.S.C. 3252, applies only to direct DoD suppliers, not to contractors using Claude for other customers, and claims Hegseth lacks the statutory authority for a blanket ban. Legal experts say the announcement's full implications are unclear, leaving major tech partners such as Amazon, Microsoft, Google, and Nvidia in limbo. Meanwhile, OpenAI CEO Sam Altman announced a separate agreement with the DoD to deploy OpenAI's models in classified environments, one that explicitly incorporates prohibitions on mass surveillance and autonomous weapons. The dramatic split highlights a growing rift between AI developers and the government over ethical guardrails for military applications, with industry leaders warning the move could damage U.S. competitiveness.

Key Points
  • The Pentagon designation stems from Anthropic's refusal to allow unrestricted "all lawful" military use of Claude AI, specifically opposing mass surveillance and autonomous weapons.
  • Anthropic is preparing a legal challenge, arguing the order exceeds statutory authority and that the statute applies only to direct DoD contracts, not to contractors' use of Claude for other customers.
  • OpenAI secured a contrasting DoD agreement with explicit safety carveouts, highlighting divergent approaches to government partnerships in the AI industry.

Why It Matters

This precedent could force AI companies to choose between ethical principles and government contracts, reshaping the defense-tech landscape.