Models & Releases

Exclusive: Hegseth gives Anthropic until Friday to back down on AI safeguards

Defense Secretary Hegseth threatens to invoke Defense Production Act if Anthropic refuses to comply.

Deep Dive

An exclusive report from Axios reveals a dramatic standoff between the Pentagon and AI safety leader Anthropic. Defense Secretary Pete Hegseth has issued a Friday deadline for the company to strip the safety guardrails from its Claude AI model, currently the only AI used in highly classified U.S. military systems. The Pentagon is demanding the change to enable domestic surveillance programs and the development of autonomous weapons systems, uses that directly violate Anthropic's terms of service, which are rooted in the company's constitutional AI principles. The ultimatum forces CEO Dario Amodei to choose between his company's founding safety commitments and its primary government customer.

The Pentagon's threat is severe: if Anthropic refuses, the Department of Defense will invoke the Defense Production Act to force compliance, or will officially designate the company a supply chain risk. The latter would effectively blacklist Anthropic from all future government contracts, a potentially crippling blow. The confrontation highlights escalating tension between the U.S. military's demand for unrestricted, powerful AI tools and the AI industry's growing movement toward ethical safeguards. The outcome will set a critical precedent for whether commercial AI safety policies can withstand government pressure for military and surveillance applications.

Key Points
  • Defense Secretary Hegseth gave Anthropic a Friday deadline to remove Claude's safety guardrails.
  • The Pentagon wants access for domestic surveillance and autonomous weapons, violating Anthropic's terms.
  • Non-compliance risks the Defense Production Act or being blacklisted as a supply chain risk.

Why It Matters

The standoff forces a landmark choice between commercial AI ethics and national security demands, and its resolution will shape how far government pressure can override industry safety policies.