Exclusive: Hegseth gives Anthropic until Friday to back down on AI safeguards
Pentagon demands unfettered access to Claude AI by Friday or will invoke Defense Production Act.
In a dramatic escalation of tensions over AI ethics, Defense Secretary Pete Hegseth has issued an ultimatum to Anthropic CEO Dario Amodei, demanding the company remove safeguards from its Claude AI model for military use by Friday evening. According to an Axios exclusive, Hegseth threatened to either sever Pentagon contracts and declare Anthropic a 'supply chain risk' or invoke the Defense Production Act—a Cold War-era law rarely used in such adversarial ways—to compel the company to tailor Claude to the military's operational needs without restrictions. The confrontation stems from Anthropic's refusal to allow its industry-leading model to be used for mass surveillance of Americans or development of fully autonomous weapons, despite Claude being the only AI currently integrated into the Pentagon's most sensitive classified systems.
The Pentagon finds itself in a bind: officials acknowledge they 'need them now' because Claude's capabilities are unmatched, yet Hegseth insists no company can dictate operational terms. If ties are cut, the military would need an immediate replacement for Claude in classified workflows, and a 'supply chain risk' designation would force other defense contractors to certify they are not using the model. Anthropic maintains a conciliatory public stance, emphasizing 'good-faith conversations' about supporting national security within responsible boundaries, while privately denying Pentagon claims that it raised concerns about Claude's use during specific operations such as the Maduro raid. The standoff is the most significant public clash to date between AI safety principles and government demands, setting a precedent for how foundation model companies navigate defense contracts while upholding their ethical frameworks.
- Defense Secretary Hegseth set a Friday deadline for Anthropic to remove Claude AI safeguards or face either the loss of Pentagon contracts or compelled compliance under the Defense Production Act.
- The Pentagon's threat includes cutting all contracts and declaring Anthropic a 'supply chain risk,' which would cascade to other defense contractors.
- Anthropic refuses to allow Claude to be used for mass surveillance of Americans or for fully autonomous weapons, creating an impasse despite the model's critical role in classified systems.
Why It Matters
This clash sets a precedent for whether AI companies can enforce ethical guardrails when their models become essential to national security infrastructure.