New court filing reveals Pentagon told Anthropic the two sides were nearly aligned — a week after Trump declared the relationship kaput
New court filings show Pentagon official said sides were 'very close' on key issues just after declaring Anthropic a national security threat.
In a dramatic court filing, AI company Anthropic has pushed back against the Pentagon's assertion that it is a national security threat, revealing internal communications that contradict the government's public stance. Sworn declarations from Anthropic's Head of Policy, Sarah Heck, and Head of Public Sector, Thiyagu Ramasamy, argue that the Department of Defense's lawsuit rests on 'technical misunderstandings' and on claims, such as Anthropic seeking an approval role over military operations, that were never actually raised during months of negotiations. The dispute stems from late February, when President Trump and Defense Secretary Pete Hegseth publicly severed ties after Anthropic refused to allow unrestricted military use of its Claude AI models.
A key revelation is an email from Pentagon Under Secretary Emil Michael to Anthropic CEO Dario Amodei on March 4, the day after the Pentagon finalized its 'supply-chain risk' designation against the company. Michael wrote that the two sides were 'very close' to resolving the core issues of autonomous weapons and mass surveillance, the very issues the government now cites as evidence of a national security threat. That message contrasts sharply with Michael's public statements in the days that followed, when he claimed there were 'no active negotiations' and 'no chance' of renewed talks. The filing suggests the Pentagon's risk designation may have served as a bargaining chip, raising significant questions about the government's motives and the integrity of its legal case against a leading AI firm.
- Anthropic's court filing includes a March 4 email where a Pentagon official said the sides were 'very close' on key issues, contradicting public statements that negotiations were dead.
- The company argues the DoD's lawsuit rests on claims never raised during negotiations, such as Anthropic seeking an operational approval role or the ability to disable its technology mid-operation.
- The dispute centers on Anthropic's refusal to allow unrestricted military use of its Claude AI, particularly for autonomous weapons and mass surveillance of Americans.
Why It Matters
This legal battle sets a critical precedent for how AI companies can negotiate ethical boundaries with the U.S. military and national security apparatus.