The Pentagon’s culture war tactic against Anthropic has backfired
A federal judge halted the government's attempt to label Anthropic a 'saboteur' over a contract dispute.
A federal judge has delivered a significant blow to the Pentagon's campaign against AI company Anthropic. Last Thursday, Judge Rita Lin issued a temporary restraining order blocking the Department of Defense from designating Anthropic a supply chain risk (a label that would effectively brand the company a 'saboteur') and from ordering government agencies to stop using its Claude AI models. The 43-page opinion describes a dispute that escalated from a contract disagreement into what the judge suggested was an attempt to wage a 'culture war,' fueled by inflammatory social media posts from officials including former President Trump and Defense Secretary Pete Hegseth.
The core of the judge's ruling centers on procedural failures and a lack of evidence. The government used Claude AI via contractor Palantir for much of 2025 without complaint, under usage policies that specifically prohibited mass surveillance and lethal autonomous warfare. Disagreements arose only during direct contract negotiations. The judge found that Secretary Hegseth had failed to complete the steps required for a supply chain risk designation, and government lawyers later admitted they had no evidence for claims that Anthropic could implement a 'kill switch' against the government. The officials' aggressive public statements led the judge to conclude that Anthropic had a strong case that its First Amendment rights had been violated, as the government appeared intent on punishing the company for its 'ideology' and 'rhetoric.'
The ruling is a major setback for the administration's approach. Dean Ball, a former Trump administration AI policy official who supported Anthropic in the case, called it 'a devastating ruling for the government.' While the Pentagon has seven days to appeal, Anthropic has a separate, similar case pending in Washington, D.C. The legal battle highlights the high-stakes tension between government contracting, national security rhetoric, and the operational reality of adopting powerful third-party AI systems.
- Judge Rita Lin issued a temporary restraining order against the Pentagon's 'supply chain risk' label for Anthropic, finding insufficient evidence.
- The government's case was undermined by officials' social media posts, with lawyers admitting claims of a potential 'kill switch' had no evidence.
- The ruling suggests the dispute escalated from a contract negotiation into an unconstitutional attempt to punish Anthropic for its 'ideology.'
Why It Matters
The ruling sets a critical precedent for how far the government can go in restricting AI companies, weighing national security claims against free speech and due process protections.