Pentagon’s ‘Attempt to Cripple’ Anthropic Is Troubling, Judge Says
A federal judge says the DoD appears to be illegally punishing Anthropic for questioning military AI use.
In a significant legal hearing, U.S. District Judge Rita Lin sharply criticized the Department of Defense's actions against AI safety company Anthropic. Judge Lin said the Pentagon's decision to designate Anthropic a 'supply-chain risk'—a powerful label usually applied to foreign adversaries or terrorists—'looks like an attempt to cripple Anthropic' and appears to be illegal retaliation. The dispute stems from Anthropic's efforts to limit how the military could use its Claude AI models, which the company alleges prompted the Trump administration to hit it with the damaging security designation. Anthropic has filed two federal lawsuits and is now seeking a temporary injunction to pause the label, hoping to reassure skittish customers.
During the hearing, Judge Lin questioned why the punitive designation was used instead of simply canceling contracts, noting that the measures 'don't seem to be tailored to stated national security concerns.' The DoD, which now calls itself the Department of War (DoW), argued it followed procedure and claimed Anthropic could manipulate its AI software. However, DoD attorney Eric Hamilton acknowledged that Defense Secretary Pete Hegseth's public post barring all contractors from any business with Anthropic exceeded legal authority. The Pentagon is actively working to replace Anthropic's technology with alternatives from Google, OpenAI, and xAI over the coming months. A ruling on the injunction is expected within days, with a separate appeals court decision also pending.
- Judge Rita Lin stated the Pentagon's 'supply-chain risk' designation of Anthropic appears to be illegal First Amendment retaliation for the company questioning military AI use.
- The designation, a powerful tool typically used against foreign adversaries, has rattled Anthropic's customers, prompting the company to seek an injunction pausing it.
- The Department of Defense is replacing Anthropic's AI tools with alternatives from Google, OpenAI, and xAI, escalating a major conflict between Silicon Valley and the government over AI ethics and deployment.
Why It Matters
This case sets a critical precedent for how AI companies can negotiate ethical boundaries with the government without facing punitive retaliation.