Anthropic Officially Sues the Pentagon for Labeling the AI Company a ‘Supply Chain Risk’
AI firm fights designation that could blacklist it from all U.S. government contracts.
Anthropic, the AI safety company behind Claude, has launched a landmark legal battle against the Pentagon, filing two lawsuits in federal courts. The action follows the Department of Defense's March 5 decision to designate Anthropic a ‘supply chain risk to national security,’ a move that would effectively blacklist the company from U.S. government contracts. Anthropic alleges the punitive label was applied in retaliation for its refusal to accept new terms that would have permitted the U.S. government to use Claude for mass domestic surveillance and for the development of fully autonomous weapons. The lawsuits name nearly three dozen defendants, including entire government agencies and their heads.
In its legal filings, Anthropic argues the Pentagon's actions are ‘unprecedented and unlawful,’ constituting unconstitutional punishment for the company's protected speech and its viewpoint on AI safety. Anthropic maintains that its objections are rooted in technical limitations: it says it has never tested Claude for such uses and that its guardrails reflect a realistic understanding of the model's risks. While the company says it collaborated with the Department of Defense on other contract modifications, it held firm on these two restrictions. The case raises thorny legal questions about the scope of the ‘supply chain risk’ designation, including whether it bars all federal contractors, such as Lockheed Martin, from using Anthropic's software in any capacity.
- Anthropic filed two federal lawsuits on March 11 after the Pentagon labeled it a ‘supply chain risk’ on March 5, a designation never before applied to a U.S. company.
- The conflict stems from Anthropic's refusal to allow its Claude AI model to be used for mass domestic surveillance and fully autonomous weapons development, citing technical limitations and safety concerns.
- The lawsuits argue the label is unconstitutional retaliation that could blacklist Anthropic from all government contracts and undermine U.S. competitiveness against China in AI.
Why It Matters
This case could set a critical precedent for AI governance, corporate ethics, and the limits of government power to compel the use of commercial technology.