Anthropic Supply-Chain-Risk Designation Halted by Judge
Federal judge blocks DoD's labeling, potentially allowing federal customers to resume using Claude AI.
Anthropic has won a significant legal victory as federal district judge Rita Lin granted a preliminary injunction against the US Department of Defense. The ruling bars the Pentagon from labeling the AI company a 'supply-chain risk,' a designation that had effectively halted the use of Anthropic's Claude AI tools across federal agencies. Judge Lin found the designation "likely both contrary to law and arbitrary and capricious," noting that the government had provided no legitimate basis to infer a sabotage risk from Anthropic's usage restrictions.
The injunction 'restores the status quo' to February 27th, before the directives were issued, but it does not force the DoD to use Claude. It prevents the government from citing the supply-chain-risk designation as a basis for action, though agencies may still cancel deals through other lawful means. The immediate impact is unclear: the order won't take effect for a week, and a separate lawsuit in a DC appeals court remains pending. Still, the ruling gives Anthropic a symbolic boost to its reputation and business as it challenges sanctions it calls unconstitutional.
- Judge Rita Lin granted Anthropic a preliminary injunction, blocking the DoD's 'supply-chain risk' designation.
- The ruling restores the legal status quo to February 27th, potentially allowing federal customers to resume using Claude AI.
- The DoD had been using Claude to draft sensitive documents and analyze classified data before the recent sanctions.
Why It Matters
This legal win helps Anthropic protect its government contracts and reputation as it challenges what it calls regulatory overreach in AI procurement.