AI Safety

Anthropic vs. DoW Preliminary Injunction Ruling

Federal judge halts sweeping government ban on Anthropic, citing evidence of retaliation for AI safety stance.

Deep Dive

A federal judge has issued a preliminary injunction against the U.S. Department of War (DoW), blocking a series of extraordinary sanctions imposed on AI company Anthropic. The sanctions, triggered by Anthropic's public refusal to allow its Claude AI model to be used for autonomous lethal weapons or mass surveillance, included a government-wide contract ban, a mandate requiring defense contractors to cut all commercial ties with Anthropic, and the unprecedented designation of the company as a 'supply chain risk'—a label typically reserved for foreign adversaries.

In a strongly worded order, the court found that the government's actions appeared designed to punish Anthropic for its public criticism rather than to address legitimate national security concerns. The judge noted that the DoW could simply have stopped using Claude, but instead enacted measures an amicus brief described as 'attempted corporate murder.' The ruling states that the evidence supports an inference of retaliation, that the sanctions would 'cripple' Anthropic, and that an injunction is warranted to prevent irreparable harm while the case proceeds.

Key Points
  • Judge halts government-wide contract ban on Anthropic, preventing 'corporate murder' of the AI firm.
  • Ruling blocks DoW's order forcing defense contractors to sever all commercial ties with Anthropic.
  • Court finds 'supply chain risk' designation—first for a domestic company—was likely retaliatory, not security-based.

Why It Matters

Sets a critical precedent for AI companies' rights to enforce ethical use policies against government clients.