Conflicting Rulings Leave Anthropic in ‘Supply-Chain Risk’ Limbo

Appeals court blocks removal of national security label, contradicting earlier ruling that found DoD acted in bad faith.

Deep Dive

Anthropic is embroiled in an unprecedented legal battle with the Pentagon over its designation as a 'supply-chain risk,' a label typically reserved for foreign national security threats. Conflicting court rulings have left the company in limbo: a San Francisco judge ordered the designation removed last week, finding the Department of Defense likely acted in bad faith in retaliation for Anthropic's publicly stated limits on military use of its AI and its criticism of the Pentagon. However, a Washington, DC appeals court just blocked that removal, stating that while Anthropic may suffer financial harm, it would not 'lightly override' military judgments on national security during an ongoing conflict.

The core dispute centers on Anthropic's insistence that its Claude AI lacks the accuracy for sensitive operations like autonomous drone strikes; the company argues the designation unlawfully punishes it for taking that stance. The designation bars the Pentagon and its contractors from using Claude in military projects. Acting Attorney General Todd Blanche called the DC ruling 'a resounding victory for military readiness,' asserting that operational control belongs to the Commander-in-Chief, not a tech company. Experts say Anthropic has a strong case, but note that courts are often reluctant to overrule the White House on national security. Final decisions in the two parallel lawsuits are months away, with the next oral arguments scheduled for May 19.

Key Points
  • A DC appeals court ruled Anthropic must keep its Pentagon 'supply-chain risk' designation, conflicting with a San Francisco judge's order to remove it last week.
  • The San Francisco judge found the DoD likely acted in bad faith, motivated by frustration over Anthropic's public limits on military AI use and criticism.
  • The designation bars military use of Claude AI, with Anthropic claiming financial losses and experts warning it 'chills professional debate' on AI system performance.

Why It Matters

This case sets a precedent for executive power over tech firms and could deter AI companies from publicly advocating for safety limits in government contracts.