Justice Department Says Anthropic Can’t Be Trusted With Warfighting Systems

The government argues Anthropic's 'corporate red lines' could sabotage warfighting systems during operations.

Deep Dive

The U.S. Department of Justice has filed a forceful legal response arguing that Anthropic cannot be trusted with access to Pentagon warfighting systems. In a filing for the Trump administration, DOJ attorneys stated that Defense Secretary Pete Hegseth 'reasonably' determined that 'Anthropic staff might sabotage, maliciously introduce unwanted function, or otherwise subvert' national security infrastructure. The core of the government's case is that Anthropic's self-imposed ethical 'red lines'—such as refusing to power autonomous weapons or broad surveillance—create an 'unacceptable risk' that the company could disable or alter its Claude AI models during active combat operations.

This legal battle stems from the Pentagon's decision to designate Anthropic as a supply-chain risk, a move that can block the company from defense contracts. Anthropic is suing, claiming the label violates its First Amendment rights and could cost it 'billions of dollars' in expected revenue. The Department of Defense is now actively working to replace Claude, which is integrated into tools such as Palantir's data analysis software, with AI models from competitors Google, OpenAI, and xAI. A hearing is scheduled for next Tuesday on Anthropic's request for a temporary reprieve from the sanctions while the litigation proceeds.

Key Points
  • The DOJ argues Anthropic's ethical guardrails pose a sabotage risk, stating the company could 'disable its technology' during warfighting operations if its 'red lines' are crossed.
  • The Pentagon's 'supply-chain risk' designation could block Anthropic from defense contracts, potentially costing the company billions in revenue this year.
  • The DoD is actively working to replace Claude AI in its systems with models from Google, OpenAI, and xAI, citing operational urgency.

Why It Matters

This case sets a major precedent for how government contracts will handle AI companies with strong ethical policies, impacting a multi-billion dollar market.