AI Safety

A Tale of Three Contracts

Secretary Pete Hegseth's 'corporate murder' threat exposes a classified AI contract battle with Anthropic.

Deep Dive

A brewing crisis between AI company Anthropic and the U.S. Department of War (DoW) threatens to derail the only LLM successfully deployed on classified government networks. Secretary of War Pete Hegseth's attempt to label Anthropic a 'supply chain risk', a move described as 'corporate murder', stems from failed negotiations over a new contract that would have favored the DoW; OpenAI, meanwhile, negotiated and signed a separate, more favorable agreement. The conflict centers on three contracts: Anthropic's original $200M deal from 2025, the failed renegotiation, and OpenAI's newly signed agreement.

Anthropic's Claude Gov model features a customized safety stack of model refusals, external monitoring classifiers, and forward-deployed engineers to ensure compliance with explicit contract red lines prohibiting domestic mass surveillance and autonomous weapons operating without human oversight. Despite providing 'enhanced national security' and being the only LLM deployed on classified networks, Anthropic now faces an existential threat from government actors potentially working against presidential wishes. The situation exposes fundamental tensions between AI safety principles and military applications, with implications for how advanced AI systems will be governed in national security contexts.

Key Points
  • Anthropic's original $200M DoW contract included explicit red lines against domestic surveillance and autonomous weapons
  • Claude Gov is the only LLM successfully deployed on classified networks with customized safety monitoring
  • OpenAI signed a separate DoW contract while Anthropic negotiations failed, creating competitive pressure

Why It Matters

Sets a precedent for how AI companies balance safety principles with military contracts, and may determine which AI models governments deploy.