Startups & Funding

The Pentagon is developing alternatives to Anthropic, report says

The DoD is developing in-house LLMs after rejecting an Anthropic contract clause that barred using its AI for mass surveillance and autonomous weapons.

Deep Dive

The U.S. Department of Defense is engineering its own suite of large language models (LLMs) for operational use, following the collapse of its $200 million contract with AI safety startup Anthropic. According to Bloomberg, the partnership broke down over a fundamental ethical disagreement: Anthropic sought contractual guarantees that its AI would not be used for mass surveillance of Americans or to power autonomous weapons systems, restrictions the Pentagon refused to accept.

In response, the DoD has moved swiftly to secure alternative AI capabilities, signing new agreements with OpenAI and, notably, with Elon Musk's xAI to integrate the Grok model into classified systems. The Pentagon's chief digital and AI officer, Cameron Stanley, confirmed that engineering work on government-owned LLMs has begun and that the models will be available 'very soon.' Underscoring the pivot, the Pentagon has formally designated Anthropic a 'supply-chain risk,' a label typically applied to foreign adversaries that legally bars defense contractors from working with the company. Anthropic is now challenging the designation in court.

Key Points
  • The Pentagon's $200M contract with Anthropic failed due to a clause banning use for mass surveillance and autonomous weapons.
  • The DoD is now building its own LLMs and has new deals with OpenAI and xAI's Grok for classified work.
  • The Pentagon has labeled Anthropic a 'supply-chain risk,' blocking defense contractors from using its AI, a move Anthropic is contesting in court.

Why It Matters

This clash sets a major precedent for AI ethics in defense, forcing a choice between corporate safety principles and military operational demands.