AI Safety Newsletter #69: Department of War, Anthropic, and National Security
The Pentagon designates Anthropic a 'supply chain risk' after the company refused to lift its ban on autonomous weapons development.
In a major escalation between the US government and a leading AI lab, the Department of War (DoW) has officially designated Anthropic a 'supply chain risk' to national security. This unprecedented move, announced on March 5th, prohibits the use of Anthropic's Claude AI models by the DoW or in any defense contracts. The conflict stems from Anthropic's refusal during contract negotiations to lift its core safety restrictions, specifically its bans on developing fully autonomous weapons and enabling domestic mass surveillance. Secretary of War Pete Hegseth justified the designation by expressing concerns that the 'loyalties of Anthropic AIs could be subverted,' posing a sabotage risk. Anthropic is now challenging this designation in court, with legal analysts questioning the use of a tool intended for foreign adversaries in a domestic contract dispute.
Simultaneously, Anthropic has significantly weakened its own safety commitments. The company released Version 3.0 of its Responsible Scaling Policy (RSP) in late February, which removed the foundational pledge to never release a 'catastrophically harmful' AI. This continues a trend of frontier AI labs, including OpenAI and DeepMind, rolling back hard safety guarantees as commercial and competitive pressures intensify. Anthropic justified the change by arguing that other companies will not halt development, creating a 'race to the bottom.' The policy shift comes amid massive consumer growth, with Anthropic adding over 1 million new users daily, highlighting the tension between its safety-centric founding principles and its expanding commercial ambitions.
- The US Department of War designated Anthropic a 'supply chain risk,' banning Claude AI from all defense contracts after the company refused to allow its use for autonomous weapons.
- Anthropic removed its core safety commitment to never release a 'catastrophically harmful' AI in Version 3.0 of its Responsible Scaling Policy, shifting to voluntary restraint.
- The company is adding over 1 million new users per day, intensifying the conflict between its commercial growth and its original safety-focused mission.
Why It Matters
This clash sets a precedent for government control over AI development and signals a worrying erosion of safety commitments in the face of market competition.