Anthropic won’t budge as Pentagon escalates AI dispute
The Defense Department threatens to invoke the Defense Production Act, escalating a clash over AI ethics and national security.
The Pentagon has given Anthropic a Friday deadline to provide the U.S. military with unrestricted access to its AI models, escalating a high-stakes dispute over AI ethics and national security. Defense Secretary Pete Hegseth told CEO Dario Amodei the government will either designate Anthropic a "supply chain risk"—a label typically reserved for foreign adversaries—or invoke the Defense Production Act (DPA) to compel the company to customize its models for military use. The move challenges Anthropic's long-standing refusal to allow its technology to be used for mass surveillance or fully autonomous weapons, setting up a direct clash between corporate usage policies and executive authority. The DPA, last invoked during COVID-19 to force production of ventilators and masks, would see a significant expansion in scope if deployed to override AI safety guardrails.
Anthropic, currently the only frontier AI lab with classified Department of Defense access, is refusing to compromise its ethical policies, creating a critical single point of failure for Pentagon AI capabilities. Despite a reported backup deal with xAI's Grok, experts note the DOD lacks immediate redundancy, violating a National Security Memorandum directive to avoid dependence on any one classified-ready AI system. The confrontation unfolds amid ideological friction, with administration figures such as AI czar David Sacks criticizing Anthropic's policies as "woke." Legal scholars warn that this aggressive posture undermines America's reputation as a stable business environment, signaling that political disagreement could trigger government action to "put you out of business." The outcome will define the limits of corporate autonomy in the age of sovereign AI.
- Pentagon threatens to invoke the Defense Production Act—a law last used for COVID-19 medical supplies—to force Anthropic to tailor AI models for military use, marking a dramatic expansion of its application.
- Anthropic is the DOD's sole provider of classified-ready frontier AI, creating a critical single-vendor dependency that violates a Biden-era National Security Memorandum and explains the Pentagon's aggressive stance.
- The core dispute centers on Anthropic's ethical guardrails prohibiting mass surveillance and autonomous weapons—restrictions Pentagon officials argue should be set by U.S. law, not by private company policy.
Why It Matters
This clash sets a precedent for government authority over private AI ethics, potentially destabilizing the U.S. as a predictable hub for tech investment and innovation.