Pete Hegseth tells Anthropic to fall in line with DoD desires, or else

Defense Secretary Hegseth gives Anthropic until Friday to agree to unfettered military use or face supply chain ban.

Deep Dive

US Defense Secretary Pete Hegseth has issued a stark ultimatum to AI company Anthropic: agree by Friday to allow its Claude models to be used for "all lawful military applications," or the Pentagon will invoke the Defense Production Act and cut the $380 billion startup from its supply chain. This dramatic escalation stems from Anthropic's refusal to grant unfettered access for classified missions, including potential domestic surveillance and operations without direct human control. The threat highlights a deepening rift between the White House's push for aggressive military AI adoption and Anthropic's cautious, ethics-driven approach, which Trump's AI tsar David Sacks has derided as "woke."

Anthropic's Claude has been a key tool for classified work through its partnership with Palantir, making a ban an extreme step typically reserved for foreign adversaries. Hegseth is already negotiating with rivals like Google, OpenAI, and Elon Musk's xAI—whose Grok model is reportedly "on board"—to replace Anthropic. Invoking the Defense Production Act, last used for pandemic supplies and critical minerals, would legally compel Anthropic's cooperation, framing its technology as critical to national defense. The standoff forces a legal and ethical reckoning for the AI industry, testing whether companies can maintain usage policies that conflict with Pentagon objectives in an accelerating global AI arms race.

Key Points
  • Defense Secretary Hegseth gave Anthropic a Friday deadline to agree to unrestricted military use of its Claude AI or face removal from the Pentagon supply chain.
  • The Pentagon threatened to invoke the Cold War-era Defense Production Act, which would legally compel Anthropic's cooperation for national defense.
  • Anthropic's refusal centers on ethical concerns over use in classified missions, domestic surveillance, and lethal autonomous operations, while the Pentagon is already courting rivals like OpenAI and xAI.

Why It Matters

The standoff will set a precedent for whether AI companies can maintain ethical constraints on military use of their models in the face of government demands amid a global AI arms race.