Inside Anthropic’s existential negotiations with the Pentagon
The Pentagon is threatening to label Anthropic a national security risk over the company's refusal to let its AI be used for autonomous weapons.
Anthropic, the $380 billion AI startup behind Claude, is locked in an unprecedented public standoff with the Pentagon over the ethical limits on its AI. The core conflict centers on a proposed "any lawful use" clause in a $200 million military contract, which Anthropic refuses to accept because it would permit the U.S. military to deploy Claude for mass surveillance and lethal autonomous weapons systems.

Pentagon CTO Emil Michael, a former Uber executive, is driving a hardline negotiation, threatening to officially designate Anthropic a "supply chain risk," a classification typically reserved for foreign threats or cyber warfare. The move is highly unusual: the Pentagon normally keeps such designations confidential for security reasons, and the threat appears explicitly punitive, aimed at coercing Anthropic into dropping its ethical guardrails.

The financial stakes are immense. Beyond the direct $200 million contract, a "supply chain risk" label would force major defense contractors and tech partners such as AWS, Palantir, and Anduril, which use Claude because it was the first AI cleared for classified information, to sever ties. That would devastate Anthropic's defense-sector revenue and could trigger a broader commercial backlash.

The Pentagon's aggressive tactics, including public pressure and labeling Anthropic "woke" without citing any security flaw, highlight a fundamental clash between military procurement norms and AI ethics enforcement. The outcome will set a critical precedent for whether AI labs can maintain independent acceptable-use policies when contracting with the U.S. government.
- Pentagon demands an "any lawful use" clause in the $200M contract that would permit autonomous weapons use.
- Threat to label Anthropic a "supply chain risk" could collapse its defense business with AWS and Palantir.
- Unprecedented public pressure targets Anthropic's ethics policy, not security, setting a major AI governance precedent.
Why It Matters
Sets a precedent for whether AI companies can enforce ethical guardrails against military demands, with Anthropic's $380B business and the broader AI industry in the balance.