Defense Secretary Pete Hegseth designates Anthropic a supply-chain risk
Defense Secretary Pete Hegseth blacklists AI firm Anthropic after it refused to allow autonomous lethal weapons use.
In a dramatic escalation of tensions between the U.S. military and the AI industry, Defense Secretary Pete Hegseth has designated Anthropic—the company behind the Claude AI models—as a 'supply-chain risk.' This unprecedented move, following a direct mandate from President Donald Trump, came after Anthropic refused to comply with a Pentagon ultimatum to allow its Claude AI to be used for 'all legal purposes,' including the development of autonomous lethal weapons systems and mass surveillance programs without human oversight. The designation, historically reserved for foreign adversaries, immediately threatens major tech contractors like Palantir and Amazon Web Services (AWS), which integrate Claude into their Pentagon work, giving them six months to divest from Anthropic products.
Anthropic has vowed to fight the designation in court, arguing that Secretary Hegseth lacks the statutory authority to broadly blacklist companies merely for having a commercial relationship with Anthropic. The company contends the risk designation can only legally apply to the specific use of Claude within Department of Defense contracts, not to a contractor's other business activities. This clash represents a fundamental conflict between the Pentagon's drive for unrestricted AI capabilities in warfare and the ethical guardrails—rooted in Anthropic's 'effective altruism' principles—that leading AI labs are attempting to establish. The outcome will set a critical precedent for government authority over commercial AI and determine whether companies can legally refuse to build certain military applications.
- Defense Secretary Hegseth issued the designation after Anthropic refused a Friday deadline to permit autonomous weapons use.
- The move could disrupt major defense contractors like Palantir and AWS, which have six months to stop using Claude AI.
- Anthropic calls the action an unlawful overreach and plans a legal challenge, arguing the designation can lawfully cover only Claude's use within specific DoD contracts, not contractors' other business.
Why It Matters
The case will set a major precedent for the government's power to compel AI companies to build military technology, testing the limits of corporate ethical policies.