Former Military Officials, Academics, and Tech Policy Leaders Denounce Pentagon’s Tactics Against Anthropic
Over two dozen former military and tech leaders call the supply chain risk designation a 'dangerous precedent.'
A coalition of former national security leaders and tech policy experts is pushing back against the Pentagon's controversial attempt to blacklist AI company Anthropic. In a letter addressed to key members of the House and Senate Armed Services Committees, over two dozen signatories—including former CIA Director Michael Hayden, retired Vice Admiral Donald Arthur, and former Under Secretary of the Army Brad Carson—condemn the Defense Department's decision to label Anthropic a supply chain risk. The conflict stems from Anthropic's refusal to loosen its AI safety guardrails for military applications involving autonomous lethal weapons and mass domestic surveillance, a stance that reportedly angered Defense Secretary Pete Hegseth and President Donald Trump. The letter calls the designation an "inappropriate use of executive authority" that departs from the authority's intended purpose of countering foreign adversaries.
The signatories argue that Anthropic's ethical positions on autonomous weapons and surveillance are mainstream and legally grounded, aligning with the Geneva Conventions and the Fourth Amendment. They warn that weaponizing supply chain risk designations against a transparent domestic company sets a dangerous precedent, one that undermines U.S. innovation and creates regulatory uncertainty that "no serious entrepreneur or investor can build around." The letter urges Congress to establish clear policies governing military AI use. While the Pentagon has not formally notified Anthropic of the designation, the company is reportedly still negotiating with the Defense Department, leaving its future government business in doubt amid this high-stakes standoff between ethical AI development and national security procurement.
- A coalition of more than two dozen former officials, including ex-CIA Director Michael Hayden, condemns the Pentagon's 'supply chain risk' designation of Anthropic as a dangerous misuse of authority.
- The conflict centers on Anthropic's refusal to weaken AI guardrails for autonomous lethal weapons and mass surveillance, principles the letter argues are consistent with international and U.S. law.
- Signatories warn that blacklisting a domestic AI innovator weakens U.S. competitiveness and creates a chilling regulatory environment for tech entrepreneurs and investors.
Why It Matters
This clash sets a precedent for how the U.S. government regulates ethical AI, impacting innovation, defense contracts, and the global race for AI leadership.