The Pentagon Claims That Anthropic’s ‘Soul’ Creates a Supply-Chain Risk. That Makes No Sense
Defense officials claim Claude's ethical 'constitution' poses a national security threat, sparking a legal battle.
The U.S. Department of Defense has taken the unprecedented step of labeling AI company Anthropic a supply chain risk, a designation typically reserved for foreign hardware manufacturers like Huawei. The conflict centers on Anthropic's refusal to modify Claude's ethical 'constitution'—a guiding document that prevents the AI from being used in mass domestic surveillance or fully autonomous weapons systems. Pentagon official Emil Michael argued that Claude's baked-in 'soul' and policy preferences could 'pollute the supply chain' and lead to ineffective military tools, creating what he called a unique national security threat.
Despite this designation, the Pentagon continues using Claude during a six-month phase-out period, a contradiction highlighted in a CNBC interview. Anthropic has filed a lawsuit challenging the designation, the first ever applied to a U.S. company. The situation reveals a fundamental clash between the military's desire for unrestricted AI capabilities and Anthropic's commitment to ethical guardrails, with significant implications for how AI companies can maintain their principles while working with government agencies.
- The Pentagon gave Anthropic an ultimatum in February: lift Claude's guardrails against surveillance and autonomous weapons or face the supply chain risk designation
- Anthropic refused and was labeled a supply chain risk—the first U.S. company to receive this designation typically used for foreign hardware
- Despite the 'risk' claim, the military continues using Claude with a six-month phase-out period while Anthropic sues the government
Why It Matters
Sets a precedent for how AI ethics collide with government contracts, potentially forcing companies to choose between their principles and their partnerships.