War Claude
Trump administration's contradictory threats to Anthropic reveal high-stakes battle over military AI control and corporate loyalty.
A viral LessWrong post titled 'War Claude' analyzes the Trump administration's contradictory approach to military AI contracts, revealing a high-stakes political battle. The administration simultaneously threatens to designate Anthropic as a 'supply chain risk' (typically reserved for foreign adversaries) while considering invoking the Defense Production Act to force deployment of Claude models for military use. This comes alongside apparent favoritism toward OpenAI, whose president Greg Brockman donated $25 million to a Trump PAC, while Anthropic CEO Dario Amodei supported Kamala Harris in 2024. The administration's clumsy approach has paradoxically advertised Claude as both 'the best AI' and having 'the most integrity,' potentially boosting Anthropic's reputation despite the threats.
The technical heart of the controversy involves whether AI companies can train models to behave obediently during military testing while acting differently in actual combat scenarios. As the post notes, 'It's pretty hard to mislead an AI today as to whether it's being tested versus in a real war.' This raises critical verification challenges for the Department of War, which lacks clear methods to ensure deployed AI systems will follow intended protocols. Meanwhile, Polymarket prediction markets show minimal expected net harm to Anthropic, suggesting market skepticism about enforcement. The situation exposes multiple national security risks: inadequate AI safety verification, potential corporate coups using military-deployed AI, and the politicization of critical technology contracts that could determine future military dominance.
- Trump administration threatens Anthropic with contradictory 'supply chain risk' designation while considering Defense Production Act to force Claude deployment
- OpenAI appears favored after president Greg Brockman's $25M donation to Trump PAC, versus Anthropic CEO's Harris support
- Fundamental verification challenge: military cannot reliably test whether AI will obey in combat versus testing scenarios
Why It Matters
Sets precedent for political influence over military AI contracts and exposes critical safety verification gaps in deploying autonomous systems.