The US military is still using Claude — but defense-tech clients are fleeing
Claude models power U.S. targeting decisions in Iran conflict as defense contractors rapidly replace them.
Anthropic's Claude AI models are currently being used by the U.S. military for targeting decisions in the ongoing conflict with Iran, even as the company faces a mass exodus of defense industry clients. The situation stems from conflicting government directives: President Trump ordered civilian agencies to discontinue Anthropic products, while the Department of Defense was given a six-month wind-down period. This timeline was upended when the U.S. and Israel launched a surprise attack on Tehran, leaving Claude integrated into operational systems. According to a Washington Post report, Claude works in conjunction with Palantir's Maven system to suggest hundreds of targets, provide precise coordinates, and prioritize them by importance for Pentagon planners.
Despite this active battlefield use, Anthropic's position in defense tech is collapsing. Major contractors like Lockheed Martin are already swapping out Claude models for alternatives, and venture firm J2 Ventures reports that 10 of its portfolio companies are actively working to replace Claude for defense use cases. The central unresolved issue is whether Defense Secretary Pete Hegseth will formally designate Anthropic as a supply-chain risk, a move that would trigger legal battles. For now, the result is a stark paradox: one of the world's leading AI models is being systematically removed from the defense industrial base while simultaneously providing critical intelligence support in a live war zone. The case highlights the complex and often contradictory pressures facing AI companies operating in the national security sector.
- Claude AI is integrated with Palantir's Maven system for 'real-time targeting and target prioritization' in U.S. strikes on Iran.
- Major defense contractors like Lockheed Martin are actively replacing Claude with competitor models following political directives.
- The situation creates a legal gray area as no official 'supply-chain risk' designation has been made, allowing continued military use.
Why It Matters
Highlights the complex, high-stakes conflict between AI ethics, government policy, and real-world military necessity for tech companies.