The U.S. used Anthropic AI tools during airstrikes on Iran
Despite a presidential order to cease cooperation, CENTCOM used Claude for targeting and simulations.
In a stark contradiction of official policy, the United States used Anthropic's Claude AI to support military strikes against Iran, mere hours after President Trump publicly ordered all federal agencies to cease cooperation with the AI company. According to sources familiar with the operations, command centers including U.S. Central Command (CENTCOM) continued to employ Claude for critical functions such as intelligence analysis, target identification, and combat simulations. The incident underscores the entrenched, operational role of advanced AI in modern warfare, even amid high-level political and security disputes between the government and its technology providers.
The revelation highlights a months-long, escalating conflict between Anthropic and the Pentagon over the military's use of its AI models. The Department of Defense had previously determined Anthropic posed a supply chain and security risk, culminating in Friday's presidential directive. The immediate, covert continuation of Claude's use for a major kinetic operation demonstrates both the tool's perceived tactical value and the complex reality of disentangling cutting-edge AI from national security infrastructure. It sets a precedent for the friction between AI ethics, corporate policy, and military necessity, raising urgent questions about oversight and control in an era where AI agents directly inform lethal decisions.
- CENTCOM used Anthropic's Claude for target identification and combat simulations in the Iran strikes.
- Operations continued hours after a presidential order banned agency cooperation with Anthropic.
- The DOD had labeled Anthropic a security threat before the strikes took place.
Why It Matters
The episode shows how deeply AI is embedded in lethal military operations, creating ethics and control challenges for both governments and AI firms.