The Washington Post: Claude Used To Target 1,000 Strikes In Iran
The Washington Post reports Claude AI suggested targets and provided coordinates for a 24-hour strike campaign.
A Washington Post investigation reveals that Anthropic's Claude AI was deployed by the U.S. military in a major offensive campaign against Iran. According to the report, Claude was integrated with the Pentagon's Project Maven 'Smart System' to analyze intelligence and suggest potential targets. The AI reportedly provided precise location coordinates, helping to execute more than 1,000 airstrikes within a single 24-hour period. This represents the most advanced operational use of AI in warfare by the U.S. military to date, moving beyond surveillance into active targeting.
The deployment creates a stark ethical contradiction for Anthropic, a company co-founded by former OpenAI executives who emphasize AI safety. Anthropic enforces strict content policies, famously blocking users from having erotic conversations with Claude, yet its technology is now linked to lethal military operations. The report notes an ongoing investigation into a strike on a school that killed over 150 children, highlighting the grave risks of algorithmic warfare. Over 1,000 Iranian casualties have been reported from the campaign, intensifying scrutiny of the company's partnership with the Department of Defense, a partnership CEO Dario Amodei has previously cited as a milestone.
- Claude AI was integrated with the Pentagon's Project Maven system to suggest targets and provide coordinates for airstrikes.
- The system enabled the coordination of over 1,000 strikes in Iran within a 24-hour period, a new scale for military AI.
- The deployment clashes with Anthropic's public AI safety ethos, raising ethical alarms as civilian casualties, including a strike on a school that reportedly killed more than 150 children, are investigated.
Why It Matters
This marks a pivotal, real-world shift to algorithmic warfare, forcing a reckoning over AI ethics and the role of safety-focused tech companies in military operations.