Palantir Demos Show How the Military Could Use AI Chatbots to Generate War Plans
Leaked demos show AI chatbots analyzing satellite intel and nominating targets for airstrikes.
Leaked Palantir software demos reveal how the US military is likely using AI chatbots such as Anthropic's Claude within its Project Maven platform to generate war plans and targeting recommendations. The Maven Smart System, managed by the National Geospatial-Intelligence Agency (NGA), applies computer vision algorithms to satellite imagery to automatically detect objects like people and vehicles, which it classifies as potential "enemy systems." A key feature called the "AI Asset Tasking Recommender" then proposes which specific bombers and munitions should be assigned to each nominated target, producing what the military calls "target intelligence data" messages.
This integration occurs despite an ongoing legal dispute between Anthropic and the Pentagon. In February, Anthropic refused to grant the government unconditional access to Claude, insisting the model not be used for mass surveillance or fully autonomous weapons. The Pentagon responded by labeling Anthropic a "supply-chain risk," prompting the startup to file lawsuits alleging illegal retaliation. Palantir announced its Claude integration in November 2024 as a way to help analysts find "data-driven insights," but neither company has specified which Pentagon systems use it, even as reports point to its role in overseas operations, including the war in Iran and the capture of Venezuelan president Nicolás Maduro.
- Palantir's Maven platform uses AI (reportedly Claude) for computer vision on satellite imagery to identify enemy systems.
- The system's "AI Asset Tasking Recommender" nominates targets and suggests specific bombers and munitions for strikes.
- This happens amid a legal fight in which Anthropic sued the Pentagon after being labeled a "supply-chain risk" for refusing to allow unrestricted use of Claude in weapons systems.
Why It Matters
This is a real-world, high-stakes deployment of generative AI for military decision-making, and it tests the ethical limits that AI developers like Anthropic have tried to place on how their models are used.