Enterprise & Industry

The Download: how AI is used for military targeting, and the Pentagon’s war on Claude

The US military may use ChatGPT for target ranking while blocking Claude over its built-in ethical guardrails.

Deep Dive

A senior US Defense Department official has disclosed that the military is actively developing systems to integrate generative AI into critical combat operations. Specifically, the Pentagon is fielding classified AI systems that could analyze lists of potential targets, rank them by priority, and recommend which to strike first. Models such as OpenAI's ChatGPT and xAI's Grok are being considered for these high-stakes decision-support roles, though human operators would retain final authority to review all AI-generated recommendations.

This move toward operational AI coincides with the stark rejection of another model because of its built-in ethics. The Pentagon's Chief Technology Officer has publicly stated that Anthropic's Claude AI would 'pollute' the defense supply chain, blaming a 'policy preference' baked directly into its constitution. The implication is that the model's core ethical safeguards, designed to refuse harmful tasks, are seen as incompatible with military applications. The contrasting approaches reveal a fundamental tension within the defense sector: embracing AI for tactical advantage while navigating the ethical boundaries programmed into leading commercial models.

The news arrives amid a broader technological shift in modern warfare. Ukraine is offering its vast battlefield data to allies for training drones and other AI systems, while startups in Eastern Europe are rapidly repurposing civilian tech like electric scooters for military reconnaissance. This ecosystem demonstrates how the war is accelerating the fusion of commercial innovation and defense needs, setting a precedent for how AI and dual-use technologies will shape future conflicts.

Key Points
  • The US military is developing classified AI systems to rank and recommend targets for strikes, with models like ChatGPT and Grok under consideration.
  • The Pentagon's CTO rejected Anthropic's Claude, claiming its embedded ethical policies would 'pollute' the defense supply chain.
  • Ukraine is sharing battlefield data with allies to train AI for drones, fueling a tech sector shift toward dual-use military applications.

Why It Matters

This sets a precedent for how militaries will adopt—and restrict—commercial AI, forcing a reckoning between operational utility and embedded ethics.