The Pentagon is planning for AI companies to train on classified data, defense official says
Defense officials reveal plans for AI companies to train models directly on sensitive battlefield intelligence.
The Pentagon is actively planning to establish secure, accredited data centers where generative AI companies can train specialized versions of their models directly on classified military intelligence. According to a defense official speaking to MIT Technology Review, the initiative would let firms such as OpenAI and Elon Musk's xAI, both of which already hold DoD agreements, build more accurate and effective models for tasks such as analyzing targets in Iran and processing battlefield assessments. This represents a significant shift from current practice, in which models like Anthropic's Claude Gov only answer questions *about* classified data, to models that actually learn *from* it.
However, this new training paradigm introduces unique security challenges. Aalok Mehta of the Wadhwani AI Center warns that the primary risk is sensitive information, such as the name of an operative, being inadvertently 'resurfaced' by the model to military personnel who lack the proper clearance. Mitigating this intra-departmental leakage is more complex than preventing external breaches. The Pentagon is proceeding cautiously: it is first evaluating model performance on non-classified data such as commercial satellite imagery, and it plans to maintain strict control, with any AI company personnel who access the data holding high-level clearances and such access expected to be rare.
- The Pentagon is creating secure environments for AI firms like OpenAI and xAI to train models on classified data, moving beyond just querying it.
- This training aims to boost accuracy for military tasks but risks leaking sensitive info (e.g., operative names) between departments with different clearance levels.
- The initiative is part of a broader push for an 'AI-first' warfighting force, with AI already used for target ranking and administrative drafting.
Why It Matters
This marks a major escalation in military AI, potentially creating powerful battlefield tools but introducing unprecedented risks of internal intelligence leaks.