Enterprise & Industry

Elon Musk’s xAI Signs Deal to Bring Grok Into Classified Military Systems

Elon Musk's AI firm embraces the 'all lawful use' standard for weapons and intelligence work.

Deep Dive

Elon Musk's xAI has secured a pivotal agreement to deploy its Grok AI model within the US military's top-secret classified networks, marking a significant shift in the Pentagon's AI partnerships. The deal, confirmed by a Defense official to Axios, allows Grok to be used in high-stakes environments including weapons development and intelligence analysis. xAI's willingness to accept the Department of Defense's 'all lawful use' standard without reservation paved the way for the contract, while other AI firms hesitated. This move directly displaces Anthropic, whose Claude model was the first authorized on classified networks but is now being removed after a six-month phase-out. The relationship fractured over Anthropic's refusal to cross two specific ethical 'red lines', enabling mass domestic surveillance and creating fully autonomous weapons systems, a stance the Trump administration labeled 'corporate virtue-signaling'.

In contrast, OpenAI has managed to secure its own Pentagon agreement for classified work by implementing technical guardrails, such as a 'cloud-only' deployment, to prevent use in autonomous lethal weapons. The transition away from Claude, which was deeply embedded in operations like the 2026 raid in Venezuela, presents a major logistical challenge for the Pentagon, with officials admitting the offloading process will be 'very difficult.' Performance concerns also linger, as The New York Times reports the xAI model is 'not considered as advanced or as reliable as Anthropic's.' As the clock ticks on the six-month transition, the military is also reportedly close to a deal with Google for its Gemini model, scrambling to ensure its AI strategy doesn't falter during this great vendor swap.

Key Points
  • xAI signed a deal allowing Grok AI to be used in classified military systems for weapons development and intelligence, accepting the Pentagon's 'all lawful use' standard.
  • Anthropic is being phased out as a 'supply chain risk' after refusing to allow Claude AI for mass domestic surveillance or fully autonomous weapons, triggering a six-month removal from all federal agencies.
  • OpenAI secured a separate Pentagon deal using technical 'cloud-only' architecture to guard against unethical uses, while the military also pursues Google's Gemini amid a difficult transition from Claude.

Why It Matters

This reshapes the military AI industrial complex, prioritizing vendor compliance over ethical constraints for critical national security applications.