Media & Culture

Trump’s Potential New AI Executive Order May Take a Swipe at Anthropic

New draft order could cement Anthropic's pariah status with the Pentagon.

Deep Dive

Reports indicate the Trump administration is drafting an executive order that would establish an AI “working group” composed of government officials and tech industry members to review unreleased AI models. The draft reportedly includes a provision barring companies from “interfering” with government uses of AI. That language is widely seen as a response to the Pentagon's blacklisting of Anthropic, which refused to remove safety guardrails on its models that prevented mass surveillance and full weapon automation. The working group would resemble one forming in the U.K., also spurred by security vulnerabilities exposed by Anthropic's Claude Mythos Preview model.

Microsoft, xAI, and Google have already signed deals with a Biden-era Commerce Department arm, CAISI, to inspect new models. Anthropic previously signed a similar agreement but has since been designated a supply chain risk by the Pentagon. The exact impact of the order remains unclear—it could reinforce Anthropic's exclusion, offer a resolution path, or be purely symbolic. The White House has dismissed pre-announcement discussions as “speculation.”

Key Points
  • Trump’s order may create an AI working group to review unreleased models, similar to a U.K. initiative.
  • A provision prohibiting “interference” with government AI use could target Anthropic's refusal to lift guardrails on surveillance and weapon automation.
  • Anthropic was blacklisted by the Pentagon and designated a supply chain risk, forcing contractors to cut ties.

Why It Matters

AI companies must now weigh safety guardrails against government contracts, and how this standoff resolves could set a precedent for all future military uses of AI.