President Trump bans Anthropic from use in government systems

Executive order prohibits federal agencies from using Claude 3.5 and other Anthropic models.

Deep Dive

The Trump administration has issued a sweeping executive order prohibiting all federal agencies from using Anthropic's AI models, including the recently launched Claude 3.5 Sonnet. The order, effective immediately, bans both current and future deployments of Claude AI across government systems, citing unspecified national security concerns. Agencies have been given 90 days to identify and remove existing Anthropic integrations from their technology stacks. This represents the most significant government restriction against a major US AI provider to date, following previous bans targeting Chinese companies like Huawei and TikTok.

The ban specifically names Anthropic's Claude 3.5 family of models, which have gained popularity in government IT circles for their strong security features and constitutional AI approach. The order could impact dozens of pilot programs across agencies including the Department of Defense, Veterans Affairs, and various research institutions. While the administration hasn't provided detailed technical justification, sources suggest concerns revolve around data sovereignty and the potential for foreign influence through Anthropic's investors. The move creates immediate uncertainty for government contractors who had built solutions on Claude's API and may accelerate adoption of competing models from OpenAI and Google in the public sector.

Key Points
  • Executive order bans all Anthropic models including Claude 3.5 from federal systems
  • Agencies have 90 days to remove existing deployments; the order cites national security concerns
  • First major US AI provider to face a government-wide ban, following earlier restrictions on Chinese companies

Why It Matters

The order sets a precedent for government AI procurement and could reshape the $3B federal AI market, favoring OpenAI and Google.