Trump Moves to Ban Anthropic From the US Government
President orders federal agencies to cease using Claude AI after Pentagon dispute over autonomous weapons.
In a dramatic escalation of tensions between Silicon Valley and the Pentagon, President Trump has ordered all federal agencies to 'immediately cease' use of Anthropic's AI tools. The directive follows weeks of public clashes over military applications of artificial intelligence, centered on Anthropic's refusal to amend a $200 million deal signed last July so that it would permit 'all lawful use' of the company's technology. Anthropic, which created custom Claude Gov models for classified systems, objected to removing restrictions, arguing that doing so could allow AI to control lethal autonomous weapons or conduct mass surveillance of citizens. Defense Secretary Pete Hegseth reinforced the move by designating Anthropic a 'supply chain risk,' a status typically reserved for foreign threats, effectively barring the US military and its contractors from future collaboration with the company.
The conflict reached a boiling point after reports surfaced that US military leaders had used Claude to help plan an operation to capture Venezuela's President Nicolás Maduro. While Anthropic denies raising concerns, the incident highlighted the ethical divide between AI developers and military end-users. The ban tests the limits of Silicon Valley's recent shift toward defense work: several hundred employees from OpenAI and Google have signed an open letter supporting Anthropic's stance. In a memo, OpenAI CEO Sam Altman aligned with Anthropic, calling fully autonomous weapons a 'red line,' though he indicated his company would seek a compromise to continue its Pentagon work. The order's six-month phase-out period leaves a window for negotiation, but the public spat underscores a fundamental clash between corporate AI ethics and national security imperatives.
- Trump orders federal ban on Anthropic's AI tools after Pentagon dispute over autonomous weapons use.
- Defense Secretary designates Anthropic a 'supply chain risk,' blocking military and contractor access to Claude Gov models.
- Conflict stems from Anthropic's refusal to remove $200M deal restrictions on lethal autonomous weapons and mass surveillance.
Why It Matters
The ban sets a precedent for AI ethics in defense contracts, potentially chilling Silicon Valley's military partnerships and innovation.