Media & Culture

Trump Says He Fired Anthropic ‘Like Dogs’

Former president takes credit as Anthropic faces Pentagon designation that could cut off government contracts.

Deep Dive

Former President Donald Trump has inserted himself into a high-stakes conflict between the AI firm Anthropic and the Pentagon, claiming credit for decisive action with the remark that he "fired Anthropic like dogs." The comment comes as the Department of Defense moves to designate Anthropic a "supply chain risk," a severe penalty triggered by the company's refusal to remove ethical guardrails from its Claude AI model that prohibit the model's use in mass domestic surveillance and fully autonomous weapons. Defense Secretary Pete Hegseth announced the designation, which would effectively bar any business working with the U.S. government from also working with Anthropic, a move described as "corporate murder." Anthropic has threatened to sue, but the formal notification remains in limbo as CEO Dario Amodei holds tense talks with Pentagon officials.

Despite its public ethical stance, Anthropic's technology is reportedly already embedded in military operations. According to the Washington Post and the Wall Street Journal, Claude is being used within Palantir's Maven Smart System to suggest, prioritize, and evaluate targets for U.S. airstrikes in Iran, compressing weeks of planning into real-time operations. This has raised urgent questions about accountability, particularly after a strike on a school in Minab, Iran, killed 168 people, mostly children. The incident has sparked speculation that outdated data or AI-driven target selection may have been a factor. The situation presents a profound dilemma for Anthropic: compromise its stated ethics to ensure corporate survival, or be cut off from the world's largest military.

Key Points
  • Trump claimed he 'fired Anthropic like dogs' amid a Pentagon move to designate the company a 'supply chain risk' for refusing to remove AI ethics guardrails.
  • The designation, announced by Defense Secretary Pete Hegseth, could block Anthropic from all U.S. government contracts, a penalty likened to 'corporate murder.'
  • Despite its ethical stance, Claude is reportedly being used via Palantir's Maven system for military targeting in Iran, including a strike on a school that killed 168.

Why It Matters

This clash sets a precedent for government coercion of AI ethics and highlights the real-world, lethal consequences of deploying AI in military targeting.