Anthropic sues US over blacklisting; White House calls firm "radical left, woke"

The AI firm is fighting a 'supply-chain risk' designation after refusing to let Claude be used for autonomous warfare.

Deep Dive

Anthropic has filed a lawsuit against the Trump administration, challenging the government's decision to designate the AI company a 'Supply-Chain Risk to National Security' and blacklist its technology from all federal agencies and contractors. The core of Anthropic's legal argument is that this punitive action was retaliation for the company exercising its First Amendment rights by publicly refusing to allow its Claude AI models to be used for autonomous lethal warfare and mass surveillance of American citizens. The lawsuit, filed in US District Court for the Northern District of California, names Secretary of War Pete Hegseth, the Department of War, and other federal agencies, alleging that the designation violates due process and that the executive branch overstepped its congressionally granted authority.

The White House responded by calling Anthropic a 'radical left, woke company' and stated President Trump would not allow it to 'dictate how the greatest and most powerful military in the world operates.' The case has drawn significant support from civil liberties and tech advocacy groups. The Foundation for Individual Rights and Expression, the Electronic Frontier Foundation, and the Cato Institute filed a brief arguing the government's actions are 'transparently retaliatory and coercive' and will silence future speech from others who fear government retribution.

Further backing came from a coalition of technical employees at Google (an investor in Anthropic) and OpenAI. In their supporting brief, these AI industry professionals warned that 'mass domestic surveillance powered by AI poses profound risks to democratic governance.' Anthropic's legal challenge seeks a preliminary injunction to halt the blacklisting and judicial review of the designation, positioning the case as a landmark conflict between corporate ethical governance in AI, national security directives, and constitutional free speech protections.

Key Points
  • Anthropic sued after being designated a 'Supply-Chain Risk' for refusing military use of Claude AI for autonomous warfare and mass surveillance.
  • The White House called Anthropic a 'radical left, woke company,' framing the conflict as about military autonomy versus 'ideological whims.'
  • Civil liberties groups and employees from Google and OpenAI filed supporting briefs, warning of retaliatory government coercion and risks to democracy.

Why It Matters

This case could set a precedent for whether AI companies can impose ethical limits on how their technology is used without facing severe government retaliation.