Elizabeth Warren calls Pentagon’s decision to bar Anthropic ‘retaliation’
Warren argues the DoD is 'strong-arming' companies to build surveillance tools and autonomous weapons.
Senator Elizabeth Warren has entered the high-stakes legal and ethical battle between AI lab Anthropic and the U.S. Department of Defense, accusing the Pentagon of 'retaliation.' In a letter to Defense Secretary Pete Hegseth, Warren argued that the DoD's decision to designate Anthropic a 'supply-chain risk' was punitive, a response to the company's refusal to allow its AI systems to be used for mass surveillance of Americans or in targeting decisions by lethal autonomous weapons that lack human oversight. Warren wrote that she was 'particularly concerned that the DoD is trying to strong-arm American companies' into providing tools for these purposes.
The designation, typically applied to foreign adversaries, requires any Pentagon contractor to certify that it does not use Anthropic's technology, effectively blacklisting the AI lab from the vast government contracting ecosystem. The conflict has drawn significant industry support for Anthropic, with tech companies including OpenAI, Google, and Microsoft filing legal briefs against the DoD's move. The letter arrives ahead of a key court hearing at which a judge will rule on a preliminary injunction for Anthropic, which is suing the DoD over alleged infringement of its First Amendment rights. The Pentagon maintains its decision was a national security necessity, not punishment, arguing that Anthropic's restrictions were a business choice, not protected speech.
- Senator Elizabeth Warren accuses the Pentagon of 'retaliation' for designating Anthropic a 'supply-chain risk' after it refused certain military AI uses.
- The core dispute is Anthropic's refusal, on ethical grounds, to allow its AI to be used for mass surveillance or for autonomous weapon targeting without human safeguards.
- Major tech firms (OpenAI, Google, Microsoft) and rights groups have filed legal briefs supporting Anthropic's lawsuit against the DoD.
Why It Matters
This clash sets a critical precedent for how much control AI companies retain over the ethical use of their technology by the government.