Employees across OpenAI and Google support Anthropic’s lawsuit against the Pentagon
Jeff Dean and nearly 40 AI engineers from OpenAI and Google file a brief supporting Anthropic's lawsuit against its DoD supply chain risk designation.
Anthropic has filed a federal lawsuit against the U.S. Department of Defense challenging its recent designation as a "supply chain risk," a label typically applied to foreign entities. The designation came after Anthropic refused to allow two specific military applications of its Claude AI: domestic mass surveillance programs and fully autonomous lethal weapons systems. The label has effectively blacklisted Anthropic from Pentagon contracts and created compliance headaches for defense contractors who currently use Claude in their workflows.
In a significant show of industry solidarity, nearly 40 AI researchers and engineers from rivals OpenAI and Google filed an amicus brief supporting Anthropic's lawsuit. The group, which includes Google's Chief Scientist and Gemini lead Jeff Dean, argues the designation is improper retaliation that harms the public interest. They contend that AI-powered mass surveillance could create a unified, real-time tracking apparatus by connecting currently siloed data streams such as facial recognition, location history, and financial transactions. On autonomous weapons, the brief warns that AI systems cannot be trusted to identify targets reliably or to exercise the contextual judgment required to minimize collateral damage, especially in novel combat scenarios.
- Anthropic designated a "supply chain risk" by the DoD for refusing autonomous weapons and mass surveillance use cases
- Nearly 40 employees from OpenAI and Google, including Jeff Dean, file a legal brief supporting Anthropic's lawsuit
- The designation forces defense contractors to remove Claude AI systems to maintain their Pentagon contracts
Why It Matters
The case sets a precedent for how AI ethics commitments interact with military contracting and could fragment the defense tech supply chain.