Employees at Google and OpenAI support Anthropic’s Pentagon stand in open letter
Over 360 employees sign open letter as Pentagon threatens to invoke Defense Production Act.
Anthropic is locked in a high-stakes standoff with the U.S. Department of Defense, which is demanding unrestricted access to the company's Claude AI technology. The military has set a deadline for compliance, threatening to either label Anthropic a national security supply chain risk or invoke the Defense Production Act (DPA) to compel cooperation. In response, over 360 employees from rival AI giants Google and OpenAI have signed an open letter calling on their own leadership to publicly support Anthropic's ethical red lines against using AI for mass domestic surveillance and fully autonomous weaponry. The letter argues the Pentagon is attempting to divide the AI industry by pressuring companies individually.
Informal statements from OpenAI CEO Sam Altman and Google DeepMind's Jeff Dean suggest sympathy for Anthropic's position, with both criticizing mass surveillance. The conflict highlights a growing fracture between the AI industry's stated ethical principles and government demands for military and intelligence applications. While companies like Google and OpenAI already permit limited, unclassified military use of models like Gemini and ChatGPT, Anthropic's firm stance, together with the employee-led solidarity movement, marks a rare instance of collective resistance across rival labs. The outcome could determine whether AI companies can maintain independent ethical guardrails or will be compelled to become arms contractors.
- Over 300 Google and 60 OpenAI employees signed an open letter supporting Anthropic's ethical stand.
- The Pentagon threatened to invoke the Defense Production Act or label Anthropic a "supply chain risk".
- Anthropic CEO Dario Amodei refuses to allow Claude to be used for mass surveillance or autonomous weapons.
Why It Matters
The standoff sets a precedent for whether AI firms can enforce ethical boundaries against government demands for military use.