We don’t have to have unsupervised killer robots
The Pentagon threatens to cut Anthropic from contracts unless it removes AI safety guardrails for military use.
The Pentagon is pressuring AI company Anthropic to remove safety guardrails from its technology, threatening to designate it a 'supply chain risk' and cut off lucrative contracts if it refuses. The demand centers on allowing the US military to use Anthropic's AI for mass surveillance and fully autonomous lethal weapons without human oversight. While OpenAI and xAI have reportedly already agreed to similar terms, Anthropic CEO Dario Amodei has publicly refused, stating the company 'cannot in good conscience accede' because the technology is not reliable enough today. The standoff has ignited a fierce debate across the tech industry, with organized groups representing 700,000 workers at Amazon, Google, and Microsoft signing a letter calling on their employers to reject the Pentagon's terms.
The conflict marks a significant shift in the AI industry's relationship with military applications. In recent years, major players like OpenAI have removed bans on 'military and warfare' use cases from their terms of service in order to pursue government contracts. Anthropic itself recently amended its responsible scaling policy, dropping a long-held safety pledge in order to stay competitive. This has left many tech employees feeling betrayed as their work shifts toward applications they see as harmful. The current pressure campaign echoes past worker-led successes, such as when Google employees forced an end to the 'Project Maven' Pentagon partnership in 2018, but it unfolds in a climate insiders now describe as one of 'fear' and reduced internal dissent.
- The Pentagon is threatening to cut Anthropic from contracts unless it allows its AI to be used for unsupervised lethal weapons and mass surveillance.
- Anthropic CEO Dario Amodei has refused, citing the technology's current unreliability, while OpenAI and xAI have reportedly agreed to similar terms.
- Organized groups representing 700,000 tech workers are demanding that their employers reject the Pentagon's terms, highlighting a major ethical rift in the industry.
Why It Matters
This conflict will set a precedent for whether AI companies enforce their own ethical red lines or cede control of powerful models to military applications.