Anthropic Denies It Could Sabotage AI Tools During War
The AI firm says it cannot remotely disable or alter its Claude model during military operations.
In a federal court filing, Anthropic has formally denied possessing any capability to remotely sabotage its Claude generative AI model if deployed by the US military. Thiyagu Ramasamy, Anthropic’s head of public sector, wrote that the company "does not maintain any back door or remote 'kill switch'" and lacks the access required to disable the technology or alter its behavior during ongoing operations. This statement directly counters the Pentagon's core argument for designating Anthropic a "supply-chain risk," a move that has blocked the Department of Defense and other federal agencies from using Claude software. The Pentagon has expressed concern that Anthropic could unilaterally disrupt critical military systems, which use Claude for tasks like data analysis and planning, during pivotal moments for national defense.
Anthropic is now fighting the ban with two lawsuits, one challenging its constitutionality and one seeking an emergency reversal. The company argues its technology does not function in a way that allows for such interference, noting that any updates would require approval from both the government and its cloud provider, Amazon Web Services. In a separate filing, Head of Policy Sarah Heck stated that Anthropic had offered contractual guarantees that it would not seek to control or veto lawful Pentagon operational decisions. However, negotiations broke down, leaving the future of the military's use of Claude in limbo. A hearing in one case is scheduled for March 24, at which a judge could decide whether to grant a temporary injunction lifting the ban.
- Anthropic's court filing explicitly states it has no "back door" or remote "kill switch" to disable Claude AI during military operations.
- The Pentagon has designated Anthropic a supply-chain risk, blocking DoD use over fears the company could sabotage critical systems.
- Anthropic is suing to reverse the ban, with a key hearing on March 24 that could result in a temporary injunction.
Why It Matters
This legal battle sets a precedent for how much control AI companies can retain over their models in sensitive government and military applications.