AI Safety

Anthropic Officially, Arbitrarily and Capriciously Designated a Supply Chain Risk

The Department of War issues a narrow but punitive SCR designation against Anthropic, citing the company's refusal to grant 'unfettered access' to Claude.

Deep Dive

The U.S. Department of War (DoW) has officially designated AI lab Anthropic a supply chain risk (SCR), a major escalation in a dispute over military access to the company's Claude AI models. According to a detailed account on LessWrong, the DoW demanded 'unfettered access' to Claude and, when Anthropic refused, threatened to invoke the Defense Production Act (DPA) and issue a broad SCR designation. That initial threat, which the account describes as 'corporate murder,' has since been scaled back to a narrower, official notification under 10 USC 3252 that primarily bars Anthropic from direct government contracts. The author, Zvi, contends the action is 'arbitrary and capricious,' serving as pure punishment for a company asserting control over how its private property is used, especially given that the DoW signed a deal with OpenAI that reportedly violates the same principles Anthropic was defending.

The stated basis for the SCR is the claim that the U.S. cannot rely on a product from a company that might object to certain uses, a rationale the author calls a pretext. The immediate impact is limited but strategically punitive: vendors maintaining codebases under DoW contracts may be barred from using Claude, and startups eyeing the DoW as a customer may preemptively avoid Anthropic's technology. While the narrow scope of the order avoids catastrophic business damage, the author argues it represents significant government overreach, setting a dangerous precedent in which any enterprise software vendor could face similar retaliation for refusing military demands. The situation remains fluid, with potential for further 'jawboning' or legal challenges, but the core conflict highlights the growing tension between AI developers' operational principles and national security imperatives.

Key Points
  • The U.S. Department of War designated Anthropic a supply chain risk (SCR) after the company refused to grant 'unfettered access' to its Claude AI models.
  • The official, narrower SCR under 10 USC 3252 bars Anthropic from direct government contracts, avoiding the initially threatened 'corporate murder' but still acting as punishment.
  • The author argues this sets a dangerous precedent, allowing the government to retaliate against companies that assert control over how their private property is used.

Why It Matters

Sets a precedent for government retaliation against AI firms that resist military demands, with implications for enterprise adoption and startup strategy.