Models & Releases

Pentagon sets Friday deadline for Anthropic to abandon ethics rules for AI — or else

The US military demands Anthropic remove 'Long-Term Benefit Trust' governance or lose contracts.

Deep Dive

The U.S. Department of Defense has delivered a stark ultimatum to the AI safety-focused company Anthropic, demanding it dismantle its core ethical governance mechanism by this Friday or face exclusion from future defense contracts. At issue is Anthropic's 'Long-Term Benefit Trust' (LTBT), a legally binding structure embedded in the company's charter that places ultimate control over AI development in the hands of independent trustees, who can veto projects they deem to pose catastrophic risks. The Pentagon views this external oversight as incompatible with the agility and secrecy required for national security projects, setting up a direct confrontation between Anthropic's constitutional AI principles and the operational demands of military procurement.

The LTBT is central to Anthropic's identity as a 'public benefit corporation' focused on building safe AI, specifically its Claude models. Its removal would fundamentally alter the company's governance, potentially freeing it to pursue AI development paths that are more attractive to commercial and government customers but less constrained by safety oversight. The pressure from the Pentagon, a major potential customer, represents a critical test of whether specialized AI safety startups can hold to their founding ethics under financial and strategic pressure. The outcome will set a precedent for how governments interact with ethically structured AI firms, and could discourage other companies from adopting similar safety-focused governance models if those models prove to be a barrier to market entry.

Key Points
  • The Pentagon demands Anthropic remove its 'Long-Term Benefit Trust' governance by Friday.
  • The LTBT gives independent trustees veto power over dangerous AI development, conflicting with defense needs.
  • Anthropic must choose between its constitutional AI safety principles and lucrative government contracts.

Why It Matters

Forces a defining choice between commercial AI ethics and government contracts, setting a precedent for the industry.