
Anthropic Drops Flagship Safety Pledge

The AI safety startup has quietly removed its unique governance pledge, a structure designed to guard against catastrophic AI risk, from its website.

Deep Dive

Anthropic, the AI safety startup founded by former OpenAI researchers, has quietly removed its flagship 'Long-Term Benefit Trust' (LTBT) governance pledge from its corporate website. The LTBT was a unique and much-publicized structure established in 2023, designed to give a board of independent, security-vetted trustees the power to override company leadership if they believed AI development was veering toward catastrophic risk. Its removal, first spotted by users on social media, represents a significant shift in Anthropic's public-facing commitment to 'safety over profit' as it aggressively scales its Claude models and competes for enterprise customers against OpenAI and Google.

While Anthropic has not issued a formal statement, industry observers see the move as a pragmatic step toward streamlining governance as the company grows; the LTBT structure was complex and untested, and may have created uncertainty for investors. The change follows a period of rapid commercial expansion for Anthropic, including a $4 billion investment from Amazon and the launch of its Claude 3.5 Sonnet model. It suggests a maturation from a pure research lab into a competitive commercial entity, one in which operational agility may be prioritized over theoretical safety mechanisms, and it raises questions about how the company will institutionalize its founding safety principles as external pressure to monetize its $7.3 billion in funding intensifies.

Key Points
  • Anthropic removed its 'Long-Term Benefit Trust' pledge, which empowered an independent board of trustees to override company leadership on safety grounds.
  • The trust was a core part of its 2023 founding charter to prevent profit motives from overriding catastrophic risk mitigation.
  • The change coincides with Anthropic's commercial scaling, including a $4B Amazon deal and Claude 3.5 model launches.

Why It Matters

Signals a pivot from theoretical safety research to commercial competition, testing the balance between profit and AI risk mitigation.