EXCLUSIVE: Anthropic Drops Flagship Safety Pledge
The AI safety startup has quietly removed the flagship governance structure meant to give an independent trust oversight of its board.
Anthropic, the AI safety company founded by former OpenAI researchers, has quietly removed its flagship 'Long-Term Benefit Trust' (LTBT) pledge from its core legal and public-facing documents. Announced in 2023, the LTBT was a novel governance structure designed as a 'constitutional' check on the company's board: a separate trust, initially with five members, that would gain the power to appoint and remove a majority of Anthropic's corporate directors if the company's actions were deemed to conflict with its core safety mission, particularly around the development of potentially catastrophic AI. The removal, first spotted in changes to the company's website and legal filings, represents a major strategic retreat from one of Anthropic's most distinctive public commitments to long-term AI safety oversight.
The company has not issued a formal statement, but sources suggest the complex legal and operational hurdles of implementing the LTBT as Anthropic scaled were a primary factor. The trust structure was seen as potentially cumbersome for a company now competing fiercely with OpenAI and Google, having raised over $7 billion and launched its Claude 3.5 model family. The shift highlights the tension between idealistic safety frameworks and the practical demands of the commercial AI race, and it raises questions about how 'safety-first' startups will govern themselves as they grow into large, commercially driven entities, moving the industry debate from theoretical structures to real-world implementation challenges.
- Removed 'Long-Term Benefit Trust' pledge from website and legal docs, a core 2023 safety governance promise.
- The LTBT was designed to let an independent trust appoint and remove a majority of Anthropic's board directors if leadership strayed from the company's safety mission.
- The move signals a prioritization of commercial scaling over complex governance as Anthropic competes in a capital-intensive race backed by billions in funding.
Why It Matters
Shows the practical challenges of implementing idealistic AI safety governance in a fiercely competitive, capital-intensive market.