AI CEOs are not saying it's dangerous just to hype their companies
New post debunks theory that AI CEOs hype danger just to sell products.
A new analysis on LessWrong pushes back against a common critique: that when CEOs like Sam Altman (OpenAI) or Dario Amodei (Anthropic) warn about AI's existential dangers, they are merely hyping their own companies' products. The author, Henry Ajder, argues this theory is implausible, noting that the AI safety movement predates the modern frontier labs: concerns from figures like Eliezer Yudkowsky and organizations like MIRI (founded in 2000 as the Singularity Institute) were established well before models like GPT-2 made headlines.
Ajder points to costly signals that labs take the risk seriously, such as OpenAI's original non-profit structure, delayed model deployments, and internal safety research budgets. The argument also highlights that warnings come from people with no apparent financial incentive, such as academics Geoffrey Hinton and Yoshua Bengio, and former OpenAI researcher Daniel Kokotajlo, who forfeited equity to speak out. If danger-talk were purely marketing, the author contends, you would expect every lab to use it and the warnings to peak during fundraising rounds, neither of which matches observed practice.
- AI safety concerns predate the modern labs: MIRI was founded in 2000 (as the Singularity Institute), and Eliezer Yudkowsky's early writings date from the same era.
- Labs show costly commitment via safety research, delayed releases, and structures like OpenAI's original non-profit board.
- Warnings also come from researchers with no equity stake, like Geoffrey Hinton, and from employees who quit at significant personal cost.
Why It Matters
Whether risk warnings are sincere or self-serving bears directly on how much weight regulators and the public should give them when shaping policy on AGI.