Dario probably doesn't believe in superintelligence
Viral analysis of 2013–2017 statements challenges a core assumption about Anthropic's mission.
A viral analysis on LessWrong by user RobertM is challenging a foundational assumption about Anthropic, the AI safety company behind Claude. The post argues that Anthropic co-founder and CEO Dario Amodei may not actually believe in the concept of superintelligence—the idea that returns to intelligence past the human level are large and practically achievable. This contradicts the common perception that Anthropic's intense focus on AI safety is driven by a belief in an imminent, world-altering superintelligence.
The evidence presented spans 2013 to 2017. It includes a 2013 transcript in which Amodei, then a science advisor, suggested that a 'large fraction' of potentially world-ending AIs might make fatal mistakes first, a framing the author argues is inconsistent with a robust concept of superintelligence. The seminal 2016 paper 'Concrete Problems in AI Safety,' which Amodei co-authored, explicitly argued against focusing on 'extreme scenarios' such as superintelligent agents, advocating instead for practical, near-term safety research. Comments from a 2017 EA Global panel further suggested that Amodei was 'deeply concerned' about both developing and *not* developing advanced AI, citing risks such as geopolitical instability and biological threats, which implies a risk portfolio broader than a singular focus on superintelligence.
The analysis has sparked debate because it questions the narrative lens through which many interpret Anthropic's actions. If Amodei, and by extension Anthropic, are motivated less by a specific belief in transformative superintelligence and more by a generalist, pragmatic approach to AI risk mitigation, it could reshape public and investor understanding of the company's long-term strategy and priorities within the competitive AI landscape.
- Analysis cites a 2013 discussion where Amodei suggested potentially dangerous AIs might fail first, a stance the author argues is inconsistent with a strong belief in superintelligence.
- Points to Amodei's 2016 'Concrete Problems' paper advocating for practical, near-term safety research over 'extreme' superintelligence scenarios.
- References 2017 panel comments showing concern about civilizational risks beyond a singular AI focus, such as biotech and geopolitics, suggesting a broader risk model.
Why It Matters
Challenges core assumptions about a leading AI safety company's motivations, potentially affecting public trust and strategic perception.