Diary of a "Doomer": 12+ years arguing about AI risk (part 2)
Stuart Russell, Stephen Hawking, and Elon Musk warned about AI risk 13 years ago.
David Scott Krueger's 'Diary of a Doomer' part 2 traces the evolution of AI extinction risk (x-risk) advocacy from 2013 to 2016. He recounts how Nick Bostrom's 'Superintelligence' (2014) sparked debate, even as many AI researchers dismissed it as 'philosophy.' Stuart Russell, co-author of the leading AI textbook, began speaking out in 2013, joined by Stephen Hawking and Max Tegmark in a 2014 article. Elon Musk's 'summoning the demon' comment and Bill Gates' endorsement of Bostrom's book added weight. Yet the Deep Learning trio of Hinton, Bengio, and LeCun pooh-poohed the concerns, seeing them as a threat to their field's momentum. By 2016, 'Concrete Problems in AI Safety' and the first AI safety workshop at a top ML conference brought legitimacy, and Elon Musk co-founded a nonprofit for AI safety. Krueger notes that while awareness has grown, the effort has felt like a slow, uphill battle that may finally be going mainstream.
- Stuart Russell started AI x-risk advocacy in 2013, before Bostrom's 'Superintelligence'.
- Stephen Hawking, Elon Musk, and Bill Gates publicly warned about AI risks by 2014.
- By 2016, the first AI safety workshop at a top ML conference and 'Concrete Problems in AI Safety' brought legitimacy.
Why It Matters
Shows how early warnings from leading scientists are finally shifting AI safety from the fringe to the mainstream.