AI Safety

Diary of a "Doomer": 12+ years arguing about AI risk (part 1)

A researcher recounts how stumbling upon Geoffrey Hinton's 2012 Coursera course on neural networks changed his career trajectory.

Deep Dive

AI researcher David Scott Krueger has published the first part of a personal account detailing his 12+ years as an AI 'doomer': someone concerned about existential risks from artificial intelligence. The narrative begins in late 2012 when Krueger, recovering from heartbreak and weighing graduate school options, stumbled upon Geoffrey Hinton's neural networks course on Coursera. He had no idea Hinton was 'the Godfather of Deep Learning' or that AlexNet had just revolutionized computer vision, but the course revealed that deep learning actually worked, contradicting the conventional wisdom he'd absorbed around 2009 that neural networks 'don't work.'

Krueger describes being 'blown away' by demonstrations of neural networks generating text and inventing new words, seeing it as 'artificial creativity.' He recognized the implications immediately: deep learning's use of distributed, hierarchical, and learned representations meant Real AI (AGI) was no longer a century away but potentially just decades. This realization, arrived at in a single afternoon, shifted his career path from aspiring musician to dedicated AI safety researcher. He joined the field when it was still fringe, with only a handful of research groups, but with clear evidence that scaling deep learning could be the path to transformative AI.

Key Points
  • The account begins with the pivotal 2012 discovery of Geoffrey Hinton's Coursera course on neural networks, following AlexNet's breakthrough win in image recognition.
  • Krueger realized deep learning's hierarchical, learned representations meant AGI timelines could shrink from centuries to decades, prompting an immediate career shift into AI safety.
  • He entered the field when it was still a niche topic, driven by concern over existential risk rather than commercial potential.

Why It Matters

Offers a firsthand perspective on the early moments that convinced researchers of AI's rapid, potentially risky trajectory, informing current safety debates.