AI risk was not invented by AI CEOs to hype their companies
One researcher's journey, begun in 2008, shows AI risk wasn't invented by tech CEOs
A common narrative claims that warnings about advanced AI threatening human existence were invented by AI CEOs to hype their products. The author directly refutes this by tracing their own entry into AI safety starting in 2008, years before today's prominent AI companies existed. They first contacted Eliezer Yudkowsky in 2008 to understand his reasoning for prioritizing AI risk over causes like climate change. By 2009, they were living with a small community of about twenty AI safety researchers in the Bay Area and attended the fourth Singularity Summit.
Over the following years, the author engaged deeply with the growing AI risk community while major AI companies were only just forming: DeepMind was founded in 2010, by which point MIRI (then called the Singularity Institute) was already active. In 2011, the author began a philosophy PhD at CMU, hoping eventually to work at the Future of Humanity Institute. By 2013, they were working at MIRI, measuring algorithmic progress across computer science domains to inform expectations about future AI. This timeline shows that serious AI safety concerns existed long before profit motives could have produced them.
- Author contacted Eliezer Yudkowsky in 2008 about AI risk, before any major AI companies existed
- DeepMind was founded in 2010, while the AI safety community had already been active for years
- Author worked at MIRI in 2013 measuring algorithmic progress to inform AI risk expectations
Why It Matters
Shows that AI existential risk concerns constitute a genuine research field predating today's AI companies, not corporate hype.