François Chollet favors a slow takeoff scenario (no "foom" exponentials)
A major AI expert just challenged the biggest fear about superintelligence...
Deep Dive
Google AI researcher François Chollet argues against a rapid intelligence explosion ("foom"), favoring a slow AGI takeoff. This directly counters thinkers like Ben Goertzel, who predict a short, dangerous leap from AGI to superintelligence (ASI). Chollet draws on the history of technology, citing the 186-year gap between the first hot-air balloon flight (1783) and the moon landing (1969) to argue that progress unfolds gradually and that AI won't suddenly become uncontrollably superhuman overnight.
Why It Matters
This debate shapes global AI safety priorities: whether takeoff is fast or slow determines how urgently we need to prepare for a superintelligent future.