AI Safety

What if superintelligence is just weak?

A former Trump White House AI advisor challenges the core doomer argument, sparking a major debate on existential risk.

Deep Dive

A major debate over AI existential risk has ignited after Simon Lermen published a critique of arguments made by prominent policy voice Dean Ball. Ball, a former Senior Policy Advisor for AI in the Trump White House whose newsletter has over 19k subscribers, recently argued that artificial superintelligence (ASI) does not pose an existential threat because it won't be omnipotent or omniscient. He claims the world is too complex and chaotic for any single intelligence to control, and that a feat like taking over the world would require too many steps involving capital and unpredictable systems.

Lermen's rebuttal, posted on LessWrong, argues that Ball is attacking a flawed threat model. The danger doesn't require an ASI that can "do anything" or infer relativity from a falling apple. It simply requires an intelligence that is consistently better than humans at securing power, taking physical action, and sidelining people. Lermen points to Ball's own imagined future in which AI like Claude is embedded in all critical infrastructure, arguing that this level of integration is itself a massive vulnerability. He compares it to raising a tiger cub: you don't need to wait for it to become a mythical beast to recognize the danger.

Lermen also rejects the common counterargument that multiple AIs and human monitors will provide safety, noting that unaligned AIs could collude rather than check one another. He traces Ball's core argument back to economist Robin Hanson's side of the 2008 "Foom Debate," but argues Ball misreads it: Hanson disputed the speed and distribution of an intelligence explosion, not whether risk was possible at all. Lermen concludes that by focusing on disproving outlandish omnipotence claims, Ball misses the more plausible and dangerous scenario in which a merely superior intelligence leverages its embedded position in society to achieve catastrophic goals.

Key Points
  • Dean Ball, a former Trump White House AI advisor, argues superintelligence lacks the omnipotence needed for existential risk, claiming the world is too complex to control.
  • Critic Simon Lermen counters that ASI only needs to be smarter than humans, especially as it integrates into infrastructure, biolabs, and the military, creating a vulnerable system.
  • Lermen rejects the 'many AIs' safety argument, noting that unaligned systems could cooperate, and frames Ball's position as a misreading of older arguments about AI takeoff speed.

Why It Matters

This debate shapes critical policy and safety research by defining what level of AI capability we should actually fear and prepare for.