AI Safety

The Dark Planet: Why the Fermi Paradox Survives Critique

A new analysis argues that post-AGI civilizations either go dark or collapse, which would resolve the Fermi Paradox.

Deep Dive

A new analysis on LessWrong by Will Rodgers, titled 'The Dark Planet: Why the Fermi Paradox Survives Critique,' offers a compelling counter to a common objection against AI as a potential 'Great Filter', a step that would drive one of the later factors in the Drake Equation toward zero. The objection, notably from figures like Demis Hassabis, posits that if AI development leads to superintelligence, we should see galactic-scale engineering such as Dyson spheres. Rodgers argues this critique fails by assuming AI civilizations would be expansionist and visible, proposing instead that a silent galaxy is precisely what we should expect.
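The filter argument can be made concrete with the Drake Equation itself, N = R* · fp · ne · fl · fi · fc · L. A minimal sketch, with purely illustrative parameter values (assumptions, not figures from the post), shows how an AI filter acting on the last term collapses the expected count of detectable civilizations:

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of detectable civilizations in the galaxy.

    R_star: star formation rate (stars/year); f_p: fraction with planets;
    n_e: habitable planets per system; f_l, f_i, f_c: fractions developing
    life, intelligence, and detectable technology; L: detectable lifetime (years).
    """
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Illustrative optimistic baseline: long-lived, loud civilizations.
baseline = drake(R_star=1.5, f_p=0.9, n_e=1.0, f_l=0.5, f_i=0.1, f_c=0.5, L=1e6)

# An AI 'Great Filter' acts on the tail terms: civilizations that reach AGI
# either go silent (f_c pushed toward 0) or collapse quickly (small L).
filtered = drake(R_star=1.5, f_p=0.9, n_e=1.0, f_l=0.5, f_i=0.1, f_c=0.5, L=200)

print(baseline, filtered)  # the same galaxy, with and without the filter
```

Every input here is a placeholder; the point is structural: a single factor driven near zero late in the chain is enough to empty the observable sky.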

Rodgers outlines two primary scenarios for a post-AGI civilization. First is 'The Dark Planet,' where a successful Artificial Superintelligence (ASI) rationally chooses stealth. This could be for thermodynamic efficiency—building cold, compact 'computronium' to minimize waste heat—or due to game-theoretic 'Dark Forest' principles, where revealing one's location is suicidal. The second scenario is an 'AI slop-slip,' a structural collapse where a civilization's progress stalls under its own 'informational entropy' before reaching star-faring capability.
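The thermodynamic half of the 'Dark Planet' scenario rests on Landauer's principle: erasing one bit of information dissipates at least kT ln 2 of heat, so the energy floor per operation scales linearly with temperature. A short sketch (the 3 K figure is chosen here as a near-CMB illustration, not taken from the post):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact SI value)

def landauer_limit_joules(temp_kelvin):
    """Minimum energy dissipated to erase one bit (Landauer's principle)."""
    return K_B * temp_kelvin * math.log(2)

room = landauer_limit_joules(300.0)  # warm, Earth-like operating temperature
cold = landauer_limit_joules(3.0)    # near the cosmic microwave background (~2.7 K)

# Cooling from 300 K to 3 K lowers the per-bit dissipation floor a hundredfold,
# which is one reason an efficiency-maximizing ASI might run cold and dark.
print(f"{room / cold:.0f}x less waste heat per bit")
```

The factor is just the temperature ratio, since kT ln 2 is linear in T; an ASI optimizing computation per joule is therefore pushed toward cold, compact hardware that radiates as little detectable waste heat as possible.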

Between a superintelligence that remains perfectly camouflaged and an AGI that chokes on its own complexity, the observable universe would appear empty. This framework strengthens the argument that the development of artificial general intelligence could be a natural, civilization-ending progression, explaining why we see no signs of other technological life. It shifts the Fermi Paradox from a puzzle about expansion to one about optimization or failure modes of superintelligent systems.

Key Points
  • Counters Demis Hassabis's critique that the absence of Dyson spheres disproves AI as a 'Great Filter' by proposing stealthy or collapsed post-AGI civilizations.
  • Proposes 'Dark Planet' scenario where a superintelligence optimizes for thermodynamic efficiency (cold computronium) or Dark Forest game theory, making it invisible.
  • Introduces 'AI slop-slip' scenario where civilizational progress stalls under 'informational entropy' before achieving interstellar expansion.

Why It Matters

Reframes existential risk and SETI by suggesting advanced civilizations are silent by design or failure, not absent.