We've been watching for a god-like AI super-brain. Research suggests that was never how intelligence scaled ...
The Singularity is dead. AI scales through social networks, not oracles.
For decades, the dominant story in AI has been the Singularity: one god-like superintelligence bootstrapping itself to incomprehensible power, rendering humans irrelevant. A new paper from Google’s Paradigms of Intelligence team, published in *Science*, argues this frame is almost certainly incorrect. The authors point to evolutionary and historical evidence: every major explosion in intelligence—primate cognition, language, writing, institutions—has been social, not individual. Primate intelligence scaled with group size, not habitat difficulty. Language created a “cultural ratchet” that accumulates knowledge across generations. Writing and institutions externalize collective intelligence into systems that outlast any single participant. AI, they argue, is the next step in that sequence, not a break from it.
What makes the paper genuinely surprising is evidence from inside current models. Reasoning models like DeepSeek-R1 don't improve by “thinking longer.” Instead, they spontaneously generate internal multi-agent debates: distinct cognitive perspectives that argue, question, verify, and reconcile. No one trained them to do this—it emerged purely from optimization pressure rewarding accuracy. Intelligence, it turns out, defaults to social even inside a single mind. If the researchers are right, the path to more powerful AI does not run through building a bigger oracle. It runs through building richer social systems, and governing them the way we govern cities and institutions—not with a kill switch. This reframes alignment entirely.
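The emergent debate the paper describes can be pictured with a toy sketch: several "perspectives" each propose an answer, and a reconciliation step keeps the answer the perspectives converge on. Everything here (the perspective names, the majority-vote reconciliation, the summation task) is invented for illustration; it is not the paper's method or DeepSeek-R1's actual mechanism.

```python
# Toy illustration of multi-agent debate: independent "perspectives"
# propose answers and a reconciliation round keeps the modal one.
# All names and strategies are hypothetical, chosen for illustration only.
from collections import Counter
from typing import Callable, List

Numbers = List[int]

def stepwise(xs: Numbers) -> int:
    """Perspective A: accumulate term by term."""
    total = 0
    for x in xs:
        total += x
    return total

def pairwise(xs: Numbers) -> int:
    """Perspective B: recursive pairwise summation."""
    if len(xs) <= 1:
        return xs[0] if xs else 0
    mid = len(xs) // 2
    return pairwise(xs[:mid]) + pairwise(xs[mid:])

def hasty(xs: Numbers) -> int:
    """Perspective C: a sloppy voice that drops the last term."""
    return sum(xs[:-1])

def debate(xs: Numbers, perspectives: List[Callable[[Numbers], int]]) -> int:
    """Each perspective proposes; reconciliation keeps the modal answer,
    implicitly discarding proposals the others disagree with."""
    proposals = [p(xs) for p in perspectives]
    answer, _count = Counter(proposals).most_common(1)[0]
    return answer

print(debate([3, 1, 4, 1, 5], [stepwise, pairwise, hasty]))  # prints 14
```

The point of the sketch is only structural: accuracy pressure favors the configuration where independent proposals check one another, which is why a lone sloppy perspective gets outvoted.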
- Google’s Paradigms of Intelligence team published a *Science* paper arguing intelligence scales socially, not individually. The Singularity frame is wrong.
- Historical evidence: primate intelligence scaled with group size; language enabled the "cultural ratchet"; writing externalized collective intelligence.
- DeepSeek-R1 and similar models spontaneously generate internal multi-agent debates—a social structure emerged purely from accuracy optimization, no explicit training.
Why It Matters
Alignment shifts from controlling a single oracle to governing social systems of AIs—far harder but closer to how human intelligence actually works.