Stop falling for the AGI "Next Tuesday" hype. The people actually writing the papers don’t believe it
Top researchers warn AGI is decades away, calling current 'conscious AI' claims a marketing ploy for funding.
A stark divide is emerging in the AI community between the 'Vultures'—CEOs and investors like Sam Altman (OpenAI) and Dario Amodei (Anthropic) pushing near-term AGI narratives to secure billions in funding—and the 'Trenchers,' the lead researchers actually writing the papers. Figures like Yann LeCun, Andrew Ng, and Demis Hassabis are publicly stating that human-level artificial general intelligence (AGI) remains decades away. They criticize the idea that simply scaling up compute with more H100/H200 GPUs and data will magically produce consciousness, labeling LLMs as 'passive observers' without a true understanding of physical reality.
These researchers point to fundamental flaws in the current approach. LLMs are trained on a 'next token prediction' objective but lack the evolutionary 'survival loss function' that gives humans 500 million years of priors about the physical world. Hassabis highlights 'jagged intelligence': models can solve a Math Olympiad problem yet cannot navigate a room, because they have never 'ridden a bike' or developed an intuitive sense of balance. The real scientific frontier, they argue, is shifting from pure scale to 'in silico evolution': creating simulations, such as digital fruit flies, in which AI agents evolve and learn world models through experience in simulated environments before ever processing text.
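For context, the 'next token prediction' objective the researchers refer to is, in its standard form, just a maximum-likelihood loss over text: the model is rewarded for guessing the next token, never for acting in or perceiving a world. A schematic version of that objective (notation ours, not the researchers'):

$$
\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta(x_t \mid x_{<t})
$$

where $x_1, \dots, x_T$ is a text sequence and $p_\theta$ is the model's predicted distribution over the next token. Nothing in this loss references physical state, action, or consequence, which is the substance of the 'passive observer' critique.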
The core message is a sobering counter-narrative to viral 'AGI by 2027' claims: the path to genuine machine intelligence, these researchers argue, runs through models that have a 'stake in reality,' not through consuming ever more internet text. Current LLMs are powerful tools, but in their view true understanding will demand a fundamental architectural shift grounded in embodied, evolutionary learning.
- Researchers LeCun, Ng, and Hassabis state AGI is 'decades away,' directly contradicting CEO hype about imminent breakthroughs.
- They argue LLMs lack a 'world model' and intuitive physical understanding, calling their intelligence 'jagged' and descriptive, not experiential.
- The new research frontier is 'in silico evolution': simulating millions of digital organisms to learn physical truths rather than just scaling data and compute (a toy sketch of the idea follows this list).
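To make the 'in silico evolution' idea concrete, below is a deliberately toy sketch, not anything the researchers have published; the environment, fitness function, and every parameter are invented for illustration. A population of two-gene 'organisms' evolves, through selection and mutation alone, a policy for keeping a pole-like value balanced, the kind of sensorimotor skill the piece argues text-trained models never acquire.

```python
# Toy sketch of "in silico evolution": a population of tiny agents evolves,
# by selection and mutation alone, a linear policy that keeps a noisy 1D
# "pole" from falling over. Everything here is illustrative, not any lab's
# actual method.
import random

POP_SIZE = 50        # number of "digital organisms" per generation
GENERATIONS = 30
MUTATION_STD = 0.1
EPISODE_STEPS = 200


def survival_time(genome: list[float]) -> int:
    """Fitness = how many steps the agent keeps the pole within bounds."""
    angle, velocity = 0.0, 0.0
    for step in range(EPISODE_STEPS):
        # Linear policy: push based on the agent's two genes.
        force = genome[0] * angle + genome[1] * velocity
        velocity += 0.05 * angle - 0.1 * force + random.gauss(0, 0.01)
        angle += velocity
        if abs(angle) > 1.0:      # "fell over": the organism dies here
            return step
    return EPISODE_STEPS


def mutate(genome: list[float]) -> list[float]:
    """Offspring are noisy copies of a surviving parent."""
    return [g + random.gauss(0, MUTATION_STD) for g in genome]


def evolve() -> list[float]:
    population = [[random.uniform(-1, 1), random.uniform(-1, 1)]
                  for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        scores = [(survival_time(g), g) for g in population]
        scores.sort(key=lambda pair: pair[0], reverse=True)
        survivors = [g for _, g in scores[: POP_SIZE // 5]]  # top 20% reproduce
        print(f"gen {gen:2d} best survival: {scores[0][0]} steps")
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]
    return population[0]


if __name__ == "__main__":
    best = evolve()
    print("best genome:", best)
```

Fitness here is literally survival time, which is the 'survival loss function' intuition in miniature: the only signal the organism ever receives is whether it stayed upright, not whether it predicted the right word.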
Why It Matters
This reality check tempers investment hype and redirects focus to foundational AI research, impacting product roadmaps and long-term R&D priorities.