Media & Culture

Does big tech still believe LLMs will lead to AGI?

A bombshell paper suggests scaling AI might make its failures more chaotic.

Deep Dive

A new research paper analyzing frontier AI models reveals a critical flaw: as models grow larger and more capable, their failures become more incoherent and unpredictable, not less. The study measures this 'incoherence' across tasks and finds that it often increases with model scale. The finding suggests that simply building bigger models won't eliminate erratic behavior, and it points to a future in which AI failures look more like random accidents than consistent, goal-directed misalignment.

Why It Matters

This challenges a core assumption in AI development: that scaling solves alignment. Instead, scaling alone may increase unpredictable risks rather than reduce them.