AI Safety

Contra Alexander's Half-Defence of Bio Anchors

A foundational AI prediction method is under fire as key forecasts shift.

Deep Dive

A retrospective analysis confirms Eliezer Yudkowsky's critique of Ajeya Cotra's influential 2020 'Bio Anchors' report, which originally estimated AGI by ~2045. The debate centers on whether Fermi estimation, chaining rough order-of-magnitude guesses into a single forecast, can produce meaningful AI timelines under extreme uncertainty. Yudkowsky argued that the methodology was fundamentally disconnected from reality, a position now bolstered by recent timeline revisions and by discussion within the AI forecasting community.
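To make the methodological dispute concrete, here is a minimal Fermi-style sketch of the bio-anchors reasoning pattern. All numbers are hypothetical placeholders chosen for illustration, not the report's actual parameters: a biological compute anchor sets a target training budget, and an assumed compute growth rate converts that into an arrival year.

```python
import math

# Hypothetical "lifetime anchor": brain compute times seconds lived.
BRAIN_FLOP_PER_S = 1e15      # assumed brain compute (illustrative)
LIFETIME_S = 1e9             # roughly 30 years in seconds
anchor_flop = BRAIN_FLOP_PER_S * LIFETIME_S   # target training compute

LARGEST_RUN_2020 = 1e23      # assumed largest 2020 training run (illustrative)
DOUBLING_TIME_YEARS = 2.0    # assumed growth rate of affordable compute

# Number of doublings needed to reach the anchor, converted to years.
doublings_needed = math.log2(anchor_flop / LARGEST_RUN_2020)
arrival_year = 2020 + doublings_needed * DOUBLING_TIME_YEARS
print(round(arrival_year))
```

Note how sensitive the output is to the inputs: halving the assumed doubling time, or shifting the anchor by one order of magnitude, moves the forecast by years. This fragility is precisely what Yudkowsky's critique targets.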

Why It Matters

This challenges core assumptions in mainstream AI forecasting and may shorten expected timelines for transformative AI.