AI Safety

No, You Don't Need Self-Locating Evidence.

Ape in the Coat's viral post dismantles a core concept in anthropic probability theory using nothing more than introductory probability.

Deep Dive

In a viral post on the rationality forum LessWrong, author Ape in the Coat launched a direct critique of a foundational concept in philosophical probability theory. The article, titled 'No, You Don't Need Self-Locating Evidence,' specifically rebuts an earlier post by user Bentham's Bulldog, who argued that self-locating probabilities—probabilities concerning one's own place in the world—are necessary. Ape in the Coat contends that the entire field of 'anthropic reasoning' has been led astray by adopting confused frameworks, producing unnecessary paradoxes and forced choices between flawed options.

The core of the argument rests on a return to basic probability theory, framed as 'Probability theory 101.' The author defines probability in classic terms of sample spaces and events from repeated trials, using the simple example of a fair die roll. They demonstrate how updating beliefs with new evidence (like learning the roll was even) is handled perfectly by standard conditional probability, without any need for extra philosophical machinery. The post asserts that philosophers created unnecessary complexity by shifting discussion from outcomes of probability experiments to 'possible worlds' and then 'centered possible worlds,' which the author sees as the root of the confusion.
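The die-roll update described above can be sketched in a few lines of standard probability arithmetic. This is a minimal illustration of conditional probability on a uniform sample space, not code from the post itself; the helper names (`prob`, `cond_prob`) are ours.

```python
from fractions import Fraction

# Sample space for a fair six-sided die: each outcome equally likely.
sample_space = {1, 2, 3, 4, 5, 6}

def prob(event, space):
    """P(event) under a uniform distribution on the sample space."""
    return Fraction(len(event & space), len(space))

def cond_prob(event, evidence, space):
    """P(event | evidence) = P(event and evidence) / P(evidence)."""
    return prob(event & evidence, space) / prob(evidence, space)

even = {2, 4, 6}
roll_is_4 = {4}

print(prob(roll_is_4, sample_space))             # 1/6 before any evidence
print(cond_prob(roll_is_4, even, sample_space))  # 1/3 after learning the roll was even
```

Learning 'the roll was even' shrinks the sample space from six outcomes to three, raising P(roll = 4) from 1/6 to 1/3 by ordinary conditioning, with no extra philosophical machinery.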

Ultimately, the piece is presented partly as a venting of frustration and partly as a corrective to what the author sees as persistent errors in a niche but influential domain. While promising a more comprehensive historical analysis in the future, this post aims to resolve specific confusions by reaffirming that traditional probability theory is a sufficient map for the territory, making specialized concepts like 'self-locating evidence' superfluous and misleading.

Key Points
  • Targets Bentham's Bulldog's argument that 'self-locating probabilities' are necessary for reasoning about one's place in the world.
  • Asserts that standard probability theory (sample spaces, events, conditional probability) is sufficient, making specialized philosophical constructs unnecessary.
  • Claims the field of anthropic reasoning is built on confused frameworks, leading to persistent paradoxes like the Sleeping Beauty problem.

Why It Matters

Challenges foundational assumptions in AI safety and decision theory, where anthropic reasoning informs predictions about existential risk and AI behavior.