AI Safety

Beware Even Small Amounts of Woo

A viral LessWrong post argues mystical thinking is an epistemic risk in the age of AI, especially for tech professionals.

Deep Dive

A viral essay by researcher J Bostock, posted on the rationality forum LessWrong, issues a stark warning to the tech and AI community: the embrace of 'woo', a cluster of neo-pagan, Buddhist-adjacent, and tarot-ish beliefs popular in liberal-left tech circles, poses a critical threat to clear thinking about artificial intelligence. Bostock analogizes woo to alcohol or a well-adapted religion: for most people in everyday life, small amounts are socially lubricating and harmless. He argues, however, that reasoning about AI and the technological singularity requires 'world-class epistemics', a standard under which any reliance on intuitive, feeling-based 'just vibe with it' reasoning becomes a dangerous liability.

The core of the argument is that woo trains a 'mental motion with poor form': following unexamined intuitions toward strong feelings. While smart individuals can usually subordinate intuition to logic on well-trodden topics, AI presents a novel, immensely complex environment with no clear logical paths. Bostock observes that several otherwise sharp thinkers exhibit 'uncharacteristically poor thinking' specifically when discussing AI, a failure he attributes to woo-adjacent mental habits. The post concludes that in the pre-singularity era, where missteps could be catastrophic, the tech community must be vigilant in excising even small amounts of epistemically corrosive belief from its reasoning processes.

Key Points
  • The essay defines 'woo' as neo-pagan, tarot, and meditation-adjacent practices popular in tech and alternative circles, which act as a social and epistemic lubricant.
  • It argues that while harmless for everyday life, woo becomes dangerous when reasoning about AI, which demands flawless 'world-class' logical discipline.
  • The author warns that intuitive, feeling-based reasoning fails against AI's novel complexity, leading smart people to critical failures in judgment.

Why It Matters

For AI safety researchers and builders, maintaining rigorous epistemic hygiene is now framed as a prerequisite for navigating existential risks.