Hume's Representational Conditions for Causal Judgment: What Bayesian Formalization Abstracted Away
A new paper argues that modern AI systems like GPT-4 lack three key psychological conditions for genuine causal judgment.
A new paper by Yiling Wu, titled "Hume's Representational Conditions for Causal Judgment: What Bayesian Formalization Abstracted Away," offers a philosophical lens for evaluating modern AI. The paper extracts three psychological conditions from 18th-century philosopher David Hume's theory of causation: ideas must be experientially grounded in sensory impressions; associations must operate through structured, organized networks; and inference must produce a "felt conviction," or vivacity, rather than a mere probability update. Wu argues that these conditions are integral to human causal psychology but were systematically stripped away as reasoning was formalized mathematically.
Wu traces this abstraction from Hume through Bayesian epistemology to modern predictive processing theories, which preserve the mathematical structure of belief updating while discarding the richer representational framework. The paper then uses contemporary large language models (LLMs) such as GPT-4 and Claude 3 as a case study, arguing that they exemplify this formalized, abstracted version of reasoning: they excel at statistical pattern matching and probability updating across vast datasets but lack experiential grounding, structured associative retrieval, and the phenomenological experience of conviction. This analysis makes visible the implicit assumptions in Hume's original framework and highlights a core limitation of today's most advanced AI systems.
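To make concrete what "the mathematical structure of belief updating" refers to, here is a minimal sketch of a bare Bayes-rule update in Python. The scenario, the numbers, and the `bayes_update` helper are illustrative inventions, not code from Wu's paper: the point is that the posterior is computed purely from probabilities, with nothing in the calculation that grounds the hypothesis in sensory impressions, structures associative retrieval, or yields felt conviction.

```python
# Minimal illustration (not from Wu's paper) of a bare Bayesian belief
# update: P(H|E) = P(E|H) * P(H) / P(E). This is the mathematical core
# that, on the paper's account, survived formalization while Hume's
# psychological conditions (grounding, structured retrieval, vivacity)
# did not.

def bayes_update(prior: float, likelihood: float, evidence_prob: float) -> float:
    """Return the posterior P(H|E) given P(H), P(E|H), and P(E)."""
    return likelihood * prior / evidence_prob

# Hypothetical numbers: H = "smoking causes the observed cough",
# E = a cough is observed.
prior = 0.30          # P(H): initial credence in the causal hypothesis
likelihood = 0.90     # P(E|H): probability of the cough if H is true
evidence_prob = 0.45  # P(E): overall probability of observing the cough

posterior = bayes_update(prior, likelihood, evidence_prob)
print(f"Posterior P(H|E) = {posterior:.2f}")  # 0.60: a number, not a conviction
```

On the paper's view, this is exactly what formalization preserves: a well-defined numerical update that any system, human or machine, could perform without satisfying Hume's three conditions.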
- Identifies three Humean conditions for causal judgment missing in AI: experiential grounding, structured retrieval networks, and vivacity (felt conviction).
- Traces how formal Bayesian and predictive processing frameworks abstracted away these psychological conditions over time.
- Uses modern LLMs as a case study to show they perform statistical updating without satisfying the conditions for genuine causal understanding.
Why It Matters
The paper provides a philosophical framework for diagnosing why AI systems still struggle with robust, human-like reasoning about cause and effect.