Cognitive Dark Matter: Measuring What AI Misses
A new paper argues that AI systems lack training signal for brain functions like metacognition and emotional intelligence, and that this gap produces their 'jagged' capabilities.
A team of researchers led by Patrick Mineault, Thomas Griffiths, and Sean Escola has published a provocative paper introducing the concept of 'Cognitive Dark Matter' (CDM). They argue that the 'jagged intelligence landscape' of modern AI systems such as GPT-4 and Claude 3 stems from a fundamental missing training signal: brain functions that shape behavior but are difficult to infer from behavior alone. The paper identifies seven key domains where this CDM resides, including metacognition, cognitive flexibility, episodic memory, lifelong learning, abductive reasoning, and social/emotional intelligence. It contends that current AI benchmarks and large-scale neuroscience datasets are heavily skewed toward already-mastered capabilities, leaving these critical functions largely unmeasured.
To address this gap, the authors outline a three-pronged research program designed to surface CDM for model training. The proposed data types are (i) latent variables from large-scale cognitive models, (ii) process-tracing data such as eye-tracking and think-aloud protocols, and (iii) paired neural-behavioral data. The core thesis is that training AI on cognitive processes, rather than only on behavioral outcomes, will produce models with more general, less 'jagged' intelligence. As a significant dual benefit, collecting and using this data would also advance our fundamental understanding of human intelligence itself, creating a virtuous cycle between neuroscience and AI development.
- Identifies 7 key 'Cognitive Dark Matter' domains current AI misses, including metacognition, cognitive flexibility, episodic memory, lifelong learning, abductive reasoning, and social/emotional intelligence.
- Proposes 3 new data types for training: latent cognitive variables, process-tracing data (e.g., eye-tracking), and paired neural-behavioral data.
- Aims to fix AI's 'jagged intelligence' by training on cognitive processes, not just outcomes, for more general capabilities.
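One way to read the "train on processes, not just outcomes" thesis is as an auxiliary-loss setup: supervise a model on a latent cognitive trace (e.g., a variable extracted by a cognitive model) in addition to the observed behavior. The sketch below is our own minimal illustration of that idea, not the paper's method; the latent trace `z`, the outcome `y`, and the loss weighting `lam` are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): inputs X, a latent "process" trace z that a
# cognitive model might expose, and a noisy behavioral outcome y.
X = rng.normal(size=(128, 3))
w_true = np.array([1.0, -2.0, 0.5])
z = X @ w_true                       # latent cognitive trace (process signal)
y = z + 0.1 * rng.normal(size=128)   # observed behavioral outcome

def combined_loss(w, lam=0.5):
    """Behavioral loss plus a process-supervision term weighted by lam."""
    pred = X @ w
    behavior = np.mean((pred - y) ** 2)   # outcome supervision
    process = np.mean((pred - z) ** 2)    # auxiliary process supervision
    return behavior + lam * process

def grad(w, lam=0.5):
    """Gradient of the combined objective for a linear model."""
    pred = X @ w
    g_behavior = 2 * X.T @ (pred - y) / len(y)
    g_process = 2 * X.T @ (pred - z) / len(z)
    return g_behavior + lam * g_process

# Plain gradient descent on the combined objective.
w = np.zeros(3)
for _ in range(200):
    w -= 0.1 * grad(w)
```

The process term here is redundant with the outcome by construction; the paper's claim is that in realistic settings the process signal carries information the outcome alone does not, which is exactly what benchmarks skewed toward outcomes fail to capture.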
Why It Matters
Provides a roadmap to move beyond narrow benchmarks and build AI with more human-like, general, and robust reasoning abilities.