AI Safety

LLMorphism: When humans come to see themselves as language models

A spreading cognitive bias is leading us to think our minds work like GPT-4o.

Deep Dive

Valerio Capraro’s new paper on arXiv introduces LLMorphism, a cognitive bias where people come to believe their own minds operate like large language models. As conversational AI systems generate increasingly fluent text, Capraro argues this belief becomes psychologically available through two mechanisms: analogical transfer (projecting LLM features onto humans) and metaphorical availability (LLM terminology becoming culturally dominant for describing thought). He carefully distinguishes LLMorphism from related concepts like anthropomorphism, mechanomorphism, and computationalism, noting that the bias is not simply attributing too much mind to machines but also potentially attributing too little mind to humans.

Capraro outlines far-reaching implications for work, education, responsibility, healthcare, communication, creativity, and human dignity. For instance, if managers believe employees think like chatbots, they may undervalue human intuition and creativity. In education, students might model their learning after token prediction rather than deep reasoning. The paper also discusses boundary conditions and forms of resistance. Capraro concludes that public debate has focused on whether we attribute too much mind to AI, but we may be missing half the problem: we are beginning to attribute too little mind to humans.

Key Points
  • LLMorphism is the biased reverse inference from LLMs' human-like language output to the belief that human minds share an LLM-like cognitive architecture.
  • The bias spreads via two mechanisms: analogical transfer (projecting LLM features onto humans) and metaphorical availability (LLM terminology dominating how we describe thought).
  • Capraro warns LLMorphism could undermine human dignity across work, education, healthcare, and creativity by attributing too little mind to humans.

Why It Matters

As conversational AI grows more fluent, this cognitive bias threatens to devalue uniquely human cognition and dignity.