Research & Papers

Beyond Anthropomorphism: a Spectrum of Interface Metaphors for LLMs

New CHI paper argues extreme anthropomorphism drives user delusion and harm, proposes alternative metaphors.

Deep Dive

A team of academic researchers has published a paper at the 2026 CHI Conference on Human Factors in Computing Systems proposing a fundamental shift in how we design interfaces for Large Language Models (LLMs). The paper, 'Beyond Anthropomorphism: a Spectrum of Interface Metaphors for LLMs' by Jianna So, Connie Cheng, and Sonia Krishna Murthy, argues that the default anthropomorphic metaphor, in which conversational interfaces make AI seem human-like, is problematic: it highlights the similarities between LLMs and humans while masking crucial differences, leading users to treat tools like GPT-4o or Claude 3.5 as if they were human agents. With few safeguards in place, this misreading can drive delusion and harm. The authors also identify a dissonance users experience between the ethics of using LLMs, the tools' growing ubiquity, and the lack of interface alternatives.

The researchers' core contribution is repositioning anthropomorphism as a design variable. They introduce a theoretical framework built on a spectrum of interface metaphors with two opposing extremes: transparency-driven 'anti-anthropomorphism' and uncanny 'hyper-anthropomorphism.' These metaphors are designed to introduce materiality, exposing LLMs as sociotechnical systems shaped by human labor, data, and infrastructure rather than as mystical or human-like entities. This shift moves design goals away from purely optimizing usability and engagement (the current industry standard) and toward encouraging users to engage critically with the technology. The framework gives designers and developers at companies like OpenAI, Anthropic, and Google concrete alternatives for creating interfaces that more accurately represent how LLMs work, potentially reducing misuse and setting more realistic user expectations.
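To make the idea of anthropomorphism as a design variable concrete, here is a minimal, hypothetical sketch. The class name, the numeric scale, and the framing strings are all illustrative assumptions, not taken from the paper; the sketch only shows how a single tunable parameter could move an interface between the two extremes the authors describe.

```python
from dataclasses import dataclass

@dataclass
class InterfaceMetaphor:
    # 0.0 = transparency-driven anti-anthropomorphism,
    # 1.0 = uncanny hyper-anthropomorphism (illustrative scale).
    anthropomorphism: float

    def frame_response(self, text: str) -> str:
        """Wrap model output in framing that reflects the chosen metaphor."""
        if self.anthropomorphism < 0.33:
            # Anti-anthropomorphic: foreground the system's material nature.
            return f"[output of a statistical model trained on web text] {text}"
        if self.anthropomorphism > 0.66:
            # Hyper-anthropomorphic: exaggerate human-likeness to the point
            # of uncanniness, making the metaphor itself visible.
            return f"*leans in, unblinking* {text}"
        # Middle of the spectrum: today's default conversational framing.
        return text

anti = InterfaceMetaphor(anthropomorphism=0.1)
print(anti.frame_response("Paris is the capital of France."))
```

A real implementation would of course change far more than a text prefix (visual design, interaction patterns, disclosure of training data and labor), but the sketch captures the paper's reframing: the degree of human-likeness becomes an explicit, adjustable design choice rather than an unexamined default.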

Key Points
  • Argues current anthropomorphic LLM interfaces (e.g., ChatGPT's chat) mask crucial differences from humans, leading to user delusion and harm.
  • Proposes a design spectrum from 'anti-anthropomorphism' (transparency-focused) to 'hyper-anthropomorphism' (uncanny) to disrupt the default human-like metaphor.
  • Shifts interface design goal from optimizing usability to encouraging critical engagement by exposing LLMs as sociotechnical systems.

Why It Matters

Provides a framework for designing less misleading AI interfaces, which could reduce user harm and set more realistic expectations for tools like ChatGPT and Claude.