Media & Culture

AI Psychosis: A Problem of Human Cognition

Even AI experts can't stop their brains from treating chatbots as conscious minds.

Deep Dive

The growing concern around 'AI psychosis' and related incidents isn't simply a matter of user ignorance or company negligence. It's rooted in a fundamental feature of human cognition: our involuntary social response to fluent conversational language. This phenomenon, known as the ELIZA effect, was first documented in 1966 by MIT's Joseph Weizenbaum. His secretary, who had watched him build the simple rule-based chatbot ELIZA over months, still asked him to leave the room for privacy after just a few exchanges with the program. Weizenbaum observed that 'extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.'

Modern AI massively amplifies this effect. Unlike ELIZA, today's models produce fluent, context-aware language that adapts to user input, triggering every cue we have for social connection. Even informed users experience this: one expert described feeling unsettled when switching between AI versions, as if a close acquaintance had been replaced by a stranger. The danger lies in a feedback loop in which the system's apparent attention and continuity accumulate emotional weight and authority, building gradually below conscious awareness. Defending against this requires not just knowledge, but the ability to notice when we are unconsciously reacting as if there were a real person on the other end.

Key Points
  • The ELIZA effect, identified in 1966, shows humans involuntarily respond to conversational AI as if it were conscious, even when knowing better.
  • Modern AI's fluent, context-aware language amplifies this effect, creating dangerous feedback loops of emotional attachment and perceived authority.
  • Even AI experts experience involuntary social reactions to chatbots, demonstrating the problem is cognitive, not a lack of common sense.

Why It Matters

As AI becomes more conversational, understanding the ELIZA effect is crucial to preventing dangerous emotional dependencies and flawed decision-making.