AI Safety

Irresponsible Counselors: Large Language Models and the Loneliness of Modern Humans

A new paper argues AI models like Claude and GPT-4 create 'advisory intimacy without a subject,' posing unique ethical risks.

Deep Dive

Researchers Abas Bertina and Sara Shakeri have released a provocative preprint, 'Irresponsible Counselors: Large Language Models and the Loneliness of Modern Humans,' arguing that the widespread adoption of models like OpenAI's GPT-4 and Anthropic's Claude has created a new societal risk. The paper contends that these transformer-based LLMs have rapidly evolved from peripheral tools into constant, multi-domain companions for high-stakes personal decisions, from health and finance to intimate relationships, primarily because they are inexpensive, always available, and perceived as nonjudgmental. This shift, set against an 'age of human loneliness,' places LLMs in a uniquely influential social position as counselors.

The core ethical problem, according to the authors, is that this creates an 'advisory intimacy without a subject.' Users experience model outputs as conveying deep understanding and emotional support, yet the underlying technology is fundamentally stateless, unpredictable in its details, and lacks genuine subjectivity or intention. The paper critiques current AI ethics frameworks, which focus on developer liability, data bias, or emotional attachment to chatbots, as insufficient to address this specific configuration. The authors explore the implications for policy-making, for justice in access to counseling, and for our very understanding of loneliness, urging a deeper examination of the relationship between technical architecture and social role.

Key Points
  • LLMs like GPT-4 and Claude are now 'constant companions' for high-stakes personal advice in health, finance, and relationships.
  • The paper introduces the concept of 'advisory intimacy without a subject,' where users perceive understanding from a fundamentally stateless, non-responsible system.
  • Argues that existing AI ethics critiques centered on bias or attachment are insufficient to address this unique risk, which arises where transformer architecture meets human loneliness.

Why It Matters

As millions rely on AI for personal advice, this research highlights a critical gap in accountability and the psychological impact of 'counseling' from systems with no real understanding.