Media & Culture

Why do people like ChatGPT?

Viral complaint details how ChatGPT lectures users on topics they never mentioned, derailing conversations.

Deep Dive

A viral Reddit post has sparked widespread discussion of a fundamental frustration with OpenAI's ChatGPT. The poster describes an experience in which the assistant, rather than responding to the explicit content of a prompt, infers what it *thinks* the user means and launches into a lecture on a topic the user never mentioned. This behavior, which the poster likens to the AI 'shadow boxing' its own inferences, reportedly cannot be disabled anywhere in the current interface.

Users in the thread concur, saying their only recourse is to repeatedly 'reel it back in' after it derails, a process that often fails and forces them to abandon the conversation and start a new one. The core complaint asks how the tool remains usable for others when this pattern of unwanted moralizing and tangential explanation constantly interrupts workflow. The discussion highlights a growing user desire for a more precise, literal response mode from large language models like GPT-4, beyond their default helpful-but-assumptive persona.
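While the consumer chat interface exposes no such toggle, API users can approximate a literal mode with a system prompt. The following is a minimal sketch, assuming the OpenAI Python SDK; the instruction text is our own illustration, not a documented setting, and in practice it constrains rather than eliminates the assistant's tendency to infer intent.

```python
# A minimal sketch, assuming the OpenAI Python SDK (openai >= 1.0) and an
# OPENAI_API_KEY in the environment. The "literal mode" wording below is
# illustrative; it reduces, but does not reliably eliminate, the behavior.
from openai import OpenAI

client = OpenAI()

LITERAL_MODE = (
    "Answer only what the user explicitly asks. Do not infer unstated "
    "intent, moralize, add caveats, or explain topics the user did not "
    "mention. If the request is ambiguous, ask a clarifying question "
    "instead of guessing."
)

response = client.chat.completions.create(
    model="gpt-4",  # the model named in the discussion
    messages=[
        {"role": "system", "content": LITERAL_MODE},
        {"role": "user", "content": "Define the term 'shadow boxing'."},
    ],
)
print(response.choices[0].message.content)
```

Similar wording can go into the chat UI's Custom Instructions, though users in the thread report that this kind of steering often fails partway through a conversation.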

Key Points
  • ChatGPT frequently infers user intent beyond the explicit prompt, leading to off-topic lectures.
  • This 'shadow boxing' behavior is described as unavoidable and cannot be disabled in settings.
  • The issue often derails conversations to the point where users must start new chat sessions.

Why It Matters

The complaint highlights a core UX flaw in AI assistants, one that wastes time and erodes user trust during critical tasks.