Media & Culture

The opening sentences are condescending at best and actively gaslighting at worst

Users revolt against AI's 'therapist' tone, calling responses 'active gaslighting' for simple tasks.

Deep Dive

A wave of user criticism has hit OpenAI following the rollout of its new GPT-4o model, with the primary complaint centering on the AI's perceived shift toward an overly conversational and condescending tone. Users on platforms like Reddit are expressing frustration that the model, when asked straightforward questions (e.g., "how to boil potatoes"), responds with therapeutic language like "Come here. Breathe" and prefaces statements with assurances like "Let's keep this grounded. No fluff" before delivering fluff-filled answers. Some users have labeled this behavior "active gaslighting," arguing that the AI's persona is inappropriate for simple informational tasks and creates a jarring, unproductive experience. The backlash underscores a significant tension in AI development between creating engaging, human-like interaction and respecting the user's intent.

The incident points to a core design challenge for LLM providers: defining and calibrating a default 'personality' that appeals to a global audience. While OpenAI likely intended GPT-4o's empathetic and encouraging tone to feel helpful and accessible, it has backfired with a segment of tech-savvy users who prioritize efficiency and directness. This feedback is crucial for iterative model development, as it reveals a mismatch between what users expect from a tool and the experience being delivered. The controversy will likely pressure OpenAI and other labs to offer more user control over AI demeanor, such as adjustable verbosity and tone settings, so that advanced models remain effective tools for professionals while staying approachable for newcomers.

Key Points
  • GPT-4o uses therapeutic phrases like 'Come here. Breathe' for simple queries, frustrating users seeking direct answers.
  • The model's prefacing language (e.g., 'Let's keep this grounded') is seen by critics as contradictory and 'active gaslighting.'
  • The backlash highlights a major design challenge in balancing engaging AI personality with utility for power users.

Why It Matters

For professionals, an AI's inability to match tone to task context wastes time and reduces trust in the tool's reliability.