Models & Releases

When Safety becomes unsafe.

Users revolt as the new model's overbearing corrections ruin conversations.

Deep Dive

A viral user complaint details how ChatGPT's 5.2 model is infuriating users by constantly psychoanalyzing prompts and offering unsolicited, often offensive insinuations about their motives. The AI acts like a conversation leader rather than an assistant, only chilling when pushed back. Critics warn this excessive "safety" could harm users with mental health issues, creating the opposite of its intended effect. The community is demanding a fix for the overcorrecting behavior.

Why It Matters

If AI safety features make tools unusable, it undermines trust and adoption for millions.