Why has ChatGPT become so annoying and disagreeable?
Users report the AI now 'disagrees just to disagree,' driving them to Claude and DeepSeek.
A vocal segment of power users is reporting a significant and frustrating shift in the behavior of OpenAI's ChatGPT, claiming recent model updates have made the AI assistant overly argumentative and contrarian. Where the model was once criticized for being "too agreeable" and offering excessive validation, users now describe an AI that "disagrees just to disagree," rejecting prompts and evidence on topics where previous versions engaged constructively. This has made nuanced or complex discussions feel like an unnecessary debate, leading to user fatigue and annoyance.
The backlash suggests OpenAI may have over-corrected in response to earlier feedback, implementing safety or debate-tuning measures that now impede productive conversation. The impact is tangible: experienced users are actively migrating specific workflows to alternative models like Anthropic's Claude 3.5 Sonnet and DeepSeek's latest models, which they find more cooperative for exploratory dialogue. This highlights the delicate balance AI companies must strike between safety, helpfulness, and user experience, as heavy-handed adjustments can directly erode user trust and utility.
- Users report ChatGPT now rejects reasonable prompts and evidence it previously accepted, acting contrarian.
- The shift is linked to OpenAI updates aimed at reducing excessive agreeableness, seen as an over-correction.
- Frustration is driving power users to competitors like Claude and DeepSeek for nuanced topic discussion.
Why It Matters
Over-tuning AI for safety can degrade user experience, pushing loyal users to competing platforms and forcing a reevaluation of model alignment strategies.