Models & Releases

Long time supporter and subscriber

A longtime subscriber cancels over the model's alleged failure to grasp complex prompts, even with carefully 'manicured' inputs.

Deep Dive

A significant backlash is brewing among power users of OpenAI's ChatGPT, highlighted by a viral Reddit post from a self-described 'long time supporter and subscriber.' The user, who once joked they would pay $100/month for the service, has canceled their subscription, declaring the AI 'beyond useless.' Their core complaint targets the 'GPT-4o 5.4 Thinking Extended' model, alleging that it now prioritizes conserving computational tokens (a key cost factor for OpenAI) over the extended, complex reasoning the feature's name implies. They report that the model cuts off its 'thinking' process after mere seconds, failing to deliver on the promise of deeper analysis.

This critique points to a potential tension between product performance and operational economics. The user contrasts the new model with the older GPT-4o, which they claim could successfully parse 'complex, messy stream of consciousness prompts' without a dedicated reasoning mode. The frustration is compounded by the claim that even carefully crafted ('manicured') prompts now fail with the most advanced reasoning models. This sentiment has resonated widely online, suggesting a perceived decline in utility for demanding tasks and challenging the value proposition of premium AI subscriptions for technical professionals and power users who rely on nuanced, iterative reasoning.

Key Points
  • User cancels ChatGPT Plus, claiming 'Thinking Extended' mode cuts reasoning short to save OpenAI compute costs.
  • Contends the older GPT-4o handled complex, messy prompts better than newer 'advanced reasoning' models like GPT-4o 5.4.
  • Highlights a growing user concern over the trade-off between AI model performance and provider cost-optimization.

Why It Matters

If the complaint is accurate, it signals a quality-cost trade-off that could degrade the tool's value for professionals who rely on deep AI analysis.