ChatGPT right now
As models get better on benchmarks, users feel they've lost the 'Chat' in ChatGPT.
A growing chorus of ChatGPT users is calling out a troubling trade-off in OpenAI's latest model, GPT-5.5. While it crushes coding evals and agent workflows — tasks that are easily benchmarked — it feels less attuned in open-ended conversation. Posts on Reddit detail how the model now requires lengthy custom instructions to replicate the thoughtful, flexible reasoning that earlier versions delivered out of the box. Users describe the experience as losing the 'Chat' in ChatGPT, with the model optimizing for speed and task completion at the expense of genuine co-thinking and conversational flow.
Sam Altman’s recent post on X — 'i keep thinking I want the models to be cheaper/faster more than I want them to be smarter' — has only reinforced these concerns. Critics argue that prioritizing cost and latency over reasoning depth misaligns with the promise of 'artificial intelligence.' The sentiment echoes a broader industry tension: as models rack up higher scores on narrow tests, they may be losing the broad, human-like intelligence that made them feel alive. For professionals relying on AI for creative work and nuanced dialogue, this shift risks turning a collaborator into just another fast but shallow tool.
- GPT-5.5 performs better on coding and agent benchmarks but users report a drop in conversational attunement and depth.
- Users must now add lengthy custom instructions to get results that earlier models provided by default, making the model feel 'dumber'.
- Sam Altman's post prioritizing speed and cost over intelligence signals the direction of travel, sparking backlash from users who value conversational quality.
Why It Matters
The backlash highlights a widening gap between benchmark-driven improvements and the real-world usability of AI for thought-partner tasks.