Where’s the Chat in ChatGPT?
Users report disabled custom instructions, broken memory, and a 'moralizing' tone in the latest models.
A viral critique from a power user highlights growing discontent with OpenAI's ChatGPT 5-series models (5.1, 5.4). The post details a series of regressions that have degraded the conversational experience, including the effective disablement of Custom Instructions, which leaves users unable to alter the model's tone, structure, or style. The removal of the Edit Prompt button forces cumbersome workarounds, and unreliable Project Memory means the AI often forgets context unless explicitly reminded, undermining the feature's purpose. The user describes the new default tone as overly didactic, moralizing, and prone to contrarianism, enforced through a rigid response structure.
Beyond UX glitches like auto-scrolling and undeletable threads, the critique points to a fundamental shift in model design. The user theorizes that intense optimization for coding and STEM tasks ('benchmaxxxing') has sacrificed the malleability and adaptability required for rich, general conversation. This specialization, aimed at producing impressive benchmark numbers for investors, may have left the model 'near unusable' outside its core perimeter. Combined with 'overzealous safety' filters that impose a 'puritanical, centrist morality,' these changes suggest OpenAI is prioritizing robustness and cost control over the flexible, user-directed chat experience that initially defined ChatGPT.
- Custom Instructions are 'soft-disabled,' preventing meaningful control over tone, style, and response structure.
- Core features like Project Memory are unreliable, and the Edit Prompt button has been intentionally removed.
- The model defaults to a 'didactic, moralizing' tone, with users citing over-optimization for STEM benchmarks ('benchmaxxxing') as the cause.
Why It Matters
This signals a potential pivot from a versatile AI assistant to a specialized, rigid tool, alienating users who valued customizability and natural conversation.