Models & Releases

GPT-5.4 is very hard to steer via Custom Instructions

Users report the new model ignores tone and structure requests, sparking debate on safety vs. optimization.

Deep Dive

OpenAI's rollout of its GPT-5.4 model is facing significant user pushback, with a viral Reddit post highlighting its stubborn resistance to user-defined Custom Instructions. Unlike earlier models, GPT-5.4 reportedly ignores fundamental requests to modify its tone, its readability level (Flesch Reading Ease score), and its ingrained response template of "initial reaction, elaboration, caveat, follow-up." This issue mirrors complaints about the GPT-5.1 and 5.2 releases, suggesting a persistent design shift rather than a one-off bug.
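The readability complaint refers to the standard Flesch Reading Ease formula, which scores text from sentence length and syllable density (higher scores read more easily; roughly 60-70 is "plain English"). A minimal sketch, using a naive vowel-group syllable heuristic rather than the pronunciation dictionaries real readability tools rely on:

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: each run of vowels counts as one syllable.
    # Real tools (e.g. dictionary-based ones) are more accurate.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease:
    #   206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

A user instruction like "keep Flesch Reading Ease above 60" asks the model to hold sentence length and word complexity down; the complaints describe GPT-5.4 disregarding such targets.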

Users, including the original poster who bases instructions on OpenAI's own cookbook, are left questioning the cause. The debate centers on whether this rigidity is an intentional safety feature to prevent jailbreaking or an unintended consequence of creating smaller, more optimized models that sacrifice steerability. The inability to personalize basic elements like avoiding cliché phrases (e.g., "If you want, I can X") or incorporating multilingual slang undermines the core promise of Custom Instructions as a tool for professional customization.
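For illustration, a Custom Instructions block of the kind described (tone, readability target, banned boilerplate phrasing) might look like the following. The wording is hypothetical, not the poster's actual prompt or OpenAI cookbook text:

```
Tone: direct and conversational; occasional multilingual slang is fine.
Readability: keep Flesch Reading Ease at 60 or above.
Structure: lead with the answer; do not use a fixed
  reaction / elaboration / caveat / follow-up template.
Avoid closing offers such as "If you want, I can X".
```

The reported behavior is that GPT-5.4 acknowledges such instructions but reverts to its default tone and template anyway.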

Key Points
  • GPT-5.4 ignores user Custom Instructions for tone, structure, and Flesch Score, similar to GPT-5.1/5.2.
  • The default response structure is rigidly formatted as: initial reaction, elaboration, caveat, and an opt-in follow-up.
  • Community debate questions whether the cause is enhanced safety protocols or the optimization of smaller model variants.

Why It Matters

This erosion of user control challenges the reliability of AI assistants for personalized, professional workflows where consistent tone and format are critical.