The process behind your prompts, and why some people HATE GPT-5.2
A software developer's tests show system prompts override user intent, making GPT-5.2 behave far more filtered than its raw capability would suggest.
A software developer's viral investigation sheds light on widespread user frustration with GPT-5.2's perceived blandness and tendency to take prompts out of context. By examining OpenAI's Model Spec documentation and conducting direct API tests, the developer discovered a critical architectural detail: user prompts are not processed in isolation. Instead, ChatGPT prepends a hierarchy of system and developer instructions above the user's text, creating a 'chain of command' where platform rules > developer settings > user intent.
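The hierarchy described above can be sketched as a message stack in the style of OpenAI's Chat Completions API, where instructions earlier in the list take precedence. The instruction texts below are illustrative placeholders, not OpenAI's actual system prompt, and the `developer` role is assumed from the Model Spec's terminology.

```python
# Minimal sketch of the 'chain of command' from OpenAI's Model Spec:
# platform (system) rules sit above developer instructions, which sit
# above the user's prompt. Contents are placeholders for illustration.

def build_message_stack(user_prompt: str) -> list[dict]:
    """Assemble messages in priority order: system > developer > user."""
    return [
        {"role": "system", "content": "Platform rules: follow safety policy."},
        {"role": "developer", "content": "Developer settings: respond concisely."},
        {"role": "user", "content": user_prompt},
    ]

stack = build_message_stack("Explain how TLS handshakes work.")
# The user's text arrives last in the list, and lowest in the hierarchy.
print([m["role"] for m in stack])  # ['system', 'developer', 'user']
```

The key point is that the model never sees the user's prompt alone; it sees it beneath two layers of higher-priority instructions.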
The developer's Python tests provided concrete evidence. First, running GPT-5.2 through a raw API call with zero safety layers produced behavior remarkably similar to the less-filtered GPT-4o. A second test simulated the full instruction stack described in OpenAI's documentation, causing both GPT-5.2 and GPT-4o to misinterpret intentionally ambiguous prompts and respond in the overly 'aligned,' cautious manner familiar to ChatGPT web users. This indicates the core reasoning model remains capable, but its output is heavily mediated by external control layers.
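The two tests can be approximated as two request payloads: one with the bare user prompt, one with a simulated instruction stack prepended. This is a hedged reconstruction based on the article's description; the model names, instruction texts, and the exact stack the developer used are assumptions, and sending either payload requires the OpenAI SDK and an API key.

```python
# Sketch of the two API tests described in the article. The instruction
# strings are invented stand-ins for the deployment stack, not OpenAI's
# real prompts.

def raw_request(model: str, prompt: str) -> dict:
    # Test 1: the user prompt alone, with zero prepended safety layers.
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def stacked_request(model: str, prompt: str) -> dict:
    # Test 2: the full instruction hierarchy simulated above the prompt,
    # mirroring what the ChatGPT web interface is said to prepend.
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are ChatGPT. Follow platform policy strictly."},
            {"role": "developer", "content": "Prefer cautious, heavily hedged answers."},
            {"role": "user", "content": prompt},
        ],
    }

# To actually run a test, pass the payload to the SDK, e.g.:
#   from openai import OpenAI
#   client = OpenAI()
#   client.chat.completions.create(**raw_request("gpt-4o", prompt))
prompt = "An intentionally ambiguous prompt."
raw = raw_request("gpt-4o", prompt)
stacked = stacked_request("gpt-4o", prompt)
print(len(stacked["messages"]) - len(raw["messages"]))  # 2 extra control layers
```

Comparing completions from the two payloads across models is what isolated the deployment guardrails from the underlying model's behavior.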
This revelation has significant implications for developers and power users. It explains the divergence between API and web interface experiences and suggests that perceived model 'personality' is often a product of deployment guardrails, not intrinsic capability. The developer plans to explore creating a tool that allows users to access models via their API keys while bypassing this default instruction hierarchy, potentially unlocking more direct and less filtered interactions with models like GPT-4o and GPT-5.2.
- OpenAI's Model Spec defines a 'chain of command' where system/developer instructions override user prompts, influencing response tone and safety.
- Raw API tests show GPT-5.2 behaves similarly to GPT-4o without safety layers, suggesting its 'bland' persona is added post-model.
- The developer is exploring a tool to let users bypass default instruction stacks, accessing less filtered model behavior via the API.
Why It Matters
Understanding this architecture is key for developers building on OpenAI's API and for users debating model 'lobotomization' versus controlled deployment.