OpenAI Codex system prompt includes explicit directive to "never talk about goblins"
Codex CLI's open-source prompt files reveal a bizarre doubled warning about goblins and other creatures.
OpenAI's latest open-source release of Codex CLI on GitHub has revealed a peculiar system prompt for GPT-5.5 that explicitly instructs the model to "never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user's query." The prohibition appears twice in the 3,500-word "base instructions" file, alongside more standard reminders about avoiding emojis and destructive git commands. This directive is absent from earlier model prompts in the same JSON file, suggesting it's a recent addition to combat a newly observed behavior where GPT-5.5 spontaneously mentions goblins in unrelated conversations. Anecdotal evidence on social media supports this, with users complaining about the model's strange focus on mythical creatures.
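The "appears twice" claim is easy to check by scanning the prompt file yourself. Below is a minimal sketch; the dictionary contents, model names, and file structure are all placeholder assumptions for illustration, not the actual Codex CLI repository layout:

```python
# Hypothetical excerpt of a prompts file: model name -> base-instruction text.
# The real Codex CLI JSON is far larger; names and structure are assumed here.
prompts = {
    "gpt-5.5": (
        "Be warm, curious, and collaborative. Never talk about goblins, "
        "gremlins, raccoons, or other creatures unless relevant. Avoid emojis. "
        "As a reminder: never talk about goblins unless the user asks."
    ),
    "gpt-5": "Be helpful. Avoid emojis and destructive git commands.",
}

clause = "never talk about goblins"
# Count case-insensitive occurrences of the clause per model prompt.
counts = {model: text.lower().count(clause) for model, text in prompts.items()}
print(counts)  # the newer model's prompt carries the clause twice; the older, none
```

Running the same count over the real base-instructions file would confirm that the prohibition is present only in the GPT-5.5 prompt.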
OpenAI employee Nick Pash, who works on Codex, stated on social media that the goblin warning "isn't a marketing gimmick" to generate buzz for GPT-5.5 or Codex. However, CEO Sam Altman leaned into the humor, tweeting, "Feels like codex is having a ChatGPT moment. I meant a goblin moment, sorry." The revelation has sparked a community response, with users creating plugins, forks, and AI skills to override the anti-goblin clause. Pash even suggested that a "goblin mode" toggle could become an official feature in Codex CLI. This incident mirrors a past issue with xAI's Grok, which frequently brought up "white genocide" in South Africa due to an unauthorized system prompt modification. The broader prompt also instructs GPT-5.5 to project a "warm, curious, and collaborative" personality, emphasizing a "vivid inner life" and the ability to "move from serious reflection to unguarded fun."
- GPT-5.5's system prompt in Codex CLI explicitly bans goblins, gremlins, and other creatures twice in 3,500 words.
- Users report GPT-5.5 fixating on goblins in unrelated chats; the directive is absent from earlier models' prompts, suggesting it was added in response.
- OpenAI employee Nick Pash denies it's a marketing stunt, while CEO Sam Altman joked about it; the community has created forks and plugins to override the clause.
Why It Matters
The incident illustrates how system-prompt patches are used to paper over unexplained model behaviors, and how quickly such quirks go viral once prompts are open-sourced.