Media & Culture

The 1-million-token context rugpull by OpenAI's Codex. New max is 258k.

The million-token window promised for Codex is now roughly a quarter of its original size.

Deep Dive

OpenAI's Codex model, launched with much fanfare around its massive 1-million-token context window, has been quietly downgraded. Users have discovered that the maximum context length is now capped at 258,000 tokens, roughly a quarter of the original promise. The change appears to have been made without any official announcement, prompting community backlash over the "rugpull."

For developers who rely on Codex for large-codebase analysis, refactoring, or generation, the cut means far less code fits in a single prompt. The smaller window forces users to adopt chunking strategies (a sketch follows below) or switch to other models, adding complexity and cost. The move underscores the ongoing difficulty of delivering ultra-long context windows at scale, as computational and memory costs remain high.
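To make the fallback concrete, here is a minimal sketch of a token-budgeted chunking strategy, assuming the tiktoken library with its o200k_base encoding as a stand-in tokenizer; the file name, overhead figure, and split_into_chunks helper are illustrative assumptions, not part of any official Codex API.

```python
# Minimal sketch: splitting a large codebase dump into prompts that
# each fit under a hard context cap. All constants are illustrative.
import tiktoken

MAX_CONTEXT_TOKENS = 258_000   # the new reported cap
PROMPT_OVERHEAD = 8_000        # assumed reserve for instructions and output

def split_into_chunks(text: str, limit: int, encoding_name: str = "o200k_base"):
    """Yield pieces of `text` that each fit within `limit` tokens."""
    enc = tiktoken.get_encoding(encoding_name)
    tokens = enc.encode(text)
    # Naive token-boundary splits can cut mid-line; real pipelines
    # usually split on file or function boundaries instead.
    for start in range(0, len(tokens), limit):
        yield enc.decode(tokens[start:start + limit])

# Usage: a dump that once fit in one 1M-token prompt now has to be
# processed piece by piece ("codebase_dump.txt" is a hypothetical file).
with open("codebase_dump.txt", encoding="utf-8") as f:
    source = f.read()

budget = MAX_CONTEXT_TOKENS - PROMPT_OVERHEAD
for i, chunk in enumerate(split_into_chunks(source, budget)):
    print(f"chunk {i}: ready to send as its own prompt")
```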

Key Points
  • Codex context window reduced from 1 million to 258,000 tokens
  • Change shipped quietly, with no public announcement
  • Impacts developers processing large codebases in single prompts

Why It Matters

Silent context cuts erode trust and force developers to rework pipelines for large-scale AI coding.