Open Source

MiniMax M2.7 has been leaked

Leaked docs reveal MiniMax's next AI model with a massive 1 million-token context window.

Deep Dive

Details of MiniMax's unreleased M2.7 large language model have surfaced online via posts on DesignArena and briefly accessible internal documentation. The leaked material, which was quickly removed but captured by users, reveals key technical specifications that suggest a major leap in the company's AI capabilities. The model reportedly features a massive 1 million-token context window, allowing it to process and reason over extremely long documents, books, or codebases in a single session. It can also reportedly generate outputs up to 128,000 tokens long, enabling the creation of extensive reports, stories, or analyses without interruption.

This leak positions the M2.7 as a formidable entrant in the high-end AI model race, directly challenging OpenAI's GPT-4o and Anthropic's Claude 3.5 Sonnet, which offer 128K- and 200K-token context windows respectively. The 1M-token context is a standout feature that would give MiniMax a significant edge in applications requiring deep, long-context understanding, such as legal document review, academic research synthesis, or complex software development. The rapid removal of the documentation suggests an official release may be imminent, which could force competitors to respond to this new benchmark for context length and output capacity.

Key Points
  • Leaked specs show a 1 million token context window, far exceeding most current models.
  • Capable of generating 128,000 token outputs for long-form content creation.
  • Positions MiniMax as a direct competitor to OpenAI and Anthropic in the high-end model tier.
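To get a feel for what these numbers mean in practice, here is a rough sketch using the common heuristic of about 4 characters per English token. This is purely illustrative: the heuristic, function names, and the reserved-output assumption are ours, and MiniMax's actual tokenizer will count differently.

```python
# Illustration only: estimate whether a document fits in a model's
# context window, using the rough ~4-characters-per-token rule of
# thumb. Real tokenizers (including MiniMax's) will differ.

CHARS_PER_TOKEN = 4  # rough heuristic for English text

def estimate_tokens(text: str) -> int:
    """Very rough token estimate from character count."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_context(text: str, context_window: int = 1_000_000,
                    reserved_output: int = 128_000) -> bool:
    """Check whether the prompt plus a maximum-length reply would fit."""
    return estimate_tokens(text) + reserved_output <= context_window

# A ~300-page book is roughly 600,000 characters -> ~150,000 tokens:
# beyond a 128K window, but comfortable in a 1M-token one even after
# reserving room for a full 128K-token reply.
book = "x" * 600_000
print(estimate_tokens(book))           # 150000
print(fits_in_context(book))           # True  (1M-token window)
print(fits_in_context(book, 128_000))  # False (128K-token window)
```

The point of the sketch is the order of magnitude: entire books or mid-sized codebases that overflow today's 128K-200K windows would fit in a single 1M-token session with room to spare for long outputs.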

Why It Matters

A 1M token context could revolutionize long-document analysis, legal tech, and code generation, raising the bar for all AI providers.