Minimax M2.7 Released
The new 1.4-trillion-parameter model offers a 128K context window and roughly 50% lower API costs.
Minimax, a prominent Chinese AI startup, has officially launched its latest flagship model, M2.7. This 1.4 trillion parameter multimodal system is designed to compete directly with top-tier Western models, demonstrating significant prowess in both English and Chinese. On key benchmarks, M2.7 scored 78.5% on the Massive Multitask Language Understanding (MMLU) test, edging out OpenAI's GPT-4 Turbo. Its performance is particularly strong in Chinese language and reasoning tasks, a critical differentiator in the global AI landscape.
Technically, M2.7 supports a 128,000-token context window, enabling it to process lengthy documents and maintain coherent, long-form conversations. The model is now accessible through Minimax's API, with pricing set aggressively at approximately 50% of the cost for comparable outputs from OpenAI and Anthropic. This strategic pricing, combined with its benchmark performance, makes M2.7 a compelling, cost-efficient option for developers and enterprises building multilingual AI agents and applications that require deep contextual understanding.
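As a rough illustration of how a developer might exercise that API, here is a minimal Python sketch. The endpoint URL, model identifier, and OpenAI-style request shape below are all assumptions for illustration, not details taken from Minimax's documentation; consult the official API reference for the real values.

```python
import json

# Hypothetical values -- the real endpoint path and model identifier
# must come from Minimax's API documentation.
API_URL = "https://api.minimax.example/v1/chat/completions"  # assumed, OpenAI-style
MODEL = "minimax-m2.7"  # assumed model identifier


def build_chat_request(messages, max_tokens=1024):
    """Assemble an OpenAI-style chat-completion payload.

    The 128K-token context window means `messages` can carry very long
    documents; the server, not this client, enforces that limit.
    """
    return {
        "model": MODEL,
        "messages": messages,
        "max_tokens": max_tokens,
    }


payload = build_chat_request(
    [{"role": "user", "content": "Summarize this contract in English and Chinese."}]
)
print(json.dumps(payload, indent=2))

# Sending the request would look roughly like this (needs an API key
# and the `requests` package):
# resp = requests.post(API_URL, json=payload,
#                      headers={"Authorization": f"Bearer {API_KEY}"})
```

Because the payload mirrors the widely used chat-completions shape, existing OpenAI-compatible client code could in principle be pointed at such an endpoint with only a base-URL and model-name change.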
- Achieves 78.5% on the MMLU benchmark, outperforming GPT-4 Turbo.
- Excels in Chinese language tasks with 1.4 trillion parameters.
- Offers 128K context and API costs roughly 50% less than competitors.
Why It Matters
Provides a high-performance, cost-competitive alternative for global enterprises, especially those requiring robust Chinese language AI capabilities.