Zhipu AI Open-Sources GLM-5.1 Model, Outperforms GPT-5.4 in Coding
A 744B-parameter open-source model outperforms top proprietary AI in coding, with no licensing fees for those willing to self-host.
In a defining week for AI accessibility, Chinese AI firm Zhipu AI open-sourced its flagship GLM-5.1 model under a permissive MIT license. The 744-billion-parameter Mixture-of-Experts (MoE) model, which activates 40 billion parameters per forward pass, boasts a 200,000-token context window. Most notably, it reportedly outperforms both OpenAI's GPT-5.4 and Anthropic's Claude Opus 4.6 on SWE-Bench Pro, a benchmark for expert-level software engineering tasks. Its release makes a top-tier coding model freely available for self-hosting, with costs limited to compute infrastructure.
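For a sense of what "costs limited to compute infrastructure" means in practice, here is a back-of-envelope sketch using the figures above (744B total parameters, 40B active per pass). The 8-bit weight assumption is illustrative only; Zhipu AI has not specified a quantization level.

```python
# Back-of-envelope memory math for a Mixture-of-Experts model.
# Figures from the announcement: 744B total parameters, 40B active
# per forward pass. The 1-byte-per-parameter (8-bit) figure is an
# illustrative assumption, not something stated in the release.

def weight_memory_gb(num_params: float, bytes_per_param: float = 1.0) -> float:
    """Approximate memory needed just to hold the weights, in GB."""
    return num_params * bytes_per_param / 1e9

total = weight_memory_gb(744e9)   # all experts must stay resident
active = weight_memory_gb(40e9)   # parameters actually used per token

print(f"Total weights:   ~{total:.0f} GB")
print(f"Active per pass: ~{active:.0f} GB")
```

The takeaway of the MoE design: per-token compute stays near that of a 40B dense model, but the hardware must still hold all 744B parameters in memory, which is why "free" still implies a substantial infrastructure bill.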
This open-source move stands in stark contrast to Anthropic's simultaneous announcement of Claude Mythos, its most capable model ever. Mythos is locked behind a 'Project Glasswing' firewall, accessible only to roughly 50 selected organizations such as AWS, Apple, and JPMorgan for defensive cybersecurity scanning. With preview pricing at $25 per million input tokens and $125 per million output tokens, Mythos represents the high-cost, tightly controlled pole of the AI industry. The week of April 2026 saw eight major model releases, but the defining story was the philosophical fracture over who gets to use cutting-edge AI: everyone, or a select few.
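To make the preview pricing concrete, a quick sketch of what a single Mythos API call would cost at the announced rates. The request sizes in the example are illustrative assumptions, not figures from the announcement.

```python
# Per-request cost at Claude Mythos preview pricing, per the
# announcement: $25 per million input tokens, $125 per million
# output tokens. The example token counts are hypothetical.

INPUT_RATE = 25 / 1_000_000    # dollars per input token
OUTPUT_RATE = 125 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one API call at preview pricing."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a large defensive code scan: 150k tokens in, 8k tokens out
print(f"${request_cost(150_000, 8_000):.2f}")  # → $4.75
```

At these rates a single large-context scan runs into dollars per call, underscoring the gap between Mythos's metered access and GLM-5.1's flat infrastructure cost.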
- Zhipu AI's GLM-5.1 is a 744B-parameter open-source model that beats GPT-5.4 on coding benchmarks (SWE-Bench Pro).
- Released under an MIT license, it's free to self-host, contrasting with Anthropic's gated Claude Mythos, priced at $125 per million output tokens.
- The simultaneous releases highlight a major industry split between open-weight accessibility and proprietary, controlled deployment for security.
Why It Matters
Professionals now have free, state-of-the-art coding assistance, while enterprises face a strategic choice between open and closed AI ecosystems.