Qwen3.6-27B released!
Alibaba's new 27-billion-parameter open-source model outperforms its 397B predecessor in coding tasks.
Alibaba's Qwen team has launched Qwen3.6-27B, a 27-billion-parameter open-source model that delivers a surprising performance punch. The headline achievement is its coding capability: it reportedly surpasses the team's own vastly larger Qwen3.5-397B-A17B model across all major coding benchmarks. This marks a significant efficiency breakthrough, demonstrating that smaller, more accessible models can now match, and even exceed, the specialized performance of models with more than ten times the parameters.
The model is designed for strong reasoning across both text and multimodal tasks and introduces a flexible 'thinking mode' toggle. This allows users to switch between a faster, direct response mode and a more deliberate, chain-of-thought reasoning process. Crucially, it is released under the fully permissive Apache 2.0 license, granting the community complete freedom for commercial and research use. It is immediately available for testing on Qwen Studio and for download on Hugging Face and GitHub.
- Outperforms its own 397B predecessor in coding benchmarks, a major efficiency leap.
- Released under the fully open Apache 2.0 license for unrestricted commercial use.
- Features a toggleable 'thinking mode' for switching between fast and deliberate reasoning.
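For developers curious how the toggle might look in practice, here is a minimal sketch of a request payload for an OpenAI-compatible endpoint serving the model. It assumes the `chat_template_kwargs` / `enable_thinking` convention used when serving earlier Qwen3 releases; the exact field names for Qwen3.6 may differ, so treat them as illustrative.

```python
import json

def build_request(prompt: str, thinking: bool) -> dict:
    """Build a chat-completion payload with the thinking toggle set.

    The "chat_template_kwargs"/"enable_thinking" fields follow the
    convention of earlier Qwen3 releases; they are assumptions here,
    not confirmed parameters for Qwen3.6-27B.
    """
    return {
        "model": "Qwen3.6-27B",  # placeholder model identifier
        "messages": [{"role": "user", "content": prompt}],
        "chat_template_kwargs": {"enable_thinking": thinking},
    }

# Fast, direct responses for simple tasks; deliberate chain-of-thought
# reasoning for harder ones.
fast = build_request("Sort a list in Python.", thinking=False)
deliberate = build_request("Debug this race condition.", thinking=True)

print(json.dumps(fast, indent=2))
```

Switching modes per request, rather than per deployment, lets one served model cover both low-latency and high-accuracy use cases.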
Why It Matters
This gives developers a powerful, free, and commercially usable coding model, challenging the notion that bigger models are always better.