Qwen 3.6?
New open-source MoE model beats Llama 3 on coding benchmarks with 3B active parameters...
Deep Dive
Alibaba's Qwen team released Qwen3.6-35B-A3B 22 days ago and Qwen3.6-27B 15 days ago, as spotted by Reddit user jacek2023. The community is now speculating about when the 9B and 122B variants will arrive.
Key Points
- Qwen3.6-35B-A3B is an MoE model with 35B total parameters but only 3B active per token, matching larger dense models at a fraction of the inference cost (see the routing sketch after this list).
- The Qwen3.6-27B dense model excels on reasoning benchmarks and supports a 128K-token context window.
- Leaked model cards hint at upcoming 9B and 122B versions, likely to expand the family within weeks.
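To make the active-parameter figure concrete, here is a minimal sketch of top-k expert routing, the standard mechanism behind MoE layers. The layer sizes, expert count, and class name below are illustrative assumptions, not Qwen3.6's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoELayer(nn.Module):
    """Illustrative mixture-of-experts layer with top-k routing.

    Only k of n_experts feed-forward blocks run for each token, so the
    parameter count active per token is a fraction of the total.
    Sizes are toy values, not Qwen3.6's real configuration."""

    def __init__(self, d_model=64, d_ff=256, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)  # gating network
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
            )
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (num_tokens, d_model)
        gate_logits = self.router(x)                 # (num_tokens, n_experts)
        weights, expert_idx = gate_logits.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)         # normalize over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.k):                   # run only the selected experts
            for e in expert_idx[:, slot].unique().tolist():
                mask = expert_idx[:, slot] == e      # tokens routed to expert e
                out[mask] += weights[mask, slot:slot + 1] * self.experts[e](x[mask])
        return out

moe = TinyMoELayer()
tokens = torch.randn(10, 64)
print(moe(tokens).shape)  # torch.Size([10, 64]); only 2 of 8 experts ran per token
```

The key point is in the forward pass: the router picks k experts per token, so the remaining experts' weights never enter the computation. That is how a model like Qwen3.6-35B-A3B can hold 35B parameters yet compute with roughly 3B per token.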
Why It Matters
Open-source AI competition intensifies as Qwen now offers both MoE efficiency and dense-model strength, cutting deployment costs for enterprises.