Viral Wire

Alibaba Open-Sources Qwen3.6-27B and Qwen3.6-35B-A3B Models, Outperforming Google Gemma 4 in Coding

Open-source Qwen3.6-35B-A3B leads Gemma 4 by 21 points on SWE-bench

Deep Dive

Alibaba has released two new open-source models in its Qwen family: the dense Qwen3.6-27B, with 27 billion parameters, and the sparse Qwen3.6-35B-A3B, a mixture-of-experts model that activates only 3 billion of its 35 billion total parameters per token. The sparse model is the clear highlight, outperforming Google's Gemma 4 26B A4B by 21 percentage points on the SWE-bench Verified benchmark for agentic coding. In practice, Qwen3.6-35B-A3B solves complex software engineering tasks with higher accuracy while computing over far fewer active parameters, making it cheaper and more efficient to deploy.
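The efficiency claim rests on sparse routing: for each token, a router picks a small subset of "experts," so most of the model's weights sit idle on any given forward pass. Here is a minimal toy sketch of that idea in Python. The expert count, hidden size, and routing scheme are illustrative assumptions for this sketch, not Qwen3.6's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixture-of-experts layer: illustrates how a sparse model can keep
# most parameters inactive per token. All sizes here are made up.
N_EXPERTS = 8      # total experts (the bulk of the parameter count)
TOP_K = 2          # experts actually activated per token
D_MODEL = 16       # hidden size (tiny, for illustration)

# A router matrix plus one weight matrix per expert.
router_w = rng.normal(size=(D_MODEL, N_EXPERTS))
experts = rng.normal(size=(N_EXPERTS, D_MODEL, D_MODEL))

def moe_forward(x):
    """Route one token through only TOP_K of the N_EXPERTS experts."""
    logits = x @ router_w                     # (N_EXPERTS,) routing scores
    top = np.argsort(logits)[-TOP_K:]         # indices of the chosen experts
    gates = np.exp(logits[top])
    gates /= gates.sum()                      # softmax over the selected experts
    # Only the chosen experts' weight matrices are ever touched.
    out = sum(g * (x @ experts[i]) for g, i in zip(gates, top))
    return out, top

token = rng.normal(size=D_MODEL)
out, used = moe_forward(token)

total_params = experts.size
active_params = TOP_K * D_MODEL * D_MODEL
print(f"active share: {active_params / total_params:.2%}")  # 25.00% here
```

With 2 of 8 experts active, only a quarter of the expert parameters do work per token; Qwen3.6-35B-A3B's roughly 3B-of-35B ratio (under 10%) pushes the same trade further.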

Both models target advanced agentic coding and multimodal reasoning, supporting tasks such as code generation, debugging, and visual question answering. Because the release is open source, developers can fine-tune and deploy the models on their own infrastructure without paying API costs, which positions Alibaba as a strong contender against Google and Meta in the open-source AI race. The models are available now on Hugging Face and GitHub.

Key Points
  • Qwen3.6-35B-A3B activates only 3B of its 35B total parameters per token, yet beats Google Gemma 4 26B A4B by 21 points on SWE-bench Verified for agentic coding.
  • Two model variants released: dense 27B-parameter Qwen3.6-27B and sparse Qwen3.6-35B-A3B, both optimized for coding and multimodal reasoning.
  • Models are fully open-source, available on Hugging Face and GitHub, allowing developers to self-host and fine-tune without API costs.

Why It Matters

Alibaba's efficient sparse model challenges Google's dominance, giving developers cheaper, open-source access to top-tier coding AI.