Open Source

Qwen3.6-35B-A3B Uncensored Aggressive is out with K_P quants!

New 35B-parameter model achieves 0/465 refusals in testing with no capability loss, and ships with optimized K_P quants.

Deep Dive

Independent developer HauhauCS has launched Qwen3.6-35B-A3B-Uncensored-Aggressive, a specialized variant of Alibaba's Qwen language model that removes content restrictions while retaining full capabilities. The 35B-parameter Mixture-of-Experts model uses 256 experts with 8 routed per token, and offers a 262K-token context window plus multimodal support for text, image, and video inputs. What makes this release notable is its "aggressive" designation: the model showed zero refusals (0/465 in testing) with no personality alteration or capability loss, addressing a common pain point for developers who need uncensored AI for research and specialized applications.

The model ships with HauhauCS's custom K_P quantization, which uses model-specific analysis to preserve quality where it matters most. These quants deliver roughly one to two quality levels of uplift at approximately 5-15% larger file sizes than standard quantization methods. The release is fully compatible with popular inference frameworks such as llama.cpp and LM Studio (though Ollama may require additional configuration), and it includes multiple quantization levels from Q8_K_P down to IQ2_M, plus vision support via an mmproj file. Users should note that llama.cpp requires the --jinja flag and that LM Studio's interface may show cosmetic display issues.
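To make the llama.cpp requirement concrete, a minimal invocation might look like the sketch below. The GGUF filename and context size are illustrative assumptions, not taken from the release; only the --jinja requirement comes from the release notes.

```shell
# Run a K_P quant under llama.cpp's CLI (filename is illustrative).
# --jinja applies the model's embedded Jinja chat template, which
# this release requires; -c sets the context size (the model
# supports up to 262K tokens, but smaller values save memory).
llama-cli \
  -m Qwen3.6-35B-A3B-Uncensored-Aggressive-Q4_K_P.gguf \
  --jinja \
  -c 32768 \
  -p "Summarize the following article:"
```

For image or video inputs, the bundled mmproj file would be loaded alongside the main GGUF via llama.cpp's multimodal tooling.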

Available on Hugging Face with comprehensive documentation, this release represents a significant step forward for open-source, uncensored AI models that maintain enterprise-grade capabilities. The developer has also established a Discord community for updates and collaboration, signaling growing interest in specialized model variants that prioritize functionality over content filtering.

Key Points
  • 35B parameter MoE model with 0/465 refusals and no capability degradation
  • Custom K_P quantization provides 1-2 quality levels uplift at 5-15% larger file sizes
  • 262K context window with multimodal (text+image+video) support and hybrid attention

Why It Matters

Enables uncensored AI applications for research and development without sacrificing model capabilities or performance.