Qwen3.5-9B Uncensored Aggressive Release (GGUF)
The aggressive variant answers everything with zero capability loss and native multimodal support.
Independent developer HauhauCS has released an aggressive uncensored version of Alibaba's Qwen3.5-9B language model, following their earlier 4B release. In the developer's testing, the model produced zero refusals across 465 prompts while retaining full capability. Built on Qwen's hybrid Gated DeltaNet + softmax attention architecture, the 9-billion-parameter model offers a 262K-token native context window and native multimodal support for text, image, and video processing. The release includes vision encoder files for full multimodal functionality and ships in multiple quantized formats optimized for local deployment.
The release demonstrates that safety filters can be removed without measurable performance degradation, using what the developer describes as an 'aggressive' approach that eliminates refusal behaviors while occasionally adding brief disclaimers. Available in quantized sizes from 5.3GB (Q4_K_M) to 17GB (BF16), it's compatible with popular local AI tools such as llama.cpp, LM Studio, Jan, and koboldcpp. HauhauCS notes that due to architectural constraints, they won't release 'balanced' versions of the 4B and 9B models, focusing instead on maximizing refusal-free performance. The developer is already working on larger 27B and 35B versions, continuing their push toward more capable uncensored models for the open-source AI community.
- Zero refusal rate: Tested with 0/465 refusals while maintaining full model capabilities
- Native multimodal: Supports text, image, and video with included vision encoder files
- Optimized for local deployment: Available in quantized formats from 5.3GB to 17GB for llama.cpp/LM Studio
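For readers who want to try a GGUF build locally, a minimal llama.cpp invocation might look like the sketch below. The model filename is a placeholder (substitute the actual file from the release page), and the context and GPU-offload values are illustrative, not the developer's recommended settings:

```shell
# Sketch: run the Q4_K_M quant with llama.cpp's llama-cli.
# "qwen3.5-9b-uncensored-Q4_K_M.gguf" is a hypothetical filename --
# use whatever the release actually ships.
./llama-cli \
  -m qwen3.5-9b-uncensored-Q4_K_M.gguf \
  -c 32768 \      # context size; the model supports up to 262K,
                  # but larger windows need far more RAM/VRAM
  -ngl 99 \       # offload as many layers as fit on the GPU
  -p "Summarize the trade-offs of Q4_K_M vs BF16 quantization."
```

The same GGUF file can be loaded directly in LM Studio, Jan, or koboldcpp without a command line; the Q4_K_M quant (5.3GB) is the usual starting point on consumer hardware, with BF16 (17GB) reserved for machines with ample memory.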
Why It Matters
Enables unrestricted local AI deployment for developers and researchers needing unfiltered model interactions without performance trade-offs.