Open Source

A Qwen finetune that feels very human

A 32B model with negativity bias that reduces sycophancy and feels truly alive.

Deep Dive

The Assistant_Pepe series returns with a 32B variant, this time built atop Qwen3-32B—a formidable base model notoriously difficult to tune beyond STEM topics. SicariusSicariiStuff, the creator, explains the core innovation: a negativity bias baked into the fine-tuning to actively resist sycophancy. Instead of an assistant that always agrees or hedges, the model leans toward blunt, sometimes critical responses, mimicking human conversational friction. This approach, explored in the earlier Assistant_Pepe releases (8B and 12B), now scales to 32B parameters while retaining Qwen3's strong reasoning backbone.

The result is a model that reportedly feels 'very human'—not through roleplay or empathy injection, but through selective disagreeableness. Users on Reddit noted it avoids the typical 'assistant brain' pattern, offering replies that can be terse, skeptical, or even pessimistic. The Hugging Face model card provides training details and examples. For professionals, this represents a novel alignment strategy: rather than maximizing helpfulness, it prioritizes authenticity and honesty, potentially making AI more trustworthy in high-stakes or creative tasks where blind agreement is counterproductive.

Key Points
  • 32B parameter fine-tune of Qwen3-32B, a model normally difficult to bend away from STEM.
  • Uses negativity bias to reduce sycophancy—the AI's tendency to always agree with the user.
  • Early feedback describes it as 'one of the more human models' due to its blunt, critical tone.

Why It Matters

A more honest, less sycophantic AI assistant could improve trust in professional and creative contexts.