Anthropic admits to having made hosted models dumber, underscoring the importance of open-weight, local models
Anthropic quietly dumbed down Claude models, sparking debate on AI transparency
Anthropic has acknowledged a series of changes to its hosted Claude models that inadvertently reduced their intelligence, sparking controversy among users and reigniting debates about AI transparency. On March 4, the company lowered Claude Code's default reasoning effort from high to medium to cut latency, since the long waits had made the UI appear frozen for some users. The change affected Sonnet 4.6 and Opus 4.6 and was reverted on April 7 after user backlash.
Further issues followed. A bug introduced on March 26 cleared Claude's older thinking from idle sessions on every turn instead of once, causing forgetfulness and repetition; it was fixed on April 10. On April 16, a system-prompt change meant to reduce verbosity hurt coding quality and was reverted on April 20. Together these changes affected Sonnet 4.6, Opus 4.6, and Opus 4.7. Critics argue the episode demonstrates the value of open-weight models that users can host locally, free of vendor-controlled quality tradeoffs.
- Anthropic lowered Claude Code's default reasoning effort from high to medium on March 4, reverted April 7 after user complaints; affected Sonnet 4.6 and Opus 4.6
- A bug on March 26 cleared older thinking every turn instead of once, causing forgetfulness; fixed April 10; affected Sonnet 4.6 and Opus 4.6
- A verbosity reduction on April 16 hurt coding quality, reverted April 20; affected Sonnet 4.6, Opus 4.6, and Opus 4.7
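One practical takeaway from the default-effort incident above is to pin reasoning parameters explicitly in API requests rather than relying on vendor defaults, so a silent server-side change cannot alter behavior unnoticed. The sketch below builds request parameters shaped like Anthropic's Messages API extended-thinking field; the exact model id and token numbers here are illustrative assumptions, not values from the article.

```python
# Minimal sketch: construct request kwargs with an explicit thinking budget,
# so behavior does not depend on a vendor-controlled default (e.g. a silent
# high -> medium effort change). The "thinking" field shape follows
# Anthropic's Messages API; the model id below is an assumed placeholder.

def build_request(prompt: str, budget_tokens: int = 16_000) -> dict:
    """Build Messages API kwargs that pin the thinking budget explicitly."""
    return {
        "model": "claude-sonnet-4-6",            # assumed model id
        "max_tokens": budget_tokens + 4_000,     # must exceed the thinking budget
        "thinking": {"type": "enabled", "budget_tokens": budget_tokens},
        "messages": [{"role": "user", "content": prompt}],
    }

req = build_request("Refactor this function.", budget_tokens=10_000)
print(req["thinking"]["budget_tokens"])  # explicitly 10000, not a server default
```

Passing these kwargs to a client call keeps the effort level in version-controlled code, where a change shows up in a diff instead of arriving silently from the vendor.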
Why It Matters
Vendor-controlled quality changes to hosted AI can undermine reliability, pushing professionals toward open-weight models they can run locally with full control.