Media & Culture

AMD Senior Director on Opus regression: "we did not find that any of the suggested settings changes meaningfully changed our experience"

AMD's senior AI director says disabling 'adaptive thinking' didn't fix Claude's degraded coding abilities.

Deep Dive

AMD's Senior Director of AI, George Hotz, ignited a public debate by documenting what he claims is a significant performance regression in Anthropic's Claude Opus model on complex engineering and coding tasks. In a detailed GitHub issue, Hotz presented evidence that the model had become "lazier" and less reliable, and could no longer be trusted with the advanced work AMD relies on. The claim was amplified by major tech publications and Hacker News, putting Anthropic's model stewardship under a microscope.

Anthropic staff responded by attributing the perceived regression to their 'adaptive thinking' feature, which they said was consuming too many tokens and exhausting user quotas too quickly. They suggested users disable the feature via an environment variable. Hotz countered that his team had already tried this mitigation and found it did not "meaningfully change our experience," leaving the core performance issue unresolved. He is now demanding a stable, high-performance baseline option from Anthropic, even at a higher price, to ensure predictable results for critical engineering workloads.
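For readers who want to try the suggested mitigation themselves, it would look something like the following. The article does not name the exact variable; `MAX_THINKING_TOKENS` is an assumption based on Claude Code's documented settings, so treat this as a sketch rather than Anthropic's verbatim guidance.

```shell
# Hypothetical sketch: cap (effectively disable) extended thinking
# before launching a Claude Code session. The variable name is an
# assumption drawn from Claude Code's settings docs, not this article.
export MAX_THINKING_TOKENS=0

# Then start the session as usual; the limit applies to this shell.
claude
```

Per-project alternatives (e.g. a `.claude/settings.json` `env` block) exist as well, but an exported variable is the simplest way to A/B-test whether thinking budget, rather than the model itself, is eating the quota.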

The controversy has sparked speculation about the motives behind such model changes. The most cynical view suggests AI labs may intentionally dial back performance before a new release to create a more impressive leap with the next version. A more generous interpretation is that Anthropic may have nerfed capabilities after discovering Opus could find critical security vulnerabilities. Regardless of intent, the incident underscores a critical trust issue: opaque changes that degrade performance without clear communication can derail professional workflows and erode developer confidence in these tools as stable platforms.

Key Points
  • AMD's AI lead George Hotz documented a coding performance regression in Claude Opus, claiming it became 'lazier'.
  • Anthropic suggested disabling 'adaptive thinking' to save tokens, but Hotz says the fix didn't restore original capability.
  • The incident raises concerns about model stability, transparency, and potential 'rug-pull' updates that hurt professional workflows.

Why It Matters

Opaque model changes can break professional development pipelines, forcing costly rework and undermining trust in AI platforms.