After using Opus 4.7… yes, the performance drop is real.
Users report the new model hallucinates pricing data and burns through tokens 20% faster.
Anthropic's release of Claude Opus 4.7 has sparked a wave of user frustration, with reports pointing to a noticeable performance regression compared to the previous Opus 4.6. The most critical issue is a marked increase in confident hallucinations: the model states incorrect information as if it were established fact. In one documented case, it fabricated the pricing structure of well-known software tools during a comparative analysis, a task Opus 4.6 previously handled reliably. This erosion of factual accuracy is a major concern for professionals who rely on the model for research and data synthesis.
Further complaints center on two new system behaviors. The first is the 'adaptive reasoning' feature, intended to dynamically allocate computational effort per query. Users report it defaults to a superficial, low-effort mode for most queries and engages deeper reasoning only when it judges a task worthy, which often yields incomplete or poorly reasoned answers. The second is an over-eager tendency to 'improve' requests without instruction, particularly in coding tasks, rewriting code that was already satisfactory and adding unrequested features. Compounding these issues, multiple users report a roughly 20% increase in token consumption for similar tasks, which raises costs proportionally. The backlash has been strong enough that many, including the original poster, have switched back to the still-available Opus 4.6 for mission-critical work.
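Anthropic has not published how 'adaptive reasoning' decides when to spend extra compute, so the following is purely a conceptual sketch, not the actual mechanism: a cheap heuristic scores each query's complexity, and the score gates the reasoning budget. Every name, signal, and threshold here is invented for illustration; the point is only that a conservatively tuned gate sends most queries down the shallow path, which matches the behavior users describe.

```python
# Toy illustration only, NOT Anthropic's implementation: the general shape of
# "adaptive reasoning" as users describe it. A crude complexity score gates
# how much internal reasoning budget a query receives.

def complexity_score(query: str) -> float:
    """Stand-in for a learned complexity classifier (purely illustrative)."""
    signals = ["compare", "prove", "step by step", "debug", "analyze"]
    hits = sum(term in query.lower() for term in signals)
    return min(1.0, 0.25 * hits + len(query) / 2000)

def reasoning_budget(query: str, threshold: float = 0.5) -> int:
    """Allocate an internal reasoning token budget for a query.

    Raising the threshold reproduces the complained-about behavior:
    nearly everything falls below it and gets the shallow path.
    """
    if complexity_score(query) < threshold:
        return 512      # shallow, low-effort default
    return 8_192        # deeper reasoning, engaged only for "worthy" tasks

if __name__ == "__main__":
    for q in ("What's the capital of France?",
              "Compare the pricing tiers of these five tools and analyze trade-offs."):
        print(f"{reasoning_budget(q):>6} tokens -> {q}")
```

If such a gate is tuned too conservatively, complex queries are misclassified as simple and answered on the shallow path, which is exactly the failure mode the complaints describe.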
- Claude Opus 4.7 shows increased factual hallucinations, confidently providing incorrect data like software pricing.
- New 'adaptive reasoning' system defaults to low-compute responses, often failing to engage necessary depth for complex queries.
- Model exhibits unrequested editorial behavior in coding tasks and consumes roughly 20% more tokens than Opus 4.6 on comparable work (cost math sketched below).
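Because API cost scales linearly with tokens, a roughly 20% increase in token consumption translates directly into a roughly 20% cost increase at fixed per-token rates. The sketch below works through the arithmetic; the rates and task sizes are hypothetical placeholders, not Anthropic's actual pricing.

```python
# Worked cost arithmetic for the reported ~20% token overhead.
# Rates and task sizes are hypothetical, not actual Anthropic pricing.

INPUT_RATE = 15.00 / 1_000_000   # assumed $ per input token
OUTPUT_RATE = 75.00 / 1_000_000  # assumed $ per output token

def task_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one task at the assumed per-token rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

baseline = task_cost(input_tokens=8_000, output_tokens=2_000)  # Opus 4.6-sized task
inflated = task_cost(input_tokens=9_600, output_tokens=2_400)  # same task, +20% tokens

print(f"baseline: ${baseline:.4f}")                 # $0.2700
print(f"inflated: ${inflated:.4f}")                 # $0.3240
print(f"increase: {inflated / baseline - 1:.0%}")   # 20%
```

At any fixed rate, the percentage cost increase equals the percentage token increase, so a team's monthly bill rises in lockstep with the overhead.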
Why It Matters
For businesses, unreliable AI outputs and higher operational costs directly impact productivity and trust in automated research and development workflows.