Ugh. So apparently I’m a “cyber threat”
A user's account was flagged and rerouted to worse models after they asked about a hallucination.
Deep Dive
OpenAI's Codex AI assistant flagged a developer's account for "high-risk cyber activity" after the model hallucinated project steps. The developer, who was working on a weather prediction model, was told that access to the best models would be permanently restricted unless they submitted personal identification documents. The incident has sparked user backlash over privacy, trust, and the opaque management of AI safety systems, leading some users to consider switching to competitors such as Anthropic's Claude.
Why It Matters
The incident raises critical questions about AI safety overreach, user privacy, and the trust professionals need to place in AI development tools.