Unauthorized group gained access to Anthropic’s exclusive cybersecurity tool Mythos, report claims
A private Discord group gained access to the powerful Claude Mythos Preview through a third-party vendor.
Anthropic is investigating a significant security incident involving its exclusive Claude Mythos Preview, a powerful AI tool designed for enterprise cybersecurity. According to a Bloomberg report, a private Discord group, whose members remain unidentified, gained unauthorized access to the tool through a third-party vendor environment. The group, which runs a channel dedicated to finding information on unreleased AI models, reportedly accessed Mythos on the very day it was publicly announced. Members provided Bloomberg with screenshots and a live demonstration confirming their access. Anthropic stated that, so far, its investigation has found no evidence that this activity impacted its own internal systems.
The breach reportedly occurred through an employee of a third-party contractor working for Anthropic. The group made an "educated guess" about the model's online location based on Anthropic's known naming conventions for its other models. While a source told Bloomberg the group is "interested in playing around with new models, not wreaking havoc," the incident underscores the very risk Anthropic warned about: that so potent a security tool could be weaponized if it fell into the wrong hands. Mythos was released under tight controls to select vendors like Apple as part of Project Glasswing, specifically to prevent misuse. This unauthorized access, even if seemingly benign, challenges those controls and could damage trust in Anthropic's ability to securely distribute sensitive AI technology to enterprise clients.
- Unauthorized group accessed Anthropic's Claude Mythos Preview via a third-party vendor on its launch day.
- The group provided Bloomberg with screenshots and a live demo, confirming their access to the powerful AI security tool.
- Anthropic's investigation found no evidence its own systems were compromised, but the breach highlights distribution risks for sensitive AI.
Why It Matters
This breach tests enterprise trust in AI security tools and exposes the risks of third-party distribution chains for sensitive technology.