The Crypto Bros Want to Get Their Hands on Anthropic’s ‘Super Dangerous’ Model
Crypto giants like Coinbase are trying to access a model that can find 30-year-old security flaws.
Anthropic has created a new, highly capable AI model named Claude Mythos, but is deliberately limiting its release. The company's concern stems from the model's exceptional ability to identify and potentially exploit critical cybersecurity vulnerabilities. According to reports, Mythos has already demonstrated this power by spotting security flaws in legacy systems that had remained hidden from human experts for nearly three decades. This capability has made the model a coveted tool for industries under constant threat, particularly cryptocurrency.
Cryptocurrency exchanges like Coinbase and Binance, which safeguard billions in digital assets, are actively lobbying Anthropic for access to Claude Mythos. They aim to use the AI for proactive security testing, or "pentesting," to find and patch vulnerabilities before malicious actors can exploit them. Fireblocks, a crypto custody firm, reported that Anthropic's publicly available models have already uncovered issues missed by human testers. However, Anthropic remains cautious, fearing that widespread access could enable bad actors to use the model for offensive cyberattacks at scale. In a parallel development, Bloomberg reports that OpenAI is preparing its own limited-release cybersecurity tool, signaling a new competitive frontier in AI-powered security.
- Anthropic's Claude Mythos model can find cybersecurity flaws that eluded detection for up to 30 years.
- Major crypto firms like Coinbase and Binance are seeking access to use Mythos for defensive security testing.
- Anthropic is restricting access over fears the model could be weaponized for large-scale cyberattacks.
Why It Matters
This highlights the dual-use dilemma of advanced AI: a powerful tool for defense is also a potent weapon, forcing companies to balance innovation with security.