Claude Mythos Preview Has Officially Frightened the British
Anthropic's unreleased AI model reportedly exploits decades-old vulnerabilities, triggering urgent UK government meetings.
Anthropic's secretive new AI model, Claude Mythos Preview, has reportedly demonstrated alarming cybersecurity capabilities, prompting an urgent response from UK financial and government authorities. According to Anthropic's own Frontier Red Team blog, the model can autonomously identify and exploit zero-day vulnerabilities in every major operating system and web browser. The vulnerabilities it uncovers are often subtle; among them is a now-patched, 27-year-old bug in OpenBSD, an operating system famed for its security. In response, the Bank of England, the Financial Conduct Authority, and the UK Treasury have scheduled "urgent discussions" with the National Cyber Security Centre within the next fortnight, and the issue has become a top priority for the Cross Market Operational Resilience Group.
The model's capabilities were revealed through Anthropic's "Project Glasswing" initiative, which the company frames as a responsible warning about future AI risks. The revelation has nonetheless been met with skepticism as well as concern: rationalist blogger Zvi Mowshowitz argues Anthropic is mixing valid analysis with hype, while AI researcher Yann LeCun has downplayed the threat outright. Complicating matters, no independent third party has been granted unfettered access to Claude Mythos Preview to verify Anthropic's claims, leaving regulators to grapple with the potential implications of an AI that could automate sophisticated cyber attacks.
- Claude Mythos Preview can find and exploit zero-day vulnerabilities in all major OSes and browsers, per Anthropic's testing.
- The model uncovered a now-patched, 27-year-old bug in OpenBSD, an OS renowned for its security focus.
- UK financial regulators and cybersecurity officials are holding urgent meetings to assess the national security and market risks posed by such AI capabilities.
Why It Matters
The incident highlights the dual-use risk of advanced AI, where tools for security research could be repurposed to automate cyber attacks at scale.