Trump officials may be encouraging banks to test Anthropic’s Mythos model
Treasury officials encourage major banks to test a powerful AI model whose maker is simultaneously locked in a legal battle with the government.
In a striking development, senior US officials, including Trump administration Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell, have reportedly encouraged executives from major US banks to test Anthropic's newly announced Mythos AI model. According to Bloomberg, the officials summoned bank leaders to a meeting this week to promote using the model to detect system vulnerabilities. Major institutions, including JPMorgan Chase (an initial partner), Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley, are now reportedly testing the model.
This government encouragement creates a stark contradiction: Anthropic is currently engaged in a legal battle with the same administration. The Department of Defense designated Anthropic a 'supply-chain risk' after negotiations broke down over limits on government use of its AI. Meanwhile, UK financial regulators are discussing the potential risks posed by the Mythos model. Anthropic itself has said it is limiting broad access to Mythos because, despite not being specifically trained for cybersecurity, the model is exceptionally proficient at finding security flaws—a capability some observers suggest could be part of a savvy enterprise sales strategy.
- US Treasury and Fed officials encouraged major banks to test Anthropic's Mythos AI for security flaws.
- The push occurs while Anthropic is in a legal fight with the Trump administration over a DoD 'supply-chain risk' label.
- Anthropic is limiting Mythos access, citing its unexpected, high proficiency in finding vulnerabilities despite no specific cybersecurity training.
Why It Matters
High-stakes AI adoption in finance collides with geopolitical tensions, testing regulatory boundaries and corporate-government relations.