AI Risk Paradigm Shift
Banking has made peace with AI over the last several years. Fraud detection, lending decisions, and process automation each carried risk, and institutions learned to manage it. Anthropic Claude Mythos sits in a different category entirely. It was not built as a cyberattack tool. Anthropic designed it to push the boundaries of software engineering, to work with vast and complex codebases in ways earlier models could not. Those very same capabilities, however, are precisely what make it a serious security concern.
The numbers behind that concern are hard to argue with. Work that once took a specialist security team several weeks can now be completed in hours. The complexity of legacy systems, which historically made them difficult to penetrate, no longer offers reliable protection. Anthropic Claude Mythos cuts through that complexity at machine speed. Most large financial institutions are running infrastructure built across decades, stitched together through acquisitions and regulatory pivots. That architecture was never designed to withstand what Mythos represents.
Jamie Dimon captured the tension in his 2025 shareholder letter, committing JPMorgan to embedding AI across everything the bank does, while in the same document flagging that the technology could introduce new cybersecurity vulnerabilities. That contradiction is now the defining strategic problem for every major financial institution sitting with an AI roadmap.
Global Regulatory Response
The speed of the global regulatory response to Anthropic Claude Mythos is itself telling. This was not a slow-moving policy consultation. The Reserve Bank of India engaged directly with the US Federal Reserve and the Bank of England, with the shared agenda of getting ahead of risks before they compound. Japan moved to brief domestic banks directly. Central banks in Australia and New Zealand began active monitoring programmes.
Anthropic responded by launching Project Glasswing, a restricted partner initiative involving Amazon, Apple, and Nvidia, aimed at channelling Mythos capabilities specifically into defensive cybersecurity work. The company confirmed it would not release Anthropic Claude Mythos to the broader market, warning that the fallout for economies, public safety, and national security could be severe if the model reached actors not committed to deploying it responsibly.
Restricting access buys time. It does not solve the structural problem. Other frontier AI models, including OpenAI's GPT-5.4-Cyber and Google's Big Sleep, already carry comparable capabilities. More will follow. Banks are not managing a single vendor risk. They are managing the arrival of an entirely new threat class, one that will keep expanding regardless of what any one company decides about its release strategy.
Real Business Exposure
India's Finance Minister Nirmala Sitharaman convened a meeting with lenders, directing them to take pre-emptive steps to secure IT infrastructure, protect customer data, and defend financial assets against AI-linked threats tied specifically to Anthropic Claude Mythos. Similar conversations, less public but equally urgent, are happening in financial centres across Europe, the Gulf, and North America.
The cost exposure runs across three areas. Security budgets face immediate pressure. Bain & Company's 2025 Cybersecurity Survey found that most organisations are planning annual cybersecurity budget increases of around 10%, a figure that falls well short of what AI-enabled defence infrastructure will actually require.
Regulatory costs will follow close behind. Institutions operating across multiple jurisdictions face the additional burden of aligning with frameworks still being written, often at different speeds and with different priorities.
Vendor relationships will also come under sharper scrutiny. Boards will ask harder questions about which AI partners have credible containment strategies and which are simply racing to market. The answer will shape procurement decisions for years.
Control Defines Advantage
The Anthropic Claude Mythos moment does not signal a retreat from AI in financial services. The economics and competitive dynamics make that impossible. What it signals is a cleaner separation between institutions that have governance architecture capable of scaling AI safely and those that have been treating governance as something to retrofit later.
Experts broadly agree that models in the Mythos class could significantly strengthen long-term cybersecurity, with the future shaped by AI-versus-AI dynamics: systems built to exploit vulnerabilities running against systems built to defend against them. The institutions best positioned for that future are not the fastest adopters. They are the ones that treated control and containment as infrastructure from the start.
The question on every risk committee agenda has quietly changed. It is no longer how fast AI gets deployed. It is whether the institution has the foundations to absorb what comes next without becoming the risk itself.
JMC tracks how frontier AI developments are reshaping risk, regulation, and competitive strategy across global financial systems.