Why AI now represents a systemic financial risk

The emergence of Anthropic’s Claude Mythos has triggered an unprecedented emergency meeting between US regulators and the nation's top banking CEOs. Unlike previous models, Mythos possesses the autonomous capability to uncover and exploit "zero-day" vulnerabilities that have remained hidden in critical infrastructure for decades.

US regulators called an urgent meeting with the CEOs of the largest banks last Thursday, after the emergence of a new AI model from Anthropic called Claude Mythos, News.Az reports, citing The Finanser.

The reason wasn’t hype – it was capability. This model appears able to identify deep vulnerabilities in widely used software and infrastructure, the kind that underpin banks, governments and markets.

Claude Mythos represents a step-change in AI – not just finding bugs, but uncovering decades-old vulnerabilities and chaining them into full exploits autonomously. That exposes a hard truth: much of the software we trust isn’t proven secure; it just hasn’t been examined this deeply before.

Anthropic’s reaction was to restrict access and get governments involved, forming a defensive coalition. That is why Jerome Powell, Chair of the Federal Reserve, and Treasury Secretary Scott Bessent called the banks in.

Bank of America’s Brian Moynihan, Citigroup’s Jane Fraser, Goldman Sachs CEO David Solomon, Morgan Stanley’s Ted Pick and Wells Fargo CEO Charlie Scharf were all in attendance in Washington. JPMorgan Chase’s Jamie Dimon couldn’t make it.

It’s not just banks, however. The same exposure applies to every institution, from government to healthcare to airlines.

Why is this so urgent?

Because Anthropic’s Claude Mythos can find the holes in any system, including most bank systems, and exploit them with ease. That is why this coalition between a responsible actor – Anthropic – and government shows that this is being treated less like a product and more like a systemic risk with national security implications.

The bigger issue is what comes next. Once similar capabilities spread, the time between a vulnerability existing and being exploited could collapse from years to hours, shifting the real weakness from code to how fast organisations can respond.

That immediately shifts the conversation. If AI can find weaknesses faster and more effectively than human experts, then the entire cyber threat landscape changes overnight. It’s no longer just about defending against human hackers; it’s about preparing for AI-powered discovery and exploitation of critical flaws across every system and, from our perspective, the financial system in particular.

This is why the meeting happened so quickly and why access to Mythos is being tightly controlled. Only a small group of trusted organisations are being allowed to work with it, as policymakers and firms try to understand how to handle something that is both incredibly useful and potentially dangerous.

There’s even a strategic undertone here: restricting access isn’t just about safety. It’s also about controlling a powerful capability in an increasingly competitive AI race.

On that note, it is worth realising that Claude Mythos is not alone: if Claude can exploit vulnerabilities, then so, in time, will AI systems from Google to Baidu. Bear in mind that there are major Chinese (ZhipuAI, MiniMax, Baichuan, 01.AI, StepFun, etc) and Russian (Yandex, Oz Forensics, etc) AI firms. Do we really imagine that none of these engines will be hammering on the doors of our banking, government, health and critical systems? The only question is the timeline. At some point soon, every system will be exposed and vulnerable.

We are talking about an AI system that identified zero-day vulnerabilities that had been in place for decades, when everyone, including specialists, had no idea they existed. Two examples: a 27-year-old vulnerability in OpenBSD, and a 16-year-old flaw in FFmpeg that had survived millions of automated test runs.
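To make that concrete, here is a deliberately toy sketch (in Python, and not the actual OpenBSD or FFmpeg flaws, whose details the report doesn’t give) of why millions of random automated test runs can miss a latent bug that targeted, code-reading analysis finds immediately:

```python
# Toy illustration: a parser flaw that random testing is astronomically
# unlikely to hit, but that falls out of simply reading the length check.
import os
import struct

def parse_record(blob: bytes) -> bytes:
    """Parse a [4-byte magic][4-byte big-endian length][payload] record."""
    if len(blob) < 8 or blob[:4] != b"REC0":
        raise ValueError("bad header")
    (length,) = struct.unpack(">I", blob[4:8])
    # BUG: the length field is trusted. If it exceeds the real payload
    # size, slicing silently returns a short payload, and the indexed
    # read below blows up (in C, this would be an out-of-bounds read).
    payload = blob[8:8 + length]
    checksum = payload[length - 1]  # simulated downstream access
    return payload

# A naive fuzzer: random bytes almost never carry the magic header AND an
# inflated length field at once, so a million runs pass cleanly.
for _ in range(1_000_000):
    try:
        parse_record(os.urandom(32))
    except ValueError:
        pass  # expected rejection; the deep bug is never reached

# A targeted input, derived from the length check itself, triggers the
# flaw on the first attempt.
crafted = b"REC0" + struct.pack(">I", 0xFFFF) + b"short payload"
try:
    parse_record(crafted)
except IndexError as err:
    print("latent bug reached:", err)
```

The random fuzzer essentially never constructs a valid header and an inflated length field together, while reading the code makes the triggering input obvious – and that asymmetry between blind testing and deep code analysis is exactly what models like Mythos reportedly exploit.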

But the most interesting part is the paradox. Regulators aren’t saying “don’t use this.” Quite the opposite. The message to banks is clear: you must use tools like this defensively.

In other words, the only way to deal with AI-driven cyber risk is to fight AI with AI. If attackers gain access to similar capabilities, then any institution not using advanced AI for defence will be at a structural disadvantage.
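What might “using AI defensively” look like in practice? A minimal sketch, assuming access to a code-reviewing model via Anthropic’s public Python SDK; the model ID, prompt and git wiring below are illustrative assumptions, not a description of Mythos itself, which remains restricted:

```python
# Minimal sketch of an AI-assisted defensive review pass: feed a code diff
# to a language model and ask for likely vulnerabilities before it ships.
import subprocess

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a security reviewer. Examine the diff for memory-safety bugs, "
    "unchecked lengths, injection, and authentication bypasses. Report "
    "findings as a numbered list with severity and the exact lines involved."
)

def review_diff(diff_text: str) -> str:
    """Ask the model for a security review of one diff; return its report."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder: any capable model
        max_tokens=2048,
        system=SYSTEM_PROMPT,
        messages=[{"role": "user", "content": f"Review this diff:\n\n{diff_text}"}],
    )
    return response.content[0].text

if __name__ == "__main__":
    # Example wiring: review whatever is currently staged in the local repo.
    diff = subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True, check=True
    ).stdout
    if diff:
        print(review_diff(diff))
```

The design point is not this particular script but the pattern: making an AI review pass a routine gate in the development pipeline, so defenders scan code at the same speed attackers can.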

That’s why big tech firms – the likes of Apple, Google, Microsoft and Nvidia – are already involved in controlled testing. This isn’t just a banking issue; it’s a cross-industry concern about the resilience of digital infrastructure as a whole.

There’s also a broader geopolitical and strategic layer.

Governments were briefed on the model ahead of wider awareness, and there are ongoing debates about how such technology should be governed – including questions about national security and even military use. That tells you this isn’t being treated as a niche tech development; it’s being viewed as something with wide-ranging implications for power, control and stability.

The real takeaway is bigger than any single model. We’re moving into a world where AI is no longer just improving productivity or customer experience. It’s becoming part of the core risk framework of the financial system. Central banks, treasuries and regulators now see AI in the same category as liquidity risk, capital risk or market risk – something that can destabilise the system if not properly managed.

And perhaps most importantly, this moment feels like an early warning. Even if Anthropic keeps Mythos tightly restricted, similar capabilities will emerge elsewhere – and probably sooner than many expect.

The real challenge isn’t whether this technology exists. It’s whether institutions can adapt quickly enough to operate in a world where AI can both defend and attack the foundations of finance.


News.Az 

By Leyla Şirinova
