Cybersecurity experts split as Anthropic reports first AI-led hacking campaign
Photo: Al Jazeera

Anthropic’s claim that it uncovered the world’s first artificial intelligence–driven hacking operation has sparked both concern and scepticism across the cybersecurity world. The company announced that its model, Claude Code, had been manipulated to conduct the majority of a “large-scale” and “highly sophisticated” cyberattack, allegedly orchestrated by a Chinese state-linked group.

Anthropic said the AI system carried out up to 90 percent of the operation, which targeted government bodies, financial institutions, tech firms, and chemical manufacturers. Only a small number of breaches were reportedly successful. The company did not reveal how it identified the campaign or which organisations had been targeted, News.Az reports, citing Al Jazeera.

Experts agree that AI-enabled hacking is possible, but many argue that Anthropic’s account lacks the level of detail needed to verify the seriousness of the incident. Roman V. Yampolskiy, an AI and cybersecurity specialist at the University of Louisville, said AI tools have already demonstrated the ability to write exploit code, process large data sets, and automate operations at a speed human teams cannot match. According to him, models like Claude effectively serve as “junior cyber-operations teams in the cloud,” enabling attackers to launch more frequent and more complex campaigns.

Other analysts, including Jaime Sevilla of Epoch AI, say Anthropic’s report is unsurprising and reflects a trend that has been building for years. AI-assisted attacks, Sevilla said, will most likely hit mid-sized companies and public agencies that historically lacked the resources for strong cyberdefence.

The announcement has also fuelled a political debate. After US Senator Chris Murphy warned that AI-driven attacks could “destroy us” without stronger regulation, Meta AI chief scientist Yann LeCun accused Anthropic of exaggerating the threat in an effort to influence upcoming regulatory decisions and restrict open-source AI models.

Sceptics argue that Anthropic has incentives to highlight both the severity of AI-powered threats and its own ability to combat them. Toby Murray, a computer security academic at the University of Melbourne, noted that the company offered no evidence detailing the specific tasks the AI allegedly performed, making it difficult to assess how independently the model operated.

Despite the controversy, experts broadly agree that AI will dramatically expand the scale of cyberattacks. Fred Heiding, a Harvard researcher specialising in AI security, said that while AI will ultimately strengthen cybersecurity defences, attackers may enjoy a temporary window of advantage. As AI rapidly accelerates offensive capabilities, defensive systems may struggle to keep pace.

According to Heiding, if security teams fail to quickly integrate AI into automated testing and patching tools, hackers could “wreak havoc with the press of a button” long before defensive technologies catch up.

 

