How AI is revolutionizing—and endangering—global cybersecurity

In 2016, the Pentagon’s research and development agency hosted the first-ever hacking competition fought entirely by machines. Dubbed the “Cyber Grand Challenge,” the contest lasted more than eight hours and ultimately saw a computer system called Mayhem, developed by a team of Pittsburgh-based researchers, crowned victor.

Mayhem would go on to compete against—and lose to—human hackers at DEF CON, one of the world’s largest hacking conferences, in Las Vegas that same year. In the decade since, however, the bots may have gained the (metaphorical) upper hand, News.Az reports, citing The Dispatch.

This month, the American AI company Anthropic announced that its team had built a model, Mythos, capable of discovering and attacking vulnerabilities in software code at a higher level than “all but the most skilled humans.” The pace of AI’s advance raises concerns that it could be used to exploit vulnerabilities in software underpinning the world’s most sensitive infrastructure—from government systems to banks to hospitals. But proponents also argue that, in the right hands, the technology creates new cyberdefense opportunities.

Anthropic, for its part, has argued that Mythos’ development constitutes a turning point in cybersecurity. Deeming the model too dangerous to release to the public, the company shared it with more than 40 technology companies to help them defend against attackers and partnered with 11 of them in an initiative to secure critical software. Anthropic says that this will give defenders a key advantage over attackers, who don’t have access to the model.

But some analysts have argued that Anthropic’s efforts to close cybersecurity gaps may be too late, noting that existing open-source AI models can find and exploit vulnerabilities. “I think a lot of people took the Mythos announcement to be like this capability to discover zero-day vulnerabilities hadn’t existed before, and now suddenly it’s here,” said Stanislav Fort, the founder and chief scientist of the cybersecurity startup Aisle. Zero-day vulnerabilities are previously unknown software flaws that attackers can exploit before defenders have a fix ready. One high-profile example is the weakness that allowed Stuxnet, a US-Israeli cyberweapon, to silently disable Iran’s nuclear centrifuges for years before its discovery in 2010.

The rapid advancement of AI amplifies the risk of such attacks, but it can also be used to preempt them. Aisle, for example, has used the technology to develop methods for autonomously detecting potential vulnerabilities. By building a specialized system on top of open-weight models, the team has identified security weaknesses in vital digital infrastructure, including 12 vulnerabilities in OpenSSL, a software library used to secure most online communications. It also uncovered a 3-year-old security bug in the software that protects transmissions between NASA spacecraft and Earth.

Anthropic’s new cyberdefense initiative has also helped identify key security risks, including in OpenBSD, an operating system used in critical infrastructure like firewalls. For 27 years, the company claimed, the system contained a bug that could remotely crash any computer connected to its network. Yet preexisting tools, too, may be capable of identifying weaknesses like this one. By pointing their tool at the area of vulnerability in OpenBSD and offering “contextual hints,” Aisle researchers claim they were able to identify it at a fraction of the cost. Other AI security companies, including Vidoc Security Labs, also say they have successfully reproduced Anthropic’s findings using public models.

While Mythos is “almost certainly amazing in many things,” Fort said, the AI landscape is best understood as a “jagged frontier” in which different systems have particular strengths and weaknesses, with small models often boasting surprisingly strong capabilities.

Turning these vulnerabilities into exploits is perhaps a tougher skill, yet one that also may not be unique to Claude Mythos. “Mythos the model is like this amazing engine, right? But an engine by itself sitting on a stand inside of a lab like that doesn’t win a race,” said Jamieson O’Reilly, a hacker and co-founder of Aether AI, a startup building AI tools that autonomously exploit vulnerabilities. “It needs carbon fiber. It needs ceramic brakes, like world-class suspension, air-force-grade titanium exhaust, all of this stuff, and then a driver,” he told The Dispatch. “Without that, it’s like this impressive piece of hardware with nowhere to go.”

And while Anthropic keeps Mythos under lock and key, other actors—both friends and foes—are working to develop hacking technology to rival the model, and cheaply. “No attacker’s waiting around for Anthropic to release Mythos to the public,” O’Reilly said. “They’re already building their own instrumentation around public models.” O’Reilly claims Aether AI was able to run a simulated attack against a government agency portal, bypassing the defenses on the external website before finding a way to escalate its privileges to an “admin user,” a position from which it could have deleted sensitive government data. He also showed The Dispatch a spear-phishing Zoom attack that he had created using existing AI models for under $5.

Fortunately, cyberdefenders seem to have an edge over would-be hackers—at least for now. “AI excels at detection, and that’s most useful for defenders. For offenders, detecting vulnerabilities is only the first step,” Lennart Maschmeyer, an assistant professor of cybersecurity at Georgia Tech, told The Dispatch. “Offenders need to find vulnerabilities and develop working exploits that achieve their specific objectives against a target system, and despite defenders’ efforts to find and neutralize them.”

While Claude Mythos is an important and powerful new development, O’Reilly emphasized that the cybersecurity industry “should not have a false sense of security,” given what is possible using existing models. Marcus Hutchins, a British cybersecurity expert who famously stopped the 2017 WannaCry ransomware attack on hospitals worldwide from his bedroom, likewise highlighted the danger of cruder AI technology. Most attacks, he explained, don’t require sophisticated exploits and can instead be carried out far more cheaply. AI-driven phishing attacks and deepfakes are already earning attackers billions of dollars.

Nevertheless, the U.S. government appears to be taking the potential impacts of Anthropic’s new model seriously. Despite the tech company’s ongoing legal dispute with the Pentagon—and the Trump administration’s designation of Anthropic as a supply chain risk, a first for an American company—the government is reportedly seeking to provide U.S. agencies with access to a version of the model. And time may be of the essence for Washington, as other countries seek to build out their own AI hacking capabilities. In a recent interview with the Financial Times, Anthropic CEO Dario Amodei predicted that Chinese open-source models would be capable of replicating Mythos in six to 12 months.

Digital infrastructure underpins much of modern society, as a recent attack against the Mexican government demonstrated. The breach, which used an earlier Claude model and OpenAI’s ChatGPT, accessed more than 150 gigabytes of government data, including voter information and tax records.

Ultimately, these AI-enabled hacks put everyday citizens and their personal information at risk, as Ryan Fedusiak, a fellow at the American Enterprise Institute, knows better than most. Falling victim to a cyberattack by a state actor “forced me to reset my entire digital life, bottom to top,” he told The Dispatch.


News.Az 

By Leyla Şirinova
