OpenAI, Anthropic brief US lawmakers on AI cyber threats

Tech companies OpenAI and Anthropic have briefed staff of the US House Homeland Security Committee on their latest artificial intelligence models and the potential implications for cybersecurity, News.Az reports, citing Axios.

The discussions marked one of the first known briefings between lawmakers and leading AI firms focused specifically on the cyber risks posed by next-generation AI systems, according to the report.

Anthropic has reportedly delayed the public release of its Mythos Preview model, citing concerns that it could be misused to exploit security vulnerabilities. OpenAI, meanwhile, has adopted a tier-based release strategy for its GPT-5.4-Cyber model.

The report said both companies held separate classified sessions with congressional staff last Thursday.

A committee aide described the meetings as “proactive engagement with these companies on recent frontier model developments,” particularly regarding their potential impact on cybersecurity for critical infrastructure.

The aide also referenced a recent White House memo alleging that China is conducting “industrial-scale” efforts to distill and replicate US AI models.

House Homeland Security Chair Andrew Garbarino has been conducting private roundtables with technology and AI leaders, according to the report. The committee has also held multiple hearings examining generative AI’s national security risks, including threats linked to state-backed cyberattacks.

“Productive partnerships between industry and government are essential to help us stay ahead of the evolving threat landscape, ensure the government is prepared to securely harness AI for its defensive capabilities, and support and protect American AI development as adversaries like China seek to gain an advantage by any means,” Garbarino told Axios.

Committee members also said another briefing held last week focused on so-called “jailbroken” AI models—systems modified to bypass built-in safety guardrails—which they said heightened concerns about regulation.

Demonstrations reportedly showed how such tools could be used to facilitate scenarios such as school shootings or bombings.

“What I just saw in there, with just a short amount of time typing in questions, is very scary. These models are very powerful,” said Representative August Pfluger after the briefing.

“We see how powerful it is, and it should be used for good, but guardrails need to be attached... Congress and the executive branch need to work with our industry partners to make sure that we keep kids safe.”


News.Az 

By Nijat Babayev
