Washington and Brussels shift to high-risk AI regulation with new penalties and safeguards

Editor’s note: Faig Mahmudov is a journalist based in Azerbaijan. The views expressed are the author’s own and do not necessarily reflect those of News.Az.

Over the past two days, both the United States and Europe have issued new statements signaling that AI regulation has entered a more advanced and urgent phase. The rapid growth of generative AI, the rising influence of large language models, and the spread of deepfake manipulation have pushed governments to establish stronger safeguards.

At the same time, geopolitical competition, especially with China, has turned AI development into a strategic priority. The latest announcements reveal that both Washington and Brussels now view AI not only as a technological issue but also as a matter of national security, political stability, and economic competitiveness.

In the United States, the White House expanded its AI safety and transparency directives, stating that major AI companies must conduct more rigorous model testing and share risk-assessment findings with federal agencies. New federal guidelines require companies to perform “systemic risk evaluations” for models capable of influencing critical infrastructure, financial markets, or national security. Officials emphasized that the government must be able to “see inside” high-impact AI systems rather than rely solely on voluntary company reporting.


A significant new development is the announcement of a nationwide AI election-protection framework. With the 2024 election cycle still influencing political debate, U.S. officials confirmed that generative AI tools used to create synthetic political audio or video may fall under federal criminal scrutiny if deployed for voter deception. The Department of Justice stated that it is preparing new enforcement mechanisms to prosecute AI-generated election manipulation. This marks the first time the U.S. has outlined federal-level legal consequences specifically for political deepfakes.

Meanwhile, Congress continues to move toward more structural regulation. The bipartisan AI legislation currently being drafted gained new momentum after several senators proposed that models above a certain computational threshold should require federal licensing. Lawmakers are pushing for a mandatory auditing system for training datasets to identify copyrighted materials, harmful content, and sensitive personal data.

Another newly introduced proposal would require AI companies to maintain detailed logs of training data sources and model fine-tuning history for federal review.


On the European side, the European Commission published updated guidance on the phased implementation of the AI Act. The latest statement confirms that member states must establish specialized AI supervisory authorities by early 2025. These authorities will oversee risk classification, compliance audits, and penalties.

The Commission also stressed the urgency of developing a unified testing framework to evaluate the reliability, bias levels, and safety characteristics of generative models before they can be deployed across the EU market.

A major new European initiative announced this week is the “election integrity protocol for AI-generated content.” Ahead of the 2025 European Parliament elections, Brussels plans to introduce stricter obligations for platforms hosting political content. AI-generated images, audio, and video related to elections must carry clear labels, and platforms will be held legally responsible for failing to remove deceptive deepfakes in a timely manner. The Commission stated that it is preparing expedited procedures for removing synthetic political content designed to mislead voters.


Another important European development is the announcement of new requirements for foundation model developers. Companies building advanced generative systems must disclose model architecture summaries, training compute levels, and all forms of automated content generation that could create security risks. The EU is also preparing additional rules related to watermarking and traceability mechanisms to detect AI-generated content across the digital ecosystem.

Recent events have also highlighted a growing transatlantic alignment. U.S. and EU officials held new consultations on AI safety cooperation, signaling interest in creating a joint risk-monitoring platform. The discussions include shared model-evaluation standards, cross-Atlantic threat intelligence on AI-enabled cyberattacks, and coordinated responses to harmful deepfake campaigns. Both sides recognize that AI risks cannot be contained within borders and that fragmented regulatory approaches would favor authoritarian competitors.

Despite these advances, serious challenges remain. Both regions face strong opposition from parts of the technology industry, which argue that excessive regulation could slow innovation and push companies to relocate to less-regulated markets.


High compliance costs, uncertainty around enforcement, and rapid technological change make regulatory planning more complex. Both the U.S. and Europe also struggle with how to handle open-source AI models, which can be freely modified and redistributed, potentially undermining safety standards.

Nevertheless, the political direction is clear. The United States is increasingly focused on security, election protection, and competition with China, while Europe emphasizes rights, ethics, and strict legal compliance. Yet both agree that unmanaged AI growth poses risks for democracy, economic stability, and societal trust. This shared understanding is driving the regulatory shift now unfolding on both sides of the Atlantic.

In the coming months, further regulatory tightening is expected. The U.S. is preparing new federal standards for model evaluation and data-traceability requirements, while the EU is finalizing mandatory transparency rules for foundation models and deploying election-integrity safeguards.

Together, these steps mark the emergence of a new global regulatory architecture for AI — one that will shape innovation, competition, and political stability for years to come.


(If you possess specialized knowledge and wish to contribute, please reach out to us at opinions@news.az).

News.Az 
