Although manipulated media has existed online for decades, the rapid spread of generative AI tools—accelerated by the launch of ChatGPT and similar systems—has heightened concerns about the scale, sophistication, and realism of deepfakes, News.Az reports, citing Reuters.
Britain, which recently criminalised the creation of non-consensual intimate images, said it is developing a deepfake detection evaluation framework aimed at setting consistent standards for assessing detection tools and technologies.
“Deepfakes are being weaponised by criminals to defraud the public, exploit women and girls, and undermine trust in what we see and hear,” technology minister Liz Kendall said in a statement.
According to the government, the framework will assess how technology can be used to evaluate, understand, and detect harmful deepfake content, regardless of its source. Detection tools will be tested against real-world threats, including sexual abuse material, fraud, and impersonation.
The initiative is intended to help the government and law enforcement better identify gaps in existing detection capabilities, while also setting clear expectations for industry standards on deepfake detection.
Government figures show that an estimated 8 million deepfakes were shared in 2025, up sharply from around 500,000 in 2023.
Governments and regulators worldwide, struggling to keep pace with the rapid evolution of artificial intelligence, were pushed into action this year after Elon Musk's chatbot Grok was found to have generated non-consensual sexualised images of people, including children.
Britain’s communications watchdog and privacy regulator are currently conducting parallel investigations into Grok.