The government has described the issue of harmful deepfakes as an ‘urgent national priority’

Image: Liz Kendall. ‘Deepfakes are being weaponised by criminals’. Source: Lauren Hurley / No 10 Downing Street
The UK government has announced a partnership with Microsoft, academic institutions and technology experts to develop a national system for detecting deepfake material online.
Officials said the initiative would establish common standards for identifying manipulated videos, images and audio, which are increasingly being used in fraud, harassment and political misinformation campaigns.
Deepfakes, synthetic media generated using AI, have existed for nearly a decade. But rapid advances in generative AI tools have made the technology cheaper, faster and easier to use, allowing almost anyone to create convincing fakes with little technical knowledge.
Ministers say criminals are already exploiting the tools to impersonate celebrities, relatives and public figures in sophisticated scams.
The government has described the spread of harmful deepfakes as an “urgent national priority”.
Under the plan, the UK will create what it calls a “world-first deepfake detection evaluation framework”.
The framework will examine how technology can be used to analyse, interpret and identify harmful deepfake content, regardless of its source.
By testing advanced detection tools against real-world threats such as sexual abuse, fraud and impersonation, the government and law enforcement aim to gain a clearer picture of where weaknesses still exist.
Once in place, the framework will also be used to set clear industry standards for deepfake detection.
Technology minister Liz Kendall said deepfakes were being “weaponised” by criminals.
“They are used to defraud the public, exploit women and girls, and undermine trust in what we see and hear,” she said.
Jess Phillips, minister for safeguarding and violence against women and girls, said the impact of deepfakes could be devastating.
“The devastation of being deepfaked without consent or knowledge is unmatched,” she said.
“For the first time, this framework will take the injustice faced by millions to seek out the tactics of vile criminals, and close loopholes to stop them in their tracks so they have nowhere to hide. Ultimately, it is time to hold the technology industry to account, and protect our public, who should not be living in fear.”
Government figures suggest the scale of the problem is rising rapidly.
An estimated eight million deepfakes were shared in 2025, compared with around 500,000 in 2023.
Last week, more than 350 participants – including INTERPOL, members of the Five Eyes intelligence alliance and major technology companies – took part in a government-funded deepfake detection challenge hosted by Microsoft.
Deputy commissioner Nik Adams of the City of London Police said the initiative would strengthen the response to AI-enabled fraud.
“This new framework is a strong and timely addition to the UK’s response to the rapidly evolving threat posed by AI and deepfake technologies,” he said.
“By rigorously testing deepfake technologies against real-world threats and setting clear expectations for industry, this framework will significantly bolster law enforcement’s ability to stay ahead of offenders, protect victims and strengthen public confidence as these technologies continue to evolve.”
The announcement comes amid broader scrutiny of AI systems.
The UK’s data protection watchdog, the Information Commissioner’s Office, announced this week that it has opened a formal investigation into the social media platform X after allegations that its Grok AI tool generated non-consensual sexual images.
The ICO’s probe is being conducted alongside the UK communications regulator Ofcom, which said it continues to treat the issue as a matter of urgency.
Also this week, investigators in France raided X’s Paris offices as part of a criminal inquiry opened after a lawmaker alleged that biased algorithms on the platform may have distorted its automated data processing systems.