Both rank behind Kazakhstan

The UK is trailing in the race to regulate AI, according to new research examining how governments around the world are responding to the rapid rise of the technology.
A study by researchers at Comparitech assessed 178 countries and found that only 33 currently have comprehensive AI legislation in place.
Most of those nations – 27 – are in the European Union, where the landmark AI Act has set a high regulatory benchmark, while the UK and United States scored significantly lower.
Source: Comparitech
The report evaluated countries using 11 measures, including whether AI-specific laws exist or are proposed, the presence of a regulatory body, protections against bias, requirements to disclose copyrighted training data, and penalties for non-compliance.
Countries were also judged on whether their laws address issues such as deepfakes, worker protections, environmental impact and safeguards for children.
Each nation received a score out of 14.
Denmark, France and Greece topped the rankings, each scoring 13 out of 14, reflecting strong protections under the EU’s AI Act alongside national regulations.
The EU law aims to ensure AI systems operate safely and transparently, uphold fundamental rights and continue to encourage innovation.
Among other provisions, it requires regulators to be established, mandates disclosure of copyrighted training data and introduces a risk-based framework for AI systems with penalties for companies that fail to comply.
Across the wider EU, most other member states scored 12 out of 14, cementing the region’s lead in global AI governance.
UK and US fall behind
Outside Europe, the highest-ranking country was Kazakhstan, which scored 11 out of 14 after introducing its own AI law in January 2026.
The legislation broadly mirrors the EU’s principles but is less detailed, leaving greater room for interpretation as regulators implement the rules.
Vietnam and South Korea followed with scores of 10.
By contrast, the United States scored just 4 out of 14, reflecting a fragmented approach to AI governance.
Existing US laws are mostly set at the state level. Federal laws largely address narrow issues, such as legislation targeting the distribution of sexual imagery online (including AI-generated material) and the Children's Online Privacy Protection Act (COPPA), which governs how online services, including chatbots, collect data from minors.
In 2025, President Donald Trump issued an executive order titled "Removing Barriers to American Leadership in AI," which rolled back several regulatory safeguards introduced during the previous administration.
The United Kingdom performed only slightly better than the US, scoring 5 out of 14, suggesting its more flexible, innovation-focused approach has yet to translate into detailed legal protections.
Patchy protections worldwide
Researchers said the findings reveal an uneven global regulatory landscape, despite growing concerns over the risks advanced AI systems pose.
These include deepfake videos, biased hiring algorithms, AI-generated explicit imagery and chatbot interactions linked to serious mental health harm.
Even among countries with legislation, the study found significant gaps.
One of the most striking omissions concerns environmental impact. No country was found to have fully addressed the issue, despite the growing energy, water and computing demands associated with training and running large AI models.
The research also highlighted broad exemptions within existing laws.
Of the 33 countries with national AI legislation, 29 allow exceptions for military use and 29 include carve-outs for policing. In practice, that means AI systems restricted for civilian applications may still be used by governments or security agencies.