
UK regulators need more resources to tackle AI

Gaps in funding and coordination risk leaving rights unprotected


A lack of resources, not a lack of legal powers, is undermining the UK’s ability to regulate AI, MPs and peers have been told.

This week, senior regulators told Parliament’s Joint Committee on Human Rights that existing laws could address many of the harms linked to AI, but that funding shortages and fragmented oversight were preventing effective enforcement.

More than a dozen regulators in the UK have responsibilities that touch on artificial intelligence, yet none has a single, dedicated mandate for the technology.

The government has argued that the current regulatory framework is sufficient, but officials from several watchdogs say the system risks falling behind fast-moving developments without stronger coordination and investment.

Mary-Ann Stephenson, chair of the Equality and Human Rights Commission (EHRC), said financial constraints were the biggest obstacle.

“There is a great deal more that we would like to do in this area if we had more resources,” she told the committee.

She noted that the EHRC’s budget has been frozen at £17.1m since 2012, the minimum level then judged necessary for it to carry out its statutory duties, amounting to a cut of around 35% in real terms.

Regulators told the committee that existing legislation, including the Equality Act, already provides tools to tackle AI-related discrimination and rights violations. However, limited capacity means enforcement is often reactive, taking place after harm has occurred.

Stephenson said the government should prioritise funding existing regulators and helping them to work more closely together.

That would allow regulators to “respond swiftly when gaps are identified,” she said.

Andrew Breeze, Ofcom’s director for online safety technology policy, warned that regulation was struggling to keep pace with rapid advances in AI.

He said regulators’ powers are largely limited to how technologies are used rather than to the systems themselves, and that none of the main watchdogs has the power to approve or reject AI products before they reach the market.

Some members of the committee suggested the UK should consider creating a dedicated AI regulator.

Labour peer Baroness Chakrabarti compared the technology to the pharmaceutical sector.

“We would not dream of not having a specific medicines regulator,” she said.

“AI is capable of enormous good, but also enormous harm.”

Regulators, however, favoured a coordinating body rather than a single “super-regulator”, arguing that AI is a general-purpose technology best overseen by sector-specific authorities.

They pointed to the Digital Regulation Cooperation Forum, formed in 2020, as an early example of joint working between agencies.

Elizabeth Denham, the former information commissioner, told the committee that stronger information-sharing powers and the ability to conduct compulsory audits would prevent large technology companies from exploiting gaps between regulators.

Breeze also called for greater international cooperation, particularly to tackle AI-generated disinformation. He pointed out that the Online Safety Act does not give Ofcom the authority to regulate legal but harmful misinformation, except where children are affected.

Meanwhile, civil liberties groups warned the committee that the government’s approach could leave people exposed to new risks.

Silkie Carlo, director of Big Brother Watch, said recent policies had weakened protections against automated decision-making.

She warned that AI-enabled mass surveillance could “spiral out of control”, with systems built for one purpose easily repurposed for another.


