
UK researchers launch AI audit tool for non-experts

‘There is rarely an opportunity for people who will be affected by AI decision-making to help guide their development’


A research project led by the University of Glasgow has launched a free, open-source tool designed to make AI systems more trustworthy by enabling more rigorous audits, including by non-experts.

The PHAWM Workbench is an online tool designed to enable anyone to carry out audits of AI applications. It guides participants through a structured process by which they first understand the AI application through accessible summaries and then define audit criteria by reviewing potential harms or benefits and ways to measure them.

The tool is the first public output of the Participatory Harm Auditing Workbenches and Methodologies (PHAWM) project, which aims to help organisations, policymakers and individuals identify the potential benefits and harms of AI applications as they become more widespread.

It is designed explicitly for non-experts, with guides and processes explained in non-technical language.

Professor Simone Stumpf of the University of Glasgow’s School of Computing Science said AI applications are already influencing decisions in areas including housing, employment, finance, policing, education and healthcare. Therefore, it’s important that people affected by these decisions can understand and influence the systems making them, particularly as all AI systems suffer from biases and flaws.

“Until now, such audits have usually been conducted by people with a deep understanding of the processes which drive AI, but who may lack insight into the social or cultural impacts those systems may create,” she said.

“There is rarely an opportunity for people who will regularly use or will be affected by AI decision-making to help guide their development. Our new workbench tool is designed to help organisations create better, fairer, more transparent AI systems by providing diverse perspectives on AI applications which might otherwise go unexamined.”

By combining the experience of communities and end users affected by AI with technical assessments, the project aims to ensure that social, cultural and ethical impacts that might otherwise be overlooked are incorporated into the design of AI systems.

PHAWM, launched in 2024 and funded with £3.5 million from Responsible AI UK, brings together more than 30 researchers from seven UK universities along with partner organisations. Its aim is to tackle the challenge of developing trustworthy and safe AI systems.



