
Our laws aren’t keeping up with AI and deepfake abuse in the classroom

AI-generated abuse imagery targeting children is happening now, and our legal and educational systems are struggling to keep up.

Creating synthetic imagery of children has become readily accessible. Free or low-cost AI tools, once requiring technical expertise, now operate through simple interfaces. A perpetrator needs only photographs – often harvested from social media, school websites or sports team pages – and basic software to generate explicit deepfakes within minutes.

Young people are particularly vulnerable. Their digital footprints are extensive, their images widely shared and their understanding of these risks often minimal.

The images can circulate permanently online and be weaponised for bullying, sextortion or grooming. For young people, whose identities and reputations are forming, deepfake abuse can be catastrophic.

Safeguarding laws not fit for AI world

Current legislation is struggling because it was designed for a pre-AI world. While the Protection of Children Act 1978 makes it an offence to create, possess or distribute AI-generated indecent images, the law does not explicitly address AI models that are specifically trained to generate sexual abuse imagery.

The Online Safety Act 2023 offers some protection, requiring platforms to prevent child sexual abuse content. However, enforcement mechanisms are still developing and the Act’s scope doesn’t fully address the unique characteristics of synthetic imagery.

The Crime and Policing Bill is taking steps in the right direction, with four measures targeting AI imagery:

Introducing a new offence that makes it illegal to adapt, possess or distribute AI models designed to create child sexual abuse imagery.

Expanding the definition of child sexual abuse imagery to include AI-created content.

Expanding the current provision in the Serious Crime Act 2015, which makes it illegal to possess child sexual abuse imagery, to include AI-generated images.

Criminalising those who maintain or control websites distributing child sexual abuse imagery and those providing access to such platforms.

How the education system can address AI-generated abuse

Schools should act now to address the reality of AI-generated abuse imagery. This should begin with training for designated safeguarding leads to ensure they understand AI-generated imagery risks, detection methods and trauma-informed responses.

All staff should then be trained to understand the risks, and to be clear on what to do if they have concerns or if a child makes a disclosure to them.

Students also need educating on the risks and how to keep themselves safe. They need to understand how their images can be weaponised, how to audit their digital footprint and how to use privacy settings effectively. They must also understand that creating, sharing, or threatening to share deepfakes is a serious crime.

Without proactive action, the education system risks falling behind on deepfakes, an issue that is causing significant challenges throughout society and one where the law simply isn't keeping pace with technological advancement.

__________________
Dai Durbridge, partner in the education safeguarding team at UK and Ireland law firm Browne Jacobson

LBC Opinion provides a platform for diverse opinions on current affairs and matters of public interest.

The views expressed are those of the authors and do not necessarily reflect the official LBC position.

