UK Technology Firms and Child Protection Officials to Examine AI's Capability to Generate Abuse Content

Tech firms and child protection organizations will be granted permission to evaluate whether AI tools can generate child abuse material under new UK laws.

Substantial Increase in AI-Generated Illegal Material

The announcement came alongside revelations from a safety monitoring body showing that cases of AI-generated child sexual abuse material have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.

Updated Legal Structure

Under the changes, the government will permit approved AI companies and child protection organizations to examine AI models – the foundational systems for conversational AI and visual AI tools – and ensure they have sufficient protective measures to stop them from producing depictions of child exploitation.

The measures are "ultimately about preventing abuse before it happens," stated the minister for AI and online safety, noting: "Experts, under strict conditions, can now detect the danger in AI models early."

Addressing Regulatory Challenges

The amendments were needed because producing and possessing CSAM is against the law, meaning that AI developers and other parties could not generate such content even as part of a testing process. Previously, authorities had to wait until AI-generated CSAM was published online before they could act on it.

The new law aims to avert that problem by enabling experts to halt the production of such material at source.

Legal Framework

The government is introducing the changes as modifications to criminal justice legislation, which also establishes a ban on possessing, producing or sharing AI models designed to generate exploitative content.

Practical Consequences

This week, the minister toured the London base of Childline and listened to a simulated call to counsellors featuring an account of AI-based abuse. The interaction portrayed a teenager requesting help after being blackmailed with an explicit AI-generated image of himself.

"When I hear about children experiencing extortion online, it causes intense anger in me and justified anger amongst families," he said.

Concerning Data

A prominent internet monitoring organization reported that instances of AI-generated exploitation content – such as webpages that may each contain numerous images – have significantly increased so far this year.

  • Category A content – the gravest form of abuse – increased from 2,621 visual files to 3,086
  • Girls were overwhelmingly victimized, making up 94% of prohibited AI images in 2025
  • Portrayals of newborns to toddlers increased from five in 2024 to 92 in 2025

Sector Response

The legislative amendment could "represent a vital step to ensure AI tools are secure before they are launched," stated the chief executive of the online safety foundation.

"Artificial intelligence systems have made it so victims can be victimised repeatedly with just a few clicks, giving criminals the capability to make potentially limitless amounts of sophisticated, photorealistic exploitative content," she added. "Content which further commodifies survivors' trauma, and makes young people, especially girls, more vulnerable online and offline."

Counseling Session Information

The children's helpline also published details of support interactions where AI was referenced. AI-related risks discussed in the conversations included:

  • Employing AI to evaluate body size and appearance
  • Chatbots discouraging children from talking to trusted adults about abuse
  • Being bullied online with AI-generated material
  • Digital extortion using AI-manipulated images

Between April and September this year, Childline conducted 367 support sessions where AI, conversational AI and related topics were mentioned, four times as many as in the same period last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapeutic applications.

Linda Kelly

A tech enthusiast and gaming aficionado with over a decade of experience in digital media and content creation.