British Technology Firms and Child Safety Agencies to Examine AI's Capability to Create Exploitation Content

Technology companies and child safety agencies will receive permission to assess whether AI systems can produce child abuse images under new UK legislation.

Substantial Increase in AI-Generated Harmful Material

The announcement came alongside findings from a protection watchdog showing that reports of AI-generated CSAM have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

New Legal Framework

Under the changes, the government will permit approved AI developers and child protection groups to inspect AI models – the foundational technology behind chatbots and visual AI tools – to ensure they have adequate safeguards preventing them from creating images of child exploitation.

"This is ultimately about stopping exploitation before it happens," declared Kanishka Narayan, noting: "Experts, under rigorous conditions, can now detect the danger in AI models promptly."

Tackling Regulatory Obstacles

The amendments have been introduced because producing and possessing CSAM is against the law, meaning that AI developers and other parties cannot create such images even as part of an evaluation process. Previously, authorities had to wait until AI-generated CSAM was uploaded online before addressing it.

This legislation is designed to avert that issue by helping to stop the production of such material at its source.

Legal Structure

The government is introducing the changes as amendments to criminal justice legislation, which also implements a prohibition on possessing, producing or sharing AI systems developed to create exploitative content.

Real-World Impact

Recently, the minister visited the London headquarters of a children's helpline and listened to a simulated call to counsellors involving an account of AI-based exploitation. The call depicted an adolescent requesting help after being extorted with a sexualised deepfake of themselves, created using AI.

"When I learn about young people facing blackmail online, it causes extreme anger in me and rightful anger amongst families," he said.

Alarming Data

A prominent online safety foundation stated that instances of AI-generated exploitation material – such as webpages that may include multiple files – had more than doubled so far this year.

Instances of the most serious category of abuse content increased from 2,621 visual files to 3,086.

  • Female children were predominantly targeted, accounting for 94% of prohibited AI depictions in 2025
  • Depictions of infants to two-year-olds rose from five in 2024 to 92 in 2025

Sector Reaction

The law change could "represent a vital step to guarantee AI products are secure before they are released," commented the head of the online safety organization.

"AI tools have made it so survivors can be targeted all over again with just a few simple actions, giving offenders the capability to create potentially limitless quantities of advanced, lifelike child sexual abuse material," she added. "Content which additionally exploits survivors' trauma, and makes young people, especially female children, more vulnerable both online and offline."

Support Interaction Information

Childline also released details of support interactions where AI has been referenced. AI-related harms discussed in the conversations include:

  • Employing AI to rate weight, physique and looks
  • AI assistants dissuading young people from consulting safe adults about abuse
  • Being bullied online with AI-generated material
  • Online blackmail using AI-manipulated pictures

Between April and September this year, the helpline conducted 367 counselling interactions where AI, conversational AI and related terms were mentioned – four times as many as in the same period last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.

Suzanne Pope