British Tech Firms and Child Safety Agencies to Examine AI's Capability to Create Abuse Content

Tech firms and child safety agencies will be granted authority to evaluate whether artificial intelligence tools can produce child abuse images under recently introduced UK laws.

Substantial Rise in AI-Generated Illegal Material

The announcement coincided with findings from a protection watchdog showing that cases of AI-generated child sexual abuse material have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.

New Regulatory Structure

Under the changes, the authorities will permit designated AI developers and child safety organizations to inspect AI models – the foundational systems for chatbots and visual AI tools – and verify they have sufficient protective measures to prevent them from creating depictions of child exploitation.

The change is "fundamentally about stopping abuse before it occurs," said the minister for AI and online safety, adding: "Specialists, under strict protocols, can now detect the danger in AI systems promptly."

Addressing Regulatory Obstacles

The amendments were needed because it is against the law to produce or possess CSAM, meaning that AI developers and others could not create such content even as part of a testing process. Previously, officials had to wait until AI-generated CSAM had been uploaded online before they could act on it.

The legislation is designed to avert that problem by allowing the production of such material to be halted at source.

Legal Framework

The authorities are introducing the changes as amendments to the crime and policing bill, which also establishes a prohibition on possessing, creating or distributing AI systems designed to create exploitative content.

Practical Consequences

Recently, the minister visited the London headquarters of a children's helpline and listened to a mock-up call to advisors featuring a report of AI-based abuse. The call portrayed an adolescent seeking help after being blackmailed with a sexualised AI-generated image of themselves.

"When I hear about young people experiencing extortion online, it is a source of intense frustration for me and of rightful concern among families," he stated.

Alarming Statistics

A leading internet monitoring organization said that reports of AI-generated exploitation content – recorded as online pages, each of which may contain numerous files – had risen significantly so far this year.

Instances of category A material – the gravest form of abuse – rose from 2,621 visual files to 3,086.

  • Female children were predominantly targeted, accounting for 94% of illegal AI images in 2025
  • Depictions of infants to toddlers increased from five in 2024 to 92 in 2025

Sector Reaction

The legislative amendment could "constitute a vital step to ensure AI products are secure before they are launched," commented the chief executive of the internet monitoring foundation.

"AI tools have made it possible for survivors to be victimised all over again with just a few clicks, giving offenders the ability to create potentially limitless amounts of sophisticated, lifelike child sexual abuse material," she continued. "Material which further commodifies survivors' trauma, and makes children, particularly girls, less safe on and offline."

Support Session Information

Childline also published details of counselling sessions in which AI was mentioned. AI-related risks discussed in the sessions include:

  • Using AI to evaluate weight, physique and looks
  • Chatbots discouraging children from consulting trusted guardians about harm
  • Being bullied online with AI-generated content
  • Online blackmail using AI-faked pictures

Between April and September this year, Childline conducted 367 support sessions in which AI, chatbots and related terms were discussed – significantly more than in the equivalent period last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy applications.

Edward Carpenter
