UK Technology Companies and Child Safety Officials to Examine AI's Ability to Generate Abuse Images
Tech firms and child safety agencies will receive authority to evaluate whether artificial intelligence systems can produce child exploitation material under new British legislation.
Substantial Rise in AI-Generated Illegal Material
The announcement coincided with findings from a protection watchdog showing that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
New Legal Framework
Under the amendments, the government will allow approved AI developers and child safety groups to inspect AI systems – the underlying technology for chatbots and visual AI tools – and ensure they have adequate safeguards to prevent them from producing images of child sexual abuse.
"This is fundamentally about preventing abuse before it occurs," said Kanishka Narayan, adding: "Experts, under rigorous protocols, can now identify the risk in AI models promptly."
Tackling Regulatory Obstacles
The amendments address a legal obstacle: because it is against the law to produce and possess CSAM, AI developers and others could not create such content as part of an evaluation regime. Until now, officials had to wait until AI-generated CSAM appeared online before acting on it.
The new law aims to close that gap by helping to halt the creation of such images at source.
Legal Structure
The changes are being added by the government as revisions to the crime and policing bill, which is also implementing a ban on owning, producing or sharing AI models designed to generate exploitative content.
Real-World Consequences
Recently, the official toured the London base of a children's helpline and listened to a simulated call to counsellors involving a report of AI-based abuse. The call depicted a teenager seeking help after being blackmailed with an explicit AI-generated image of themselves.
"When I learn about young people facing blackmail online, it fills me with extreme anger, and families with justified concern," he said.
Concerning Statistics
A leading internet monitoring organization stated that instances of AI-generated abuse material – such as webpages that may include numerous files – had significantly increased so far this year.
- Cases of category A content – the gravest form of exploitation – increased from 2,621 images or videos to 3,086
- Girls were predominantly victimized, making up 94% of prohibited AI images in 2025
- Portrayals of newborns to two-year-olds increased from five in 2024 to 92 in 2025
Industry Reaction
The legislative amendment could "constitute a vital step to ensure AI products are secure before they are launched," commented the head of the internet monitoring foundation.
"Artificial intelligence systems have made it so survivors can be victimised repeatedly with just a few clicks, giving criminals the capability to create a potentially endless amount of sophisticated, photorealistic exploitative content," she added. "Material which further exploits survivors' suffering, and makes young people, particularly girls, less safe online and offline."
Support Session Information
The children's helpline also published details of support sessions in which AI was mentioned. AI-related harms discussed in the sessions include:
- Using AI to evaluate body size and appearance
- AI assistants discouraging young people from talking to trusted guardians about abuse
- Being bullied online with AI-generated content
- Online blackmail using AI-manipulated images
Between April and September this year, Childline delivered 367 support sessions where AI, chatbots and related topics were discussed, four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.