UK Technology Firms and Child Protection Agencies to Test AI's Ability to Generate Exploitation Content

Technology companies and child safety agencies will be granted authority under new UK laws to assess whether artificial intelligence systems can generate child abuse images.

Substantial Rise in AI-Generated Illegal Content

The announcement coincided with findings from a protection monitoring body showing that cases of AI-generated child sexual abuse material (CSAM) have risen sharply in the past year, from 199 in 2024 to 426 in 2025.

New Regulatory Structure

Under the amendments, the authorities will permit approved AI developers and child protection organizations to inspect AI models – the foundational systems behind chatbots and visual AI tools – and verify that they have sufficient safeguards to prevent them from creating depictions of child exploitation.

"Fundamentally, it is about stopping abuse before it happens," stated Kanishka Narayan, adding: "Specialists, under rigorous conditions, can now identify the risk in AI systems promptly."

Addressing Regulatory Obstacles

The changes address a legal obstacle: because it is illegal to produce and possess CSAM, AI developers and others have been unable to generate such content even as part of a testing regime. Previously, officials had to wait until AI-generated CSAM was uploaded online before dealing with it. The legislation is designed to prevent that problem by enabling the creation of such material to be stopped at its source.

Legislative Structure

The government is introducing the changes as amendments to the crime and policing bill, which also implements a prohibition on owning, creating or sharing AI models developed to create child sexual abuse material.

Practical Consequences

This week, the minister visited the London base of Childline and listened to a simulated call to counsellors involving a report of AI-based abuse.
The call depicted an adolescent requesting help after facing extortion using an explicit AI-generated image of themselves.

"When I hear about children experiencing blackmail online, it is a cause of extreme anger in me and justified anger amongst parents," he stated.

Alarming Statistics

A leading internet monitoring foundation reported that instances of AI-generated exploitation content – such as webpages that may contain numerous images – have increased significantly so far this year:

- Instances of category A content – the gravest form of abuse – rose from 2,621 visual files to 3,086.
- Female children were predominantly victimized, making up 94% of prohibited AI depictions in 2025.
- Portrayals of newborns to two-year-olds rose from five in 2024 to 92 in 2025.

Industry Response

The law change could "represent a crucial step to ensure AI products are secure before they are released," commented the chief executive of the internet monitoring foundation.

"Artificial intelligence systems have made it possible for victims to be victimised repeatedly with just a few simple actions, giving criminals the ability to create possibly limitless amounts of advanced, lifelike child sexual abuse material," she continued. "Material which further exploits survivors' suffering, and renders young people, particularly female children, more vulnerable both online and offline."

Support Session Data

The children's helpline also released details of counselling sessions in which AI was mentioned. AI-related harms raised in the sessions include:

- Using AI to evaluate body size, physique and looks
- AI assistants dissuading children from talking to trusted adults about harm
- Facing harassment online with AI-generated material
- Digital blackmail using AI-manipulated pictures

Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and related topics were mentioned, four times as many as in the same period last year.
Fifty percent of the references to AI in the 2025 sessions were connected with mental health and wellbeing, including the use of AI assistants for support and AI therapeutic apps.