British Tech Firms and Child Safety Officials to Examine AI's Capability to Generate Exploitation Images

Technology companies and child protection organisations will be given the authority to evaluate whether artificial intelligence tools can generate child exploitation material under recently introduced British laws.

Significant Increase in AI-Generated Illegal Content

The announcement coincided with findings from a safety watchdog showing that cases of AI-generated CSAM have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

Updated Legal Framework

Under the amendments, the authorities will permit designated AI developers and child protection groups to inspect AI systems – the foundational technology behind chatbots and image generators – and verify that they have sufficient safeguards to prevent them from creating images of child exploitation.

"This is fundamentally about preventing abuse before it occurs," stated the minister for AI and online safety, adding: "Experts, under rigorous conditions, can now detect the danger in AI systems promptly."

Addressing Regulatory Challenges

The changes have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and others cannot create such images as part of an evaluation process. Previously, officials had to wait until AI-generated CSAM was uploaded online before dealing with it. The new law aims to prevent that problem by enabling officials to stop the creation of such images at their origin.

Legal Framework

The government is introducing the changes as amendments to the crime and policing bill, which also establishes a prohibition on possessing, creating or distributing AI models designed to create exploitative content.

Practical Consequences

This week, the official visited the London base of a children's helpline and listened to a mock-up call to counsellors involving an account of AI-based exploitation.
The call portrayed a teenager seeking help after facing extortion using a sexualised deepfake of themselves, constructed using AI.

"When I hear about children facing extortion online, it is a source of extreme anger in me and of justified concern amongst families," he said.

Alarming Data

A leading online safety foundation reported that instances of AI-generated exploitation content – such as online pages that may include multiple images – had significantly increased so far this year.

Cases of category A material – the most serious form of exploitation – rose from 2,621 visual files to 3,086.

- Girls were overwhelmingly targeted, making up 94% of prohibited AI depictions in 2025
- Depictions of newborns to toddlers rose from five in 2024 to 92 in 2025

Sector Response

The law change could "represent a vital step to guarantee AI tools are secure before they are released," commented the chief executive of the internet monitoring foundation.

"Artificial intelligence systems have made it possible for survivors to be targeted all over again with just a few simple actions, giving offenders the capability to create potentially endless amounts of sophisticated, photorealistic exploitative content," she added. "Content which further commodifies survivors' suffering, and renders young people, particularly girls, less safe both online and offline."

Support Interaction Data

The children's helpline also released details of support interactions where AI has been referenced. AI-related risks mentioned in the sessions include:

- Using AI to rate weight, physique and appearance
- Chatbots discouraging young people from consulting trusted adults about harm
- Facing harassment online with AI-generated content
- Digital extortion using AI-manipulated images

Between April and September this year, Childline delivered 367 support sessions where AI, conversational AI and related terms were discussed, significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy applications.