UK Moves to Authorize Pre-Deployment AI Testing for Illegal Sexual Content

Published Nov 16, 2025

On 12 November 2025 the UK government tabled amendments to the Crime and Policing Bill that would designate AI developers and child-protection organisations (e.g., the Internet Watch Foundation) as “authorised testers” legally permitted to test models for their capacity to generate child sexual abuse material (CSAM), non-consensual intimate images (NCII) and extreme pornography, and would create a new “testing defence” shielding such tests from prosecution. The change responds to IWF data showing that reports of AI-generated CSAM more than doubled (from 199 in 2024 to 426 in 2025), images of children aged 0–2 rose from 5 to 92, and Category A material increased from 2,621 to 3,086 items (now 56% of illegal AI material, up from 41% the prior year). If enacted, regulators must set authorised-tester criteria and safeguards; immediate implications include pre-deployment safety testing by developers, expanded technical roles for NGOs, and new obligations tied to model release.

Proposed UK law would enable authorised AI testing to prevent illegal child abuse content

What happened

The UK Government tabled amendments to the Crime and Policing Bill on 12 November 2025 to allow authorised testing of AI models for their capacity to generate illegal sexual content before deployment. The amendments would designate AI developers and child-protection organisations (for example, the Internet Watch Foundation) as “authorised testers” and create a legal “testing defence” shielding such testing from prosecution when offences arise only in the course of testing.

New IWF data cited by the government shows reports of AI‐generated child sexual abuse material (CSAM) rose from 199 in 2024 to 426 in 2025; images of children aged 0–2 rose from 5 to 92. Category A material increased from 2,621 to 3,086 items and now accounts for 56% of illegal AI material (up from 41%).

Why this matters

Policy shift: proactive safety checks before launch. This reverses the traditional reactive enforcement model by legally enabling pre-deployment testing of generative models for CSAM, non-consensual intimate images (NCII) and extreme pornography (EP). Scale and severity metrics (the doubling of reports, the surge in infant imagery, and the growth in Category A content) provide the stated urgency. Practical consequences include:

  • Developers will likely incorporate authorised testing and safety audits into release processes.
  • NGOs like the IWF may move from takedown roles to proactive technical testers, requiring secure infrastructure and expertise.
  • Regulators must define who qualifies as an authorised tester, the safeguards required, and the scope of the testing defence — raising questions about standards, accountability, and potential impact on lawful expression.
  • Legal risk shifts: authorised testers gain limited immunity, while non‐compliant developers may face enforcement or obligations to remediate models.

Sources

  • UK Government news release summarising the measures and IWF data: https://www.gov.uk/government/news/new-law-to-tackle-ai-child-abuse-images-at-source-as-reports-more-than-double
  • Letter and government amendments for Committee (12 Nov 2025): https://www.gov.uk/government/publications/crime-and-policing-bill-government-amendments-for-committee/letter-from-lord-hanson-to-lord-davies-detailing-government-amendments-for-lords-committee-stage-12-november-2025-accessible
  • ITV coverage on the law change and testing issue: https://www.itv.com/news/2025-11-11/law-change-set-to-allow-ai-testing-to-prevent-creation-of-child-sex-abuse-images
  • Crime and Policing Bill factsheet (child sexual abuse material): https://www.gov.uk/government/publications/crime-and-policing-bill-2025-factsheets/crime-and-policing-bill-child-sexual-abuse-material-factsheet

Sharp Increase in AI-Generated CSAM Reports and Illegal Material in 2025

  • AI-generated CSAM reports — 426 reports (2025; +227, +114.07% vs 2024; IWF, UK)
  • AI-generated CSAM images of children aged 0–2 — 92 images (2025; +87, +1,740% vs 2024; IWF, UK)
  • Category A illegal AI material — 3,086 items (2025; +465, +17.74% vs 2024; IWF, UK)
  • Category A share of illegal AI material — 56% (2025; +15 pp vs 2024; IWF, UK)
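The deltas and percentages in the list above follow directly from the cited IWF counts. Below is a minimal sketch in Python reproducing the arithmetic; the yoy_change helper and metric labels are illustrative, not part of any official tooling.

```python
# Illustrative arithmetic only: reproduces the year-over-year deltas and
# percentage changes listed above from the cited IWF counts.

def yoy_change(prev: int, curr: int) -> tuple[int, float]:
    """Return (absolute change, percentage change vs the previous year)."""
    return curr - prev, (curr - prev) / prev * 100

# 2024 and 2025 values as reported in the IWF figures cited by the government.
metrics = {
    "AI-generated CSAM reports": (199, 426),
    "CSAM images of children aged 0-2": (5, 92),
    "Category A illegal AI material (items)": (2_621, 3_086),
}

for name, (prev, curr) in metrics.items():
    delta, pct = yoy_change(prev, curr)
    print(f"{name}: {curr} in 2025 ({delta:+,d}, {pct:+,.2f}% vs 2024)")
```

Running this yields the figures used above, e.g. +227 (+114.07%) for reports and +87 (+1,740.00%) for images of children aged 0–2.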

Navigating AI Testing Risks: Compliance, Security, and Regulatory Uncertainty Challenges

  • Pre-deployment compliance and release risk. UK amendments (12 Nov 2025) create authorised testing and a “testing defence,” shifting liability upstream; models may face audits before launch, and non-compliant developers risk injunctions or forced remediation. Why it matters: IWF reports of AI-generated CSAM doubled (199→426) and Category A now accounts for 56%, increasing scrutiny. Opportunity: build safety-by-design pipelines and partner with authorised testers to de-risk launches; developers and regulators benefit.
  • Secure handling of test-generated contraband. Authorised testing will intentionally generate and possess CSAM/NCII/EP, raising leakage, re-identification, and insider-threat risks, with severe legal and reputational fallout if mishandled. Why it matters: the bill’s defence applies only within testing; NGOs will need technical capacity and secure operations. Opportunity: invest in hardened test environments, strict data governance, and funding for NGOs to professionalize security; developers, the IWF and similar bodies benefit.
  • Known unknown: authorisation criteria, testing standards, and scope limits. It is unclear who qualifies as an “authorised tester,” what metrics and thresholds apply, and where the testing defence begins and ends; inclusion of NCII/EP raises free-expression overreach risks, and international spillover could fragment requirements. Why it matters: uncertainty complicates product planning and cross-border compliance. Opportunity: engage early in standard-setting to shape practical rules and align internal processes; industry bodies, developers, and regulators benefit.

Key 2025-2026 Milestones Shaping AI Testing and Safety Regulations

  • Q4 2025 (TBD): UK Parliament decides on the Crime and Policing Bill AI testing amendments. Impact: if approved, the testing defence becomes law, enabling proactive model checks.
  • Q4 2025 (TBD): Designated authority/regulators publish criteria and safeguards to qualify authorised testers. Impact: defines accreditation, safeguards and scope for testing by the IWF and AI developers.
  • Q1 2026 (TBD): Regulators issue technical specs covering testing parameters, thresholds and metrics. Impact: standardizes pre-deployment safety tests for CSAM/NCII/EP across modalities.
  • Q1 2026 (TBD): Initial authorised testers formally designated (e.g., the IWF and selected AI developers). Impact: operational testing begins; evaluation shifts to before model deployment.
  • Q1 2026 (TBD): Guidance on data stewardship for test-generated harmful content is published. Impact: sets secure handling, access and deletion protocols; reduces legal risk.

Prevention by Permission: Will Legal AI Testing Reduce Online Child Abuse Content?

Supporters frame the UK’s move as a necessary inversion of the status quo: stark IWF figures—AI‐generated CSAM reports more than doubling, depictions of 0–2‐year‐olds spiking from 5 to 92, and the most severe material rising to 56%—justify shifting safety checks upstream and removing the legal handcuffs that have blocked developers from testing. Skeptics counter that authorising probes for CSAM, NCII and extreme pornography risks sanctioned leakage and chilled expression, especially while it’s unclear who will qualify as an “authorised tester,” how test artefacts will be secured, or where the new “testing defence” begins and ends. To stop the worst images, the state will permit their creation—under license. Are we sure that line will hold? Even pragmatists flag the implementation burden on NGOs, uncertain metrics, and the possibility that a UK template could trigger either regulatory harmony or fragmentation abroad.

The counterintuitive takeaway is that reducing illegal AI content may require a tightly circumscribed right to create it first: a small, explicit permission meant to prevent large, implicit harms. If the bill passes and a designated authority defines parameters, stewardship rules, and qualifications, developers will build constraints before release, NGOs will pivot from reactive removal to proactive evaluation, and product launches will be reorganised around permissioned safety checks. Watch the bill’s final text, the scope and limits of the testing defence, data‐handling protocols for test outputs, and whether other jurisdictions copy or diverge. What shifts next isn’t just compliance; it’s who holds risk, when, and for how long—and whether prevention by permission can earn public trust.