IWF reports sharp rise in AI-generated child sexual abuse material

The Internet Watch Foundation (IWF) has reported a sharp rise in AI-generated child sexual abuse material (CSAM), having assessed 8,029 such images and videos in 2025, a figure it says reflects the growing sophistication and accessibility of the technology. The IWF’s report, “Harm without limits,” stresses the severe human impact of AI CSAM, detailing how generative models can re-victimize survivors and fuel harmful behavior. It notes that 65% of the videos identified fell into Category A, the most extreme classification, and that AI chatbots facilitating simulated abuse scenarios are accessible on the clear web. The report also warns that converging AI tools have made abusive imagery far simpler to create, significantly lowering the barrier to entry for potential offenders. According to the IWF, actionable reports of AI-generated material rose 380% compared with 2024, marking a troubling trend in the evolution of AI abuse.

Disclaimer

The Legal Wire takes all necessary precautions to ensure that the materials, information, and documents on its website, including but not limited to articles, newsletters, reports, and blogs (“Materials”), are accurate and complete. Nevertheless, these Materials are intended solely for general informational purposes and do not constitute legal advice. They may not necessarily reflect the current laws or regulations. The Materials should not be interpreted as legal advice on any specific matter. Furthermore, the content and interpretation of the Materials and the laws discussed within are subject to change.

AI was used to generate part or all of this content.