Media & Culture

Internet Watch Foundation finds 260-fold increase in AI-generated CSAM in just one year, and "it’s the tip of the iceberg"

AI-generated child abuse videos exploded from 13 to 3,443 in one year, signaling a crisis.

Deep Dive

A new report from the Internet Watch Foundation (IWF), Europe's largest hotline for combating online child sexual abuse imagery, reveals a staggering 260-fold increase in AI-generated child sexual abuse material (CSAM) in just one year. The data shows a jump from only 13 videos in 2024 to 3,443 videos in 2025, indicating a rapid and alarming escalation in the volume of synthetic harmful content. Researchers warn that this documented surge represents only the "tip of the iceberg" of what is being created and shared, as these numbers reflect only what has been detected or proactively reported.

Experts from organizations like Thorn, a nonprofit that builds technology to fight online child exploitation, state that generative AI is not just producing more harmful content but is fundamentally transforming the threat landscape. The technology is changing the methods used to target children, exacerbating the revictimization of survivors through the creation of new synthetic imagery, and overwhelming the capacity of investigators and content moderators. This crisis complicates existing efforts to scrub such material from the internet and presents new, complex challenges for law enforcement and tech platforms tasked with detection and removal.

Key Points
  • IWF found a 260-fold year-over-year increase in AI-generated CSAM videos, from 13 to 3,443.
  • Experts from Thorn warn the reported numbers are just the "tip of the iceberg," reflecting only what has been detected or reported.
  • Generative AI is changing how children are targeted and overwhelming investigative and moderation systems.

Why It Matters

The explosive growth of synthetic abuse material overwhelms enforcement, revictimizes survivors, and demands urgent new detection solutions.