Enterprise & Industry

Abuse survivor Pelicot says mindsets need to change in fight against sexual violence

New AI models are creating hyper-realistic non-consensual imagery, sparking a global safety crisis.

Deep Dive

The proliferation of accessible AI image generators has driven a reported 300% year-over-year increase in cases of AI-generated non-consensual intimate imagery (deepfakes). Tools such as Midjourney v6 and Stable Diffusion 3 ship with safeguards, but users are circumventing them to produce photorealistic abuse material. The harm mirrors the trauma of real-world sexual violence, with victims reporting severe psychological damage, while law enforcement and platforms struggle to keep pace with the scale of automated content creation.

Why It Matters

As AI tools become more powerful and widespread, so does the risk that they will be weaponized for harassment and abuse, demanding urgent ethical and technical guardrails.