Media & Culture

EU backs nude app ban and delays to landmark AI rules

Key compliance deadlines for high-risk AI systems pushed to 2027, watermarking rules delayed until 2026.

Deep Dive

The European Parliament has voted to significantly delay the implementation of the EU's landmark AI Act, pushing back key compliance deadlines by over a year. The vote, passed by a large majority, postpones rules for 'high-risk' AI systems—those posing serious threats to health, safety, or fundamental rights—until December 2027. Companies developing AI for regulated sectors like medical devices or toys get an even longer runway, with a new deadline of August 2028. Crucially, the requirement for providers to watermark AI-generated content, originally set for this August, is now delayed until November 2026.

In a parallel move, lawmakers backed proposals to explicitly ban 'nudify' apps within the revised Act. This follows widespread public and political outrage in the EU over a flood of sexualized deepfakes generated by Grok and shared on X earlier this year. The proposed ban would not apply to AI systems with 'effective safety measures' preventing the creation of such non-consensual imagery.

The vote extends a period of regulatory uncertainty for businesses across the continent, which have already faced delays due to the EU missing its own deadlines for publishing essential guidance. The parliament must now negotiate the final text of these amendments with the Council of the European Union, a body of ministers from all 27 member states. It remains unclear if these changes can be finalized before the original August 2026 enforcement date, as the parliament cannot unilaterally alter European law.

Key Points
  • High-risk AI system compliance deadlines pushed from August 2026 to December 2027.
  • Watermarking rules for AI-generated content delayed from August 2026 to November 2026.
  • Parliament backed a ban on 'nudify' apps, a direct response to the Grok/X deepfake scandal.

Why It Matters

Creates extended regulatory uncertainty for AI companies in Europe and signals a targeted crackdown on harmful generative AI applications.