Enterprise & Industry

Microsoft has a new plan to prove what’s real and what’s AI online

The tech giant evaluated 60 combinations of digital provenance techniques to combat interactive deepfakes.

Deep Dive

Microsoft's AI safety research team has published a technical blueprint for verifying the authenticity of online content, evaluating 60 combinations of digital provenance methods against modern threats such as interactive deepfakes. The framework recommends standards for AI companies and platforms, drawing on approaches familiar from art authentication: provenance tracking, invisible watermarks, and cryptographic signatures. The work responds to new legislation and aims to help users distinguish real content from AI-generated manipulation on social media and professional networks.
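To illustrate the signature idea behind such provenance schemes, here is a minimal sketch of tamper detection on a piece of content. It uses an HMAC from Python's standard library as a simplified stand-in for the asymmetric (public-key) signatures real provenance systems use, and the key and content bytes are hypothetical placeholders, not anything from Microsoft's framework.

```python
import hashlib
import hmac

# Hypothetical shared key for the demo; real provenance systems
# use asymmetric key pairs so anyone can verify without the secret.
SIGNING_KEY = b"demo-signing-key"

def sign(content: bytes) -> str:
    """Produce a signature (here, an HMAC-SHA256 tag) over the content bytes."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    """Recompute the tag and compare in constant time; any edit to the bytes fails."""
    return hmac.compare_digest(sign(content), signature)

original = b"genuine video frame bytes"
sig = sign(original)
print(verify(original, sig))         # the untouched content verifies
print(verify(b"altered bytes", sig)) # any modification breaks verification
```

The point of the sketch is the asymmetry between creating and checking: the signature travels with the content, and even a one-byte change makes verification fail, which is what lets a platform flag manipulated media.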

Why It Matters

As AI-driven deception spreads, this framework could become the standard for verifying content on social platforms and professional networks.