Graph-Propagated Projection Unlearning: A Unified Framework for Vision and Audio Discriminative Models
A new unified framework erases specific learned information from vision and audio models 10-20x faster than prior unlearning methods.
Researchers Shreyansh Pathak and Jyotishman Das have introduced Graph-Propagated Projection Unlearning (GPPU), a novel and scalable algorithm designed to selectively erase learned information from deep neural networks. This addresses a growing need for privacy, regulatory compliance (like 'right to be forgotten' laws), and adaptive AI systems. GPPU operates as a unified framework, meaning it works across different data modalities—specifically vision and audio discriminative models. Its core innovation lies in using graph-based propagation to efficiently pinpoint the exact feature-space directions associated with a target class (like a specific person's face or a copyrighted song) that needs to be forgotten.
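The summary does not spell out GPPU's exact propagation rule, so the following is only a hedged sketch: it assumes a personalized-PageRank-style spread over a feature-similarity graph, where units seeded as belonging to the forget class pass relevance to features entangled with them. The toy graph, the `propagate_scores` function, and all parameter values are illustrative assumptions, not details from the paper.

```python
import numpy as np

def propagate_scores(adj, seed, alpha=0.85, iters=50):
    """Personalized-PageRank-style propagation over a feature graph:
    spread a seed signal (units tied to the forget class) so that
    features entangled with that class accumulate high scores."""
    # Row-normalize the adjacency so each step averages over neighbors.
    deg = adj.sum(axis=1, keepdims=True)
    P = adj / np.maximum(deg, 1e-12)
    s = seed.astype(float).copy()
    for _ in range(iters):
        # Blend propagated mass with a restart at the seed nodes.
        s = alpha * (P.T @ s) + (1 - alpha) * seed
    return s

# Hypothetical toy graph: 6 feature units, unit 0 seeded as forget-class.
adj = np.array([[0, 1, 1, 0, 0, 0],
                [1, 0, 1, 0, 0, 0],
                [1, 1, 0, 1, 0, 0],
                [0, 0, 1, 0, 1, 1],
                [0, 0, 0, 1, 0, 1],
                [0, 0, 0, 1, 1, 0]], dtype=float)
seed = np.zeros(6)
seed[0] = 1.0
scores = propagate_scores(adj, seed)
# Units near the seed (0, 1, 2) score higher than distant ones (4, 5),
# marking them as candidates for the forget-class feature directions.
assert scores[1] > scores[4] and scores[2] > scores[5]
```

In this reading, the high-scoring units define the feature-space directions handed to the projection step; the real method operates on learned representations rather than a hand-built toy graph.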
The algorithm then projects the model's representations onto an orthogonal subspace, scrubbing out the targeted information, and follows this with a minimal, targeted fine-tuning step to stabilize performance. This two-step process removes the information irreversibly. The researchers conducted comprehensive evaluations at scale, testing on six vision datasets and two large-scale audio benchmarks, and applying the method to architectures including Convolutional Neural Networks (CNNs), Vision Transformers (ViTs), and Audio Transformers. The results demonstrate that GPPU is not only effective but remarkably efficient, achieving 10-20x speedups over previous unlearning methods while maintaining the model's accuracy on all the classes it is meant to retain.
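The projection step can be sketched in isolation. Assuming the identified forget-class directions are collected as columns of a matrix, projecting each representation onto their orthogonal complement (P = I - UUᵀ, with U an orthonormal basis for those directions) removes the corresponding components. The function name, dimensions, and random data below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def orthogonal_projection_matrix(forget_dirs):
    """Build P = I - U @ U.T, which projects vectors onto the
    orthogonal complement of the span of the forget directions."""
    # Orthonormalize the forget-class directions (columns) via QR.
    U, _ = np.linalg.qr(forget_dirs)
    d = forget_dirs.shape[0]
    return np.eye(d) - U @ U.T

# Hypothetical setup: 8-dim features, two directions to forget.
rng = np.random.default_rng(0)
forget_dirs = rng.standard_normal((8, 2))
P = orthogonal_projection_matrix(forget_dirs)

features = rng.standard_normal((5, 8))  # batch of representations
scrubbed = features @ P.T               # components along forget dirs removed

# After projection, nothing remains along the forgotten directions.
assert np.allclose(scrubbed @ forget_dirs, 0.0, atol=1e-8)
```

Because P is symmetric and idempotent (P @ P = P), reapplying the projection changes nothing, which is consistent with the claim that the removal is irreversible at the representation level; the subsequent fine-tuning step then restores accuracy on the retained classes.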
- Unified framework works across vision (CNNs, ViTs) and audio (Audio Transformers) models, a modality-agnostic approach.
- Achieves 10-20x faster unlearning speeds than prior methods, as validated on six vision and two audio datasets.
- Uses graph propagation to find class-specific feature directions, projects them away, then applies targeted fine-tuning for irreversible removal.
Why It Matters
Enables practical compliance with data privacy laws and allows AI models to be efficiently updated or corrected without full retraining.