Federated Learning and Unlearning for Recommendation with Personalized Data Sharing
New AI research enables users to trade data for better recommendations, then delete it later without retraining the model from scratch.
A research team led by Liang Qu has introduced FedShare, a novel framework that fundamentally rethinks privacy in federated recommender systems (FedRS). Unlike traditional FedRS that enforce strict local data storage for all users, FedShare introduces personalized data sharing, allowing users to voluntarily contribute portions of their interaction data to a central server in exchange for higher-quality recommendations. The system constructs a server-side user-item graph from this shared data and employs contrastive learning to align local and global user representations, boosting model performance for participating users.
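The paper does not publish its exact loss in this summary, but contrastive alignment of two views of the same user is commonly implemented as an InfoNCE-style objective: each user's local embedding is pulled toward that same user's global (server-side graph) embedding, while other users' global embeddings act as negatives. The function name, temperature value, and NumPy formulation below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def info_nce_alignment(local_emb, global_emb, temperature=0.2):
    """Illustrative InfoNCE-style alignment loss (hypothetical, not FedShare's
    exact objective): user i's (local, global) embedding pair is the positive;
    all other users' global embeddings serve as in-batch negatives."""
    # L2-normalize so dot products become cosine similarities
    l = local_emb / np.linalg.norm(local_emb, axis=1, keepdims=True)
    g = global_emb / np.linalg.norm(global_emb, axis=1, keepdims=True)
    logits = l @ g.T / temperature                    # (n_users, n_users)
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives sit on the diagonal: user i's local view vs. user i's global view
    return -np.mean(np.diag(log_probs))
```

Minimizing this loss drives each participating user's two representations together, which is what lets the shared server-side graph improve that user's local model.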
The framework's breakthrough feature is its integrated 'unlearning' capability, addressing a critical gap in existing personalized FedRS. When a user requests to revoke previously shared data, FedShare's contrastive unlearning mechanism selectively removes the influence of that data from the trained model. It achieves this using a small number of historical embedding snapshots, bypassing the need to store massive historical gradient information. This approach cuts storage overhead by 60-80% compared to state-of-the-art federated unlearning baselines, as validated in extensive experiments on three public datasets. FedShare maintains robust recommendation accuracy in both learning and unlearning phases, showing that dynamic privacy preferences and efficient data removal can coexist without sacrificing system utility.
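The bookkeeping behind snapshot-based unlearning can be sketched as follows. Instead of logging a gradient record every round (the costly approach the paper avoids), the server keeps occasional copies of the embedding table; on a revocation request, it can start from the last snapshot that predates the shared data. The class name, `interval` parameter, and rollback interface are hypothetical simplifications of the paper's contrastive mechanism:

```python
import numpy as np

class SnapshotStore:
    """Hypothetical sketch of snapshot bookkeeping for unlearning. FedShare's
    actual mechanism builds a contrastive objective on a few such snapshots;
    this class only illustrates the storage side of the idea."""

    def __init__(self, interval=5):
        self.interval = interval   # keep one snapshot every `interval` rounds
        self.snapshots = {}        # round index -> copy of the embedding table

    def maybe_snapshot(self, round_idx, embeddings):
        # Far cheaper than per-round gradient logs: one copy per `interval` rounds.
        if round_idx % self.interval == 0:
            self.snapshots[round_idx] = embeddings.copy()

    def rollback(self, share_round):
        """Return the latest snapshot taken strictly before `share_round`,
        i.e. a model state that never absorbed the now-revoked data."""
        eligible = [r for r in self.snapshots if r < share_round]
        if not eligible:
            raise ValueError("no snapshot predates the shared data")
        return self.snapshots[max(eligible)]
```

In this sketch, a revocation of data shared at round 12 with snapshots every 5 rounds restores the round-10 state, after which selective re-updating (rather than full retraining) can recover accuracy for the remaining data.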
- Enables personalized data sharing: Users can trade specific amounts of interaction data for better recommendation quality, moving beyond 'all-or-nothing' privacy.
- Supports data 'unsharing': Users can later request removal of their shared data and its influence on the model, a feature absent in prior FedRS research.
- Reduces storage overhead by 60-80%: Uses embedding snapshots instead of storing full gradient history, making federated unlearning practically feasible.
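The storage claim in the last bullet follows from simple arithmetic: a gradient-replay baseline stores one model-sized record per training round, while snapshotting stores one per snapshot interval. The function and the interval values below are a back-of-envelope illustration, not the paper's accounting:

```python
def storage_saving(num_rounds, snapshot_interval):
    """Illustrative (hypothetical) relative storage saving of keeping one
    embedding snapshot per `snapshot_interval` rounds versus one
    gradient-sized record per round, assuming equal record sizes."""
    baseline_records = num_rounds                       # one per round
    snapshot_records = num_rounds // snapshot_interval  # one per interval
    return 1 - snapshot_records / baseline_records
```

Under these assumptions, snapshotting every 3 to 5 rounds over 100 rounds yields savings of roughly 67% to 80%, in line with the 60-80% range reported for the experiments.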
Why It Matters
FedShare bridges the gap between user privacy and model performance, enabling a practical 'right to be forgotten' in AI-powered recommendation systems.