Sharpness-Aware Poisoning: Enhancing Transferability of Injective Attacks on Recommender Systems
New method targets worst-case victim models to boost attack success across systems
Researchers Junsong Xie, Yonghui Yang, Pengyang Shao, and Le Wu from Hefei University of Technology have introduced SharpAP (Sharpness-Aware Poisoning), a new attack method targeting recommender systems (RS). Recommender systems are vulnerable to injective attacks, in which attackers inject a limited number of fake user profiles to promote target items for economic or political gain. Traditional methods rely on a fixed surrogate model to mimic potential victim models, an assumption that breaks down when the surrogate and victim differ structurally, leading to poor attack transferability.
SharpAP addresses this by employing sharpness-aware minimization to identify the approximately worst-case victim model within the large space of possible models. It formulates the attack as a min-max-min tri-level optimization problem and iteratively optimizes the poisoned data against this worst-case model. The resulting poisoned data is less sensitive to shifts in model structure, mitigating overfitting to the surrogate. Comprehensive experiments on three real-world datasets demonstrate that SharpAP significantly enhances attack transferability, making it a potent tool for understanding and defending against RS vulnerabilities.
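To make the sharpness-aware idea concrete, here is a minimal sketch of a generic sharpness-aware minimization (SAM) update on a toy quadratic loss. This is not the authors' implementation; the function names (`sam_step`, `loss`, `grad`) and the quadratic objective are illustrative assumptions. The key mechanic is the two-phase step: first ascend to the worst-case point inside a small ball around the current weights, then descend using the gradient taken at that worst-case point.

```python
import numpy as np

def loss(w, A):
    # Toy quadratic stand-in for a model's training loss: 0.5 * w^T A w.
    return 0.5 * w @ A @ w

def grad(w, A):
    # Analytic gradient of the quadratic loss.
    return A @ w

def sam_step(w, A, lr=0.05, rho=0.05):
    # Phase 1 (inner max): move to the approximately worst-case point
    # within an L2 ball of radius rho around w (first-order SAM ascent).
    g = grad(w, A)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Phase 2 (outer min): update the ORIGINAL weights with the gradient
    # evaluated at the perturbed, worst-case point.
    g_sharp = grad(w + eps, A)
    return w - lr * g_sharp

A = np.diag([1.0, 10.0])       # ill-conditioned (sharp) loss surface
w = np.array([1.0, 1.0])
for _ in range(100):
    w = sam_step(w, A)
```

Because the descent direction is computed at the worst point of the neighborhood rather than at `w` itself, the iterates are steered toward flat minima; in SharpAP the analogous perturbation is applied in model-parameter space so the poisoned data remains effective for structurally shifted victim models.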
- SharpAP uses sharpness-aware minimization to find worst-case victim models, improving attack transferability across structurally different models
- Formulated as a min-max-min tri-level optimization problem for robust poisoned data generation
- Experiments on three real-world datasets show significant enhancement in attack transferability over existing methods
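The min-max-min structure above can be illustrated with a deliberately tiny stand-in problem. Everything here is a hypothetical toy, not the paper's algorithm: the "victim" is just the mean of all user profiles, the attack objective pushes that mean toward a target value, and the middle max perturbs the trained parameter toward its worst case inside a small ball.

```python
import numpy as np

target = 2.0                        # value the attacker wants the model to reach (toy)
data = np.array([0.0, 0.2, -0.1])  # benign user profiles (toy, scalar)
poison = np.array([0.0])           # injected fake profile (toy, attacker-controlled)

def train(data, poison):
    # Inner min: "training" the victim is just averaging all profiles here.
    return np.concatenate([data, poison]).mean()

def attack_loss(w):
    # Attacker's objective: drive the trained parameter toward the target.
    return (w - target) ** 2

rho, lr = 0.05, 0.5
for _ in range(200):
    w = train(data, poison)                 # inner min: fit the surrogate
    # Middle max: worst-case model within a rho-ball (scalar case, so the
    # normalized gradient direction reduces to its sign).
    eps = rho * np.sign(2 * (w - target))
    w_worst = w + eps
    # Outer min: gradient of the attack loss w.r.t. the poison, using
    # d(mean)/d(poison_i) = 1/n for this toy mean "model".
    n = len(data) + len(poison)
    g_p = 2 * (w_worst - target) / n
    poison = poison - lr * g_p
```

After the loop the poison has shifted the trained parameter close to the target even under the worst-case perturbation, which is the tri-level intuition: the outer poison update only "wins" if it succeeds against the most unfavorable nearby model, not just the surrogate it was fitted on.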
Why It Matters
Exposes a critical security flaw in recommender systems, enabling more effective defenses against unethical content-promotion attacks.