AI Safety

NLP Privacy Risk Identification in Social Media (NLP-PRISM): A Survey

A survey of 203 papers finds that transformer models lose 1-23% of task performance when privacy protections are applied to social media NLP tasks.

Deep Dive

Researchers Dhiman Goswami, Jai Kruthunz Naveen Kumar, and Sanchari Das published the NLP-PRISM survey, an analysis of 203 papers on AI privacy risks in social media. Their framework maps vulnerabilities across six dimensions, finding that transformer models achieve 0.58-0.84 F1-scores but suffer 1-23% performance drops when privacy protections are applied. The survey identifies major privacy gaps across six key NLP tasks and reports that membership inference attacks reach 0.81 AUC against current models.
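The 0.81 AUC figure refers to membership inference: an attacker tries to determine whether a specific record was in a model's training set, typically by exploiting the model's tendency to assign higher confidence to examples it was trained on. A minimal sketch of the simplest variant, a confidence-based attack scored by AUC (the function and all scores below are synthetic illustrations, not the survey's method or data):

```python
# Hypothetical sketch of a confidence-based membership inference
# attack. The attacker ranks records by model confidence; AUC
# measures how well that ranking separates training members from
# non-members (0.5 = chance, 1.0 = perfect leakage).

def membership_auc(member_scores, nonmember_scores):
    """Probability that a random member outranks a random
    non-member (pairwise, ties counted as half)."""
    wins = ties = 0
    for m in member_scores:
        for n in nonmember_scores:
            if m > n:
                wins += 1
            elif m == n:
                ties += 1
    total = len(member_scores) * len(nonmember_scores)
    return (wins + 0.5 * ties) / total

# Synthetic model confidences: training members tend to score
# higher, which is exactly the signal the attack exploits.
members = [0.97, 0.91, 0.88, 0.95, 0.78]
nonmembers = [0.62, 0.71, 0.55, 0.83, 0.49]

print(round(membership_auc(members, nonmembers), 2))  # → 0.96
```

An AUC well above 0.5, as in the survey's reported 0.81, means an observer with only query access can reliably guess which users' posts were in the training data.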

Why It Matters

Companies deploying NLP models on social media data face a concrete privacy-utility trade-off: the same models that deliver strong F1-scores can leak whether a user's data was in the training set, exposing organizations to data breaches and regulatory penalties. Addressing these vulnerabilities before deployment is cheaper than remediating them after a leak.