Media & Culture

One of Grammarly’s ‘experts’ is suing the company over its identity-stealing AI feature

Journalist Julia Angwin files class-action after discovering her identity was used without consent.

Deep Dive

Journalist Julia Angwin has filed a class-action lawsuit against Grammarly, alleging the company's AI 'Expert Review' feature used her identity and likeness without consent. The feature, which offered writing suggestions attributed to real experts, turned out to include several journalists and academics, among them Verge editor-in-chief Nilay Patel and reporter Casey Newton. Angwin's complaint, filed in federal court, claims Grammarly violated laws against using someone's identity for commercial purposes without permission, specifically citing violations of her privacy and publicity rights.

Grammarly CEO Shishir Mehrotra announced the company is disabling the 'Expert Review' feature following the controversy. The company had initially created an opt-out email system for affected individuals, but the lawsuit prompted immediate action. This case represents one of the first major legal challenges to how AI companies use real people's identities in their products, potentially setting precedent for future litigation around AI training data and digital likeness rights.

The lawsuit comes as AI companies face increasing scrutiny over their data sourcing practices. Grammarly's 'Expert Review' was designed to help users discover influential perspectives, but the implementation raised significant ethical questions about consent and compensation. Legal experts suggest this case could influence how AI companies approach identity usage, potentially requiring explicit licensing agreements similar to those used in traditional media and advertising industries.

Key Points
  • Journalist Julia Angwin filed a class-action lawsuit against Grammarly over the unauthorized use of her identity in an AI feature
  • Grammarly's 'Expert Review' feature used real writers' names and likenesses without consent, including those of Verge staff members
  • The company initially offered an opt-out email system, then disabled the feature after the lawsuit; its CEO apologized and promised to 'rethink our approach'

Why It Matters

Could set a precedent for AI identity-usage rights, potentially requiring explicit consent and compensation for digital likenesses in AI products.