Media & Culture

Grammarly says it will stop using AI to clone experts without permission

The feature used public data to mimic real journalists and editors without their consent.

Deep Dive

Grammarly, the AI writing assistant owned by Superhuman, has abruptly shut down its controversial 'Expert Review' feature after public backlash. The tool, launched in August, used third-party large language models (LLMs) to analyze publicly available information and generate writing suggestions presented as 'inspired by' the work of specific, real-world experts, including prominent journalists like The Verge's editor-in-chief, none of whom were consulted or compensated. The company initially responded by creating an opt-out email address for affected writers, but it has now conceded that approach was insufficient and disabled the feature entirely.

In a public apology on LinkedIn, Superhuman CEO Shishir Mehrotra said the company 'clearly missed the mark' and committed to a complete rethink. The new plan is to 'reimagine' the feature around a core principle of expert consent and control. Mehrotra outlined a vision in which experts could choose to participate, shape how their knowledge is represented, and even control a potential business model, turning the platform from a single AI sidekick into a 'whole team' of specialized agents. The episode highlights the growing ethical and legal tension in the AI industry over the use of personal data, writing style, and intellectual property to power commercial models without explicit permission.

Key Points
  • Grammarly's 'Expert Review' agent cloned writing styles from real experts using public data and third-party LLMs without consent.
  • Superhuman CEO Shishir Mehrotra apologized, stating the company 'missed the mark' and has fully disabled the feature for a redesign.
  • The new approach promises experts control over participation, representation, and business models in future AI integrations.

Why It Matters

Sets a precedent for AI ethics, pressuring companies to obtain consent before using a person's style or data in commercial models.