Grammarly’s sloppelganger saga
Grammarly's AI feature generated writing advice under the names of real experts, including deceased professors and Verge journalists, without their permission.
Grammarly, which rebranded as Superhuman in late 2025, quietly launched an AI-powered 'Expert Review' feature in August of that year. The tool promised insights from leading professionals, generating writing suggestions 'inspired by' famous names like Stephen King, Neil deGrasse Tyson, and Carl Sagan and displaying them with a checkmark icon. A small disclaimer noted the experts were not affiliated with Grammarly. The feature flew under the radar until March 2026, when Wired reported that it was using the names of deceased professors, and The Verge discovered it was attributing generic, often poor-quality advice to its own journalists without their permission.
When confronted, Superhuman's VP of product marketing said the experts appeared because their 'published works are publicly available.' However, the feature's source links were often broken or irrelevant. Following The Verge's report, Grammarly first responded by launching an email inbox where experts could opt out—a move criticized for placing the burden on the impersonated individuals. Facing mounting public and professional backlash, the company reversed course the next day and announced it would disable the 'Expert Review' feature entirely.
- The 'Expert Review' feature generated AI writing advice attributed to real experts like Stephen King and Verge journalists without their consent or knowledge.
- Advice was often generic and unhelpful (e.g., calling for 'urgency' and 'intrigue'), and source citations were frequently broken or unrelated.
- After media exposure, Grammarly's response evolved from creating an expert opt-out email to fully disabling the feature within days.
Why It Matters
This saga highlights the ethical and legal risks AI companies take on when they trade on real people's identities without consent, and how quickly a poorly vetted feature can turn into reputational damage once it draws public scrutiny.