LLMs are making everyone sound the same
New research finds that heavy LLM users' essays are 70% more likely to be neutral on topics that call for a stance, even when writers notice their voice disappearing.
A new study from MIT and DeepMind titled "How LLMs Distort Our Written Language" reveals a startling homogenizing effect of AI writing tools. The heaviest LLM users produced essays that were 70% more likely to be neutral on topics where they were supposed to take a stance. Participants themselves reported that their writing felt less creative and "not in their voice," yet they kept using the tools. The paper also analyzed real-world data and found that 21% of peer reviews at a major AI conference were AI-generated; those reviews scored papers a full point lower on average and placed less weight on research clarity and significance.
More concerning still, the researchers could not stop LLMs from altering meaning, even with the explicit instruction to "only fix grammar, don't change meaning." The tools changed meaning every single time. This suggests the problem extends beyond writing style to how people form thoughts in the first place, with expression consistently nudged toward neutral, safe positions. The 70% increase in neutrality represents not just a stylistic shift but measurable opinion dilution, subtle enough that writers often don't notice it until it is quantified. That has real-world consequences for research evaluation, publication decisions, and authentic human expression. One way such meaning drift could be checked automatically is sketched after the key findings below.
Key Findings
- Heavy LLM users produced essays 70% more likely to be neutral on topics requiring a stance
- 21% of peer reviews at an AI conference were AI-generated, scoring papers 1 point lower on average
- LLMs changed meaning 100% of the time even when explicitly instructed to only fix grammar
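The paper's evaluation code isn't described here, so the following is only a minimal sketch of how meaning drift from a "grammar-only" edit might be detected: embed the text before and after the edit and compare cosine similarity. Everything in it is an assumption for illustration, not the study's method: the sentence-transformers library, the all-MiniLM-L6-v2 model, the 0.95 threshold, and the hardcoded "edited" string standing in for a real LLM response.

```python
# Minimal sketch, not the study's method: flag "grammar-only" edits
# that drift semantically by comparing sentence embeddings.
from sentence_transformers import SentenceTransformer, util

def meaning_preserved(original: str, edited: str,
                      model: SentenceTransformer,
                      threshold: float = 0.95) -> bool:
    """Return True if the edit stays semantically close to the original.

    The 0.95 cosine-similarity cutoff is an arbitrary illustration,
    not a value taken from the paper.
    """
    emb = model.encode([original, edited])
    return util.cos_sim(emb[0], emb[1]).item() >= threshold

if __name__ == "__main__":
    model = SentenceTransformer("all-MiniLM-L6-v2")
    original = "I strongly oppose the proposal because it ignores long-term costs."
    # In a real run, `edited` would come from an LLM prompted with
    # "only fix grammar, don't change meaning"; hardcoded here so the
    # sketch is self-contained. Note how the stance has been softened.
    edited = "There are some concerns that the proposal may overlook costs."
    print("meaning preserved:", meaning_preserved(original, edited, model))
```

On a pair like this, the similarity score would likely fall below a strict 0.95 cutoff, the kind of signal that could catch stance-softening edits at scale.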
Why It Matters
AI may be subtly standardizing human thought and expression, affecting everything from research publication to authentic communication.