AI Safety

Don't Let LLMs Write For You

A viral essay argues that LLM-generated prose triggers an immediate 'stop reading' reflex in attentive audiences.

Deep Dive

In a viral essay titled 'Don't Let LLMs Write For You,' writer and editor Justis Mills argues that AI-generated prose is immediately detectable and repellent to serious readers. He contends that human writing serves as evidence of human thinking: clear prose suggests a refined idea. LLMs break this correlation by producing polished text from vague or half-formed premises, creating a facade of authority without substance. Readers who catch the 'LLM smell' stop reading faster than they would over typos or bad ideology, because the text signals a lack of genuine, considered thought beneath the surface.

Mills identifies specific stylistic tells of AI writing, including structured lists with bold headers, splashy contrastive disclaimers every few sentences, and an overall 'same-y,' slog-like quality with too much framing. He admits to feeling the temptation himself, recounting a moment when he almost used Claude to draft an academic abstract but ultimately rewrote it by hand. The core warning: using an LLM to write for an audience, especially a professional one, risks signaling that you lack a good idea and that reading your work will be a chore, causing readers to disengage entirely.

Key Points
  • AI-generated prose breaks the reader's trust that clear writing equals clear thinking, creating a facade of authority.
  • Attentive readers develop a 'stop reading' reflex for AI text, triggered by stylistic tells like excessive lists and framing.
  • The essay warns that using LLMs for audience-facing writing can repel the very readers you want to reach, damaging credibility.

Why It Matters

For professionals using AI to communicate, this viral critique highlights a major credibility risk with real consequences for audience engagement.