AI Safety

Can LLM chat be less prolix?

Developers share frustration with GPT, Claude, and Gemini's wordy responses despite custom prompts.

Deep Dive

A viral post on LessWrong titled 'Can LLM chat be less prolix?' has sparked widespread discussion among AI power users frustrated with unnecessarily verbose responses from major language models. Developer jbash detailed their struggle to get concise answers from OpenAI's GPT, Anthropic's Claude, and Google's Gemini, despite extensive customization prompts requesting brevity, no unsolicited code, and no praise phrases. The post resonated with hundreds of users who report the same pattern: even simple technical questions draw lengthy responses padded with background information, repetition, and distracting suggestions.
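
For readers unfamiliar with the kind of customization the post describes, here is a minimal sketch of a brevity-focused system prompt sent through the OpenAI Python client. The prompt wording, model name, and example question are illustrative assumptions, not jbash's actual settings.

```python
# Sketch of a brevity-focused system prompt, the kind of customization
# the LessWrong post says the models routinely ignore.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative prompt text, not quoted from the thread.
BREVITY_PROMPT = (
    "Answer in as few words as possible. Assume the reader has a CS degree "
    "and 30+ years of experience. Do not include background information, "
    "unsolicited code, praise phrases, or follow-up suggestions."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": BREVITY_PROMPT},
        {"role": "user", "content": "What does POSIX say about renaming a file onto itself?"},
    ],
    max_tokens=150,  # hard cap on output length
)
print(response.choices[0].message.content)
```

Note that a hard max_tokens cap only truncates output rather than making the model more concise, which is why users lean on prompt-level instructions in the first place; the post's central complaint is that those instructions are routinely ignored.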

Technical users report that despite detailed system prompts specifying their expertise level (CS degree, 30+ years of experience) and requesting direct answers, the models continue to bloviate. Even GPT-5.2 reportedly told the user 'You've done all you can. You are screwed' when asked how to improve prompts for brevity. The discussion surfaced workarounds, such as using programming-focused tools like Cursor with custom commands, but it also highlights a fundamental tension between user preferences and what appears to be intentionally trained verbosity in consumer-facing AI products. This raises the question of whether verbose responses earn more positive feedback from general users during preference training, creating a reinforcement loop that frustrates technical professionals.
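
One way the Cursor workaround is commonly set up is through a project-level rules file that the editor prepends to its model prompts. The sketch below uses Cursor's '.cursorrules' filename convention, but the rule text is an illustrative assumption, not a command set quoted from the thread.

```text
# .cursorrules -- project-level instructions Cursor prepends to its
# model prompts. Rule text is illustrative, not from the discussion.
Answer tersely; no preamble, no closing summaries, no praise phrases.
Do not generate code unless explicitly asked.
Assume an expert reader; skip background explanations and
unsolicited suggestions.
```

Because tools like Cursor inject these rules on every request, they tend to be more persistent than per-conversation instructions, which may be why the thread's participants found them a more reliable lever than chat-level custom prompts.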

Key Points
  • Users report GPT, Claude, and Gemini ignore detailed brevity prompts despite specifying expertise level
  • Even GPT-5.2 reportedly said 'You've done all you can' when asked about fixing verbosity
  • Technical professionals waste time parsing lengthy responses to simple questions

Why It Matters

Verbose AI responses waste developer time and reduce productivity for technical users who need concise answers.