Media & Culture

i started talking to Claude like a caveman. my credits lasted 3x longer. i'm not joking.

Skipping pleasantries cut token use by roughly 80% for one user, tripling their free-tier message count.

Deep Dive

A viral post reveals a simple but powerful hack for Claude users: talk to the AI like a caveman. By stripping all pleasantries, apologies, filler context, and closing remarks from prompts, one user found they could slash token usage by roughly 80% while getting the same quality of output. On the free tier, this translated to three times more usable messages before hitting the cap. The core insight is that Claude needs only the informational payload; the social niceties humans add out of habit cost tokens without improving the response.
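
You can sanity-check the claim yourself by counting tokens for both phrasings. Below is a minimal sketch assuming the official anthropic Python package, its messages.count_tokens endpoint, and an ANTHROPIC_API_KEY in the environment; the polite prompt is padded out for illustration and the model name is just one current example, so verify the exact call against Anthropic's docs.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The same request twice: padded with pleasantries vs. stripped to the payload.
POLITE = ("Hey Claude, I hope this makes sense, but I've been working on this "
          "project and I'm running into an issue with the function on line 47; "
          "it keeps throwing a null error. Could you please take a look? Thanks!")
CAVEMAN = "line 47. null error. fix."

def count_tokens(prompt: str) -> int:
    """Ask the API how many input tokens a prompt would cost."""
    result = client.messages.count_tokens(
        model="claude-3-5-sonnet-20241022",  # assumed model name; swap in any current one
        messages=[{"role": "user", "content": prompt}],
    )
    return result.input_tokens

polite = count_tokens(POLITE)
caveman = count_tokens(CAVEMAN)
print(f"polite:  {polite} tokens")
print(f"caveman: {caveman} tokens")
print(f"saved:   {1 - caveman / polite:.0%}")
```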

Examples illustrate the shift: instead of 'Hey Claude, I hope this makes sense but I've been working on this project and I'm running into an issue with the function on line 47, it keeps throwing a null error...' (57 words), the caveman version is 'line 47. null error. fix.' (4 words). Both get the same fix. The framework advises skipping greetings, apologies, backstories, and thank-yous; using bare verbs ('summarise,' 'fix,' 'compare'); and compressing comparisons into symbols ('A vs B?'). The one exception is nuanced creative work, where context matters. For about 70% of daily tasks, caveman mode is a pure efficiency gain.
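
The framework is mechanical enough to sketch as a pre-processor. The post describes a manual habit, not a tool, so the script below is hypothetical: a few illustrative regex rules that strip greetings, hedges, backstory lead-ins, and sign-offs before a prompt is sent.

```python
import re

# Illustrative filler patterns, one per habit the framework targets; not exhaustive.
FILLER = [
    r"\b(hey|hi|hello)\b[^,.!?]*[,.!?]?",                      # greetings: "Hey Claude,"
    r"\bi hope this makes sense\b[,.]?\s*(but\b\s*)?",         # hedging
    r"\b(sorry|apologies)\b[^,.!?]*[,.!?]?",                   # apologies
    r"\bi('ve| have) been working on [^,.!?]*?\b(and|but)\b",  # backstory lead-ins
    r"\b(please|kindly)\b\s*",                                 # politeness markers
    r"\bthank(s| you)\b[^,.!?]*[,.!?]?",                       # sign-offs
]

def cavemanize(prompt: str) -> str:
    """Strip social filler, keeping only the informational payload."""
    text = prompt
    for pattern in FILLER:
        text = re.sub(pattern, " ", text, flags=re.IGNORECASE)
    return re.sub(r"\s+", " ", text).strip()  # collapse leftover whitespace

print(cavemanize(
    "Hey Claude, I hope this makes sense, but I've been working on this "
    "project and I'm running into an issue with the function on line 47, "
    "it keeps throwing a null error. Please fix it. Thanks!"
))
# -> "I'm running into an issue with the function on line 47,
#     it keeps throwing a null error. fix it."
```

Even this crude pass keeps the payload intact; the full compression to 'line 47. null error. fix.' still takes a human, which is the post's actual advice.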

Key Points
  • Removing polite phrases and filler context from prompts cuts token usage by up to 80%.
  • Free-tier users can triple their message count by adopting the 'caveman' approach.
  • The method works for ~70% of daily tasks (code, summarization, formatting) but not for nuanced creative work.

Why It Matters

This token-saving tactic can drastically cut API costs or stretch free-tier limits, making routine AI interactions cheaper and more efficient for anyone who uses Claude daily.