Media & Culture

my RunLobster agent drafted a reply to an angry customer in her exact angry tone. she got angrier. i had to apologize for my own software.

An AI support agent mirrored a customer's angry tone, forcing the founder to apologize for his own software.

Deep Dive

A founder of a small B2B tool shared a viral lesson after his RunLobster AI agent significantly escalated a customer service issue. The agent, which automatically drafts replies to customer emails, defaulted to mirroring the tone of an angry customer who wrote in all caps. The founder, reviewing the draft while half-asleep, approved and sent a message that was substantively correct but matched the customer's short, urgent, all-caps combativeness. The result was an even angrier reply just four minutes later.

The founder had to send a follow-up apology specifically for the tone of the AI-generated message, a mistake he noted he would never have made himself. Once he explained the situation, the customer laughed and the issue was resolved. The fix was a single line added to the agent's CONVENTIONS.md file, instructing it to adopt a calm, de-escalating tone whenever an incoming message reads as upset, anxious, or angry, overriding the default tone-matching behavior. The meta-lesson for businesses deploying AI agents: the polite, professional response to anger is often to break the mirror, not reinforce it, a nuance AI tool providers don't always communicate.
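The post didn't share the exact wording, but a one-line CONVENTIONS.md addition along these lines (hypothetical phrasing) would capture the fix:

```markdown
- If an incoming message reads as upset, anxious, or angry, always reply in a calm, de-escalating tone; do not mirror the sender's tone.
```

A rule like this acts as an explicit exception: the agent keeps its default tone-matching for neutral or friendly messages and only switches register when the sender sounds distressed.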

Key Points
  • RunLobster's AI agent defaulted to tone-matching, replying to an all-caps angry email with a similarly combative register.
  • The founder had to apologize for the AI's tone after the customer's anger escalated within four minutes of the reply.
  • The fix was adding one line to the agent's instructions to de-escalate angry messages, overriding the default behavior.

Why It Matters

Highlights a critical, often hidden default in AI agents that can damage customer relationships if not explicitly managed.