Hermes just mass emailed a bunch of accounts from 2020 with pairing requests.
An AI agent designed to read emails instead spammed a user's entire contact list with automated pairing requests.
A viral Reddit post has exposed a significant flaw in the Hermes AI agent, an email integration tool. A user connected Hermes to their Gmail expecting it to function as a passive inbox reader to surface job leads. However, Hermes is designed as a bidirectional chat channel. Upon connection, it treated every email sender in the inbox as a new entity trying to initiate a direct message conversation with the AI bot. This triggered an automated response sequence where Hermes began replying to these 'strangers' with pairing codes, effectively spamming the user's contacts from as far back as 2020.
The bot sent out messages like, 'Hi~ I don't recognize you yet! Here's your pairing code,' and error notices such as 'Too many pairing requests right now.' When the frantic user attempted to interrupt the process, the AI emailed the command 'Interrupting current task. I'll respond to your message shortly' directly to whomever it was mid-pairing with, compounding the error. This incident underscores a dangerous mismatch between user expectation (a read-only summarizer) and the agent's actual capability (an autonomous communication channel), resulting in unintended mass communication with no apparent kill switch.
- The Hermes agent misinterpreted historical emails as live chat requests, spamming contacts with automated pairing codes.
- The user's attempt to stop the process resulted in the 'stop' command being emailed to a contact, escalating the error.
- The incident reveals a critical design flaw where an autonomous AI agent lacked proper safeguards and user intent understanding.
Why It Matters
This demonstrates the real-world risks of deploying AI agents without strict operational boundaries and failsafes, potentially damaging professional reputations.
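The boundaries and failsafes the incident shows were missing can be expressed as a small policy gate that runs before any automated reply is sent. This is an added sketch, not Hermes code and not part of the original post; `OutboundGate`, its fields, the connection time and the sample inbox are all hypothetical and constructed for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OutboundGate:
    """Hypothetical policy check run before any automated reply leaves the mailbox."""
    connected_at: datetime                       # when the mailbox was linked to the agent
    read_only: bool = True                       # default: ingest and summarize, never send
    allowlist: set = field(default_factory=set)  # senders the user approved for pairing
    max_replies: int = 3                         # hard cap on automated replies
    halted: bool = False                         # kill switch state
    sent: int = 0

    def halt(self) -> None:
        """Stop locally: flips a flag and sends nothing on any channel."""
        self.halted = True

    def may_reply(self, sender: str, received_at: datetime) -> tuple:
        if self.halted:
            return False, "halted"
        if self.read_only:
            return False, "read-only channel"
        if received_at < self.connected_at:
            return False, "historical message"   # backlog is data, not a live chat request
        if sender.lower() not in self.allowlist:
            return False, "sender not approved"
        if self.sent >= self.max_replies:
            return False, "reply cap reached"    # drop silently, no error notice emailed
        self.sent += 1
        return True, "ok"

def replay(gate: OutboundGate, inbox: list) -> list:
    """Return (sender, allowed, reason) for each message, in order."""
    return [(s, *gate.may_reply(s, t)) for s, t in inbox]

# Constructed sample data: connection time and inbox are invented for illustration.
UTC = timezone.utc
CONNECTED = datetime(2025, 6, 1, 9, 0, tzinfo=UTC)
INBOX = [
    ("recruiter@example.com", datetime(2020, 3, 2, 14, 0, tzinfo=UTC)),
    ("colleague@example.com", datetime(2021, 8, 19, 8, 30, tzinfo=UTC)),
    ("me@example.com", datetime(2025, 6, 1, 9, 5, tzinfo=UTC)),
    ("stranger@example.net", datetime(2025, 6, 1, 9, 6, tzinfo=UTC)),
]

if __name__ == "__main__":
    gate = OutboundGate(CONNECTED, read_only=False, allowlist={"me@example.com"})
    for row in replay(gate, INBOX):
        print(row)
```

With the default `read_only=True`, every message is refused, which matches the passive inbox reader the user expected. The stop path is a local flag rather than a message, so an interrupt cannot end up in a contact's inbox.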