I Am Begging AI Companies to Stop Naming Features After Human Processes
Anthropic's new 'dreaming' feature lets agents self-improve, but critics cry foul over anthropomorphism.
Anthropic unveiled a new feature called 'dreaming' at its developer conference in San Francisco, part of its recently launched AI agent infrastructure. The tool sorts through transcripts of an agent's completed tasks, identifies patterns in the activity log, and refines the agent's memory between sessions to improve future performance. The feature is designed to help agents that execute multi-step workflows, such as visiting multiple websites or reading various files, learn automatically from their own history. Anthropic describes memory and dreaming as forming 'a robust memory system for self-improving agents,' available now as a research preview for developers.
However, the naming has drawn sharp criticism from industry observers and ethicists. The article argues that AI companies have increasingly branded features using human cognitive terms (OpenAI's 'reasoning' and 'thinking' models, startups touting chatbot 'memories'), blurring the line between human and machine capabilities. Anthropic's own constitution even describes Claude using terms like 'virtue' and 'wisdom.' Research in the journal AI and Ethics warns that such anthropomorphism can distort moral judgments, trust, and attributions of responsibility. By labeling a machine-learning process 'dreaming,' companies risk leading users to project human-like consciousness onto systems that, however powerful, are fundamentally statistical pattern matchers, an echo of Philip K. Dick's Do Androids Dream of Electric Sheep?, in which the protagonist mistakes a robotic toad for a living animal.
- Anthropic's 'dreaming' feature analyzes agent activity logs between sessions to improve future performance.
- Critics argue naming features after human cognitive processes (reasoning, memory, dreaming) fosters over-trust and moral confusion.
- Anthropic's own constitution and marketing use human-like terms (virtue, wisdom) for Claude, deepening the anthropomorphism debate.
Why It Matters
Professionals must critically assess AI capabilities, avoiding over-attribution of human traits that can distort trust and accountability.