Anthropic's Claude Managed Agents can now "dream," sort of
Claude agents now reflect on past work to store key memories across teams.
At its Code with Claude conference, Anthropic unveiled 'dreaming' for its Managed Agents—a scheduled process that analyzes past sessions and memory stores across agents to identify and store important patterns. Unlike compaction, which cleans up single-conversation context windows, dreaming aggregates insights from multiple agents working on a project over hours. It surfaces recurring mistakes, workflow preferences, and shared patterns that a single agent can't see. Users can opt for automatic memory curation or manually review changes. The feature is in research preview for Managed Agents on the Claude Platform, with developers able to request access.
In addition, Anthropic announced that two previously previewed features—outcomes and multi-agent orchestration—are now more widely available. To address user frustration over compute capacity, the company is also doubling the five-hour usage limits for Pro and Max subscribers. These moves aim to make Claude more viable for complex, long-running enterprise workflows that require multi-agent collaboration and persistent memory.
- Dreaming is a scheduled process that reviews past sessions and memory stores to curate important memories across agents.
- Works around LLM context-window limits by surfacing patterns invisible to any single agent, such as recurring mistakes and shared workflow preferences.
- Usage limits for Pro and Max subscribers doubled; outcomes and multi-agent orchestration now more widely available.
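Anthropic has not published dreaming's internals, but the idea described above—a scheduled job that reviews sessions from many agents and promotes patterns no single agent could see into shared memory—can be sketched in a few lines. Everything here (`Session`, `dream`, the threshold parameter) is illustrative and not Anthropic's API:

```python
from dataclasses import dataclass

# Hypothetical session record; names are illustrative, not Anthropic's API.
@dataclass
class Session:
    agent_id: str
    observations: list  # mistakes or workflow notes logged during the session

def dream(sessions, min_agents=2):
    """Scheduled cross-agent review: promote observations reported by
    multiple distinct agents into a shared memory store."""
    seen_by = {}  # observation -> set of agent ids that reported it
    for s in sessions:
        for obs in s.observations:
            seen_by.setdefault(obs, set()).add(s.agent_id)
    # A pattern invisible to any single agent becomes a shared memory
    # once enough distinct agents have independently hit it.
    return sorted(obs for obs, agents in seen_by.items()
                  if len(agents) >= min_agents)

sessions = [
    Session("a1", ["flaky test in CI", "prefers squash merges"]),
    Session("a2", ["flaky test in CI"]),
    Session("a3", ["typo in config"]),
]
print(dream(sessions))  # ['flaky test in CI']
```

The key design point the sketch captures: each agent's observations stay local until the scheduled pass aggregates them, which is how recurring mistakes that look like one-offs to any individual agent get recognized and stored.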
Why It Matters
Enterprise teams can now run complex, multi-agent projects without losing critical context, boosting long-term AI collaboration.