Ollama v0.21.0
The popular local AI platform now integrates Hermes, Nous Research's agent that learns from interactions and creates new skills automatically.
Ollama, the massively popular open-source platform for running and managing large language models (LLMs) locally, has launched its v0.21.0 update. The headline feature is the official integration of 'Hermes,' an AI agent developed by Nous Research. Unlike standard models, Hermes is designed as a self-improving agent that can automatically create new skills tailored to a user's specific workflows. This makes it particularly powerful for complex, iterative tasks in research and engineering, where it can learn from interactions and build custom tools.
The integration allows users to launch Hermes directly via the Ollama CLI with the command `ollama launch hermes`. This move signifies Ollama's expansion beyond just serving static models into the realm of interactive, agentic AI. Other updates in v0.21.0 include fixes for the `--yes` flag behavior in the OpenClaw integration, inline configuration for OpenCode, and a new integration for GitHub Copilot CLI, making the platform more versatile for developer toolchains.
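Based on the release notes above, starting the agent is a single CLI invocation; a minimal sketch (the version check is a standard Ollama flag, and availability of the `launch` subcommand depends on having v0.21.0 or later installed):

```shell
# Confirm the installed Ollama version; Hermes support ships with v0.21.0
ollama --version

# Launch the Hermes agent, per the v0.21.0 release notes
ollama launch hermes
```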
By bringing a self-improving agent into its ecosystem, Ollama is empowering users to automate and enhance complex local AI workflows. This positions the tool not just as a model runner, but as a platform for intelligent, adaptive assistants that evolve with a user's needs, all while maintaining the privacy and control of local execution.
- Integrates 'Hermes' from Nous Research, a self-improving AI agent that creates new skills automatically.
- Launched via the CLI command `ollama launch hermes`, targeting research and engineering tasks.
- Part of Ollama v0.21.0, which also includes a `--yes` flag fix for OpenClaw, inline OpenCode configuration, and a new GitHub Copilot CLI integration.
Why It Matters
Brings adaptive, agentic AI capabilities to local, private environments, transforming Ollama from a model runner into an intelligent workflow platform.