v0.15.5
Your local AI just got smarter with new coding and document-reading models.
Deep Dive
Ollama's latest update introduces two new models: Qwen3-Coder-Next for agentic coding workflows and GLM-OCR for complex document understanding. It also improves the `ollama launch` command with sub-agent support for planning and research tasks, and now sets context limits automatically based on available VRAM. The release fixes several API bugs and streamlines sign-in by opening a browser window, improving the overall developer experience of running models locally.
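As a rough sketch of trying the new pieces from the CLI (the model tags below are assumed from the release notes above and may differ in the model library; check `ollama list` or the library page for exact names):

```shell
# Pull the new models locally (tags assumed from this release's notes)
ollama pull qwen3-coder-next
ollama pull glm-ocr

# Run an interactive session with the coding model
ollama run qwen3-coder-next

# Start a coding agent backed by a local model; this release adds
# sub-agent support for planning and research tasks to this command
ollama launch
```

These commands require the Ollama daemon to be running and enough VRAM for the chosen model; with this release, the context limit is sized automatically from available VRAM rather than set by hand.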
Why It Matters
Running specialized models for coding and document understanding locally becomes easier and more efficient, with context sizing handled automatically instead of tuned by hand.