Ollama v0.18.3
Microsoft's VS Code now lets developers select any Ollama model via GitHub Copilot for local AI coding.
Ollama, the popular open-source platform for running large language models locally, has released version 0.18.3. The standout feature is a new integration that connects Ollama directly to Microsoft's Visual Studio Code through GitHub Copilot. This means developers who have Ollama installed can now select any local or cloud-hosted model from Ollama's extensive library—including models like Llama 3, Mistral, or CodeLlama—as their AI assistant directly within the VS Code interface. The integration streamlines the workflow for AI-powered coding, eliminating the need to switch between applications or manually configure API endpoints.
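In practical terms, any model already present in the local Ollama library becomes selectable inside VS Code. A minimal setup sketch, assuming Ollama is installed locally and using llama3 purely as an example model name:

```shell
# Download a model into the local Ollama library (model name is an example).
ollama pull llama3

# List locally available models; these are the models a Copilot
# model picker backed by Ollama can offer.
ollama list
```

Once a model is pulled, no API keys or endpoint configuration are needed, since Ollama serves models from the local machine.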
Beyond the VS Code integration, the update brings technical improvements to functionality and security. GLM parser improvements refine how the system extracts tool calls, a key capability for AI agents that execute code or interact with external systems. OpenClaw integration improvements add better gateway checks, strengthening security when models connect to external services. These updates follow Ollama's rapid growth: the project now has over 166,000 GitHub stars, reflecting its importance in the local AI ecosystem, where privacy, cost control, and offline capability are priorities.
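A tool call reaches client code as structured JSON that must be parsed and dispatched, which is why parser reliability matters for agents. The sketch below is illustrative only: the `get_weather` tool and the hand-written response dict are assumptions showing the general shape of a chat reply containing a tool call, not output from the release itself.

```python
def get_weather(city: str) -> str:
    # Hypothetical tool the model can invoke; not part of Ollama.
    return f"Sunny in {city}"

# Registry mapping tool names the model may emit to local functions.
TOOLS = {"get_weather": get_weather}

# Hand-written example in the general shape of a chat reply that
# requests a tool call (arguments given as a JSON object).
response = {
    "message": {
        "role": "assistant",
        "content": "",
        "tool_calls": [
            {"function": {"name": "get_weather",
                          "arguments": {"city": "Paris"}}}
        ],
    }
}

def dispatch_tool_calls(response: dict) -> list[str]:
    """Run each tool the model asked for and collect the results."""
    results = []
    for call in response["message"].get("tool_calls", []):
        fn = call["function"]
        tool = TOOLS[fn["name"]]          # look up the requested tool
        results.append(tool(**fn["arguments"]))  # invoke with model-supplied args
    return results

print(dispatch_tool_calls(response))  # -> ['Sunny in Paris']
```

If the model's output is malformed, the tool-call block never parses into this structure, so improvements to the parser directly improve how often agents can act on a model's intent.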
- Direct VS Code integration via GitHub Copilot lets developers select any Ollama model (local or cloud) as their AI assistant within the IDE.
- Includes GLM parser improvements for more reliable AI tool calls and OpenClaw integration enhancements for better gateway security checks.
- Strengthens Ollama's position in the local AI ecosystem, which prioritizes developer privacy, cost control, and offline model execution.
Why It Matters
This bridges the gap between local AI experimentation and professional development workflows, making powerful, private models a seamless part of the coding process.