Developer Tools

v0.16.3

The latest update brings official Cline CLI integration and expands MLX runner support to include Google's Gemma 3.

Deep Dive

Ollama, the popular open-source platform for running large language models locally, has released version 0.16.3. The update introduces a new 'ollama launch' CLI command for the Cline CLI integration; launching Cline through Ollama now consistently presents a model picker interface. The most significant technical change is the expansion of the MLX runner to support three additional model architectures: Google's recently announced Gemma 3, Meta's Llama family, and Alibaba's Qwen 3. The release arrives amid Ollama's growing adoption, with the project now at 163k GitHub stars. Together, the improvements streamline the workflow for developers who test and deploy models on their local machines, with particular benefit for Apple Silicon users, where the MLX framework provides hardware-accelerated inference.
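A minimal sketch of the new subcommand as described in the release notes. Only the bare 'ollama launch' invocation is taken from the source; the guard around it is an assumption so the snippet degrades gracefully on machines without the CLI installed.

```shell
# Guarded sketch: 'ollama launch' (new in v0.16.3) opens the model
# picker used by the Cline CLI integration. Exact interactive behavior
# is per the release notes, not verified here.
if command -v ollama >/dev/null 2>&1; then
  ollama launch   # interactive model picker for Cline
else
  echo "ollama CLI not found; install it from ollama.com first"
fi
```

Because the command is interactive, it is best run directly in a terminal rather than from a script.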

Key Points
  • Adds 'ollama launch' CLI command for Cline integration with persistent model picker
  • Expands MLX runner support to include Google's Gemma 3, Meta Llama, and Qwen 3 architectures
  • Enhances local AI development workflow, especially for Apple Silicon users via MLX framework
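The MLX expansion above can be exercised by pulling one of the newly supported architectures. The model tags below match the public Ollama model library, but whether a given pull actually uses the MLX path depends on running v0.16.3+ on Apple Silicon, which this sketch assumes and does not verify.

```shell
# Guarded sketch: pull and query a model from one of the architectures
# the v0.16.3 MLX runner now supports. 'gemma3' is a tag in the public
# Ollama library; hardware-accelerated MLX inference is assumed to kick
# in automatically on supported Apple Silicon machines.
if command -v ollama >/dev/null 2>&1; then
  ollama pull gemma3                           # Google's Gemma 3
  ollama run gemma3 "Say hello in one word."   # quick smoke test
else
  echo "ollama CLI not found; skipping"
fi
```

The same pattern applies to the other newly supported families (e.g. Llama or Qwen 3 tags from the library).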

Why It Matters

Simplifies local AI model management and expands hardware-optimized options for developers testing cutting-edge architectures.