Developer Tools

v0.17.4

The latest update brings powerful open-source multimodal models to developers' local machines.

Deep Dive

Ollama, the open-source project enabling developers to run large language models locally, has rolled out version 0.17.4. This incremental release focuses on expanding the ecosystem of available models by adding official support for two significant open-source families: the Qwen 3.5 series from Alibaba and the LFM2 family from Liquid AI. The Qwen 3.5 models are noted for their multimodal capabilities, allowing them to process both text and images, which opens up new local development possibilities for vision-language tasks. Meanwhile, the LFM2 models are engineered specifically for efficient on-device deployment, a critical consideration for edge computing and privacy-focused applications.
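To make the vision-language possibility concrete, here is a minimal sketch of how a client might build a request for Ollama's REST chat endpoint (served at `localhost:11434` by default), which accepts images as base64 strings on a message. The model tag `qwen3.5` is an assumption for illustration; check the Ollama model library for the exact tag.

```python
import base64
import json

# Hypothetical model tag -- verify the exact name in the Ollama model library.
MODEL = "qwen3.5"

def vision_payload(prompt: str, image_bytes: bytes) -> dict:
    """Build a request body for Ollama's /api/chat endpoint.

    Vision input is passed as a list of base64-encoded images on the
    user message, alongside the text prompt.
    """
    return {
        "model": MODEL,
        "messages": [
            {
                "role": "user",
                "content": prompt,
                "images": [base64.b64encode(image_bytes).decode("ascii")],
            }
        ],
        "stream": False,
    }

# Stand-in bytes; in practice, read a real image file.
payload = vision_payload("Describe this image.", b"\x89PNG...")
print(json.dumps(payload)[:40])
```

In practice the payload would be POSTed to `http://localhost:11434/api/chat` with any HTTP client; because inference runs locally, the image never leaves the machine, which is the privacy advantage the release leans on.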

The technical highlight is the inclusion of LFM2-24B-A2B, a 24-billion parameter model whose "A2B" suffix follows the common sparse-model naming convention, indicating that only around 2 billion parameters are active per token, which is how it maintains inference efficiency despite its scale. This addresses a key challenge in local AI: balancing model capability with resource constraints. The update also resolves an issue with tool call indices during parallel tool calls, improving reliability for developers building AI agents that execute multiple functions simultaneously. For the Ollama community, which has garnered over 164k GitHub stars, this release continues the project's mission to democratize access to cutting-edge AI by making powerful models runnable on standard developer hardware without cloud dependencies.
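For context on why the index fix matters, the sketch below shows how an agent-style client might dispatch several tool calls returned in one chat response: each call carries an index identifying its slot, and the client relies on that index to map results back to the right call. The tool names and the exact response shape here are illustrative, not taken from Ollama's API reference.

```python
# Hypothetical tool registry; these lambdas stand in for an agent's real tools.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
    "get_time": lambda tz: f"12:00 in {tz}",
}

def dispatch(tool_calls: list[dict]) -> list[dict]:
    """Run each requested tool and pair its result with the call's index.

    When a model issues several calls in parallel, consistent indices are
    what let the client attribute each result to the correct call -- the
    bookkeeping the v0.17.4 fix concerns.
    """
    results = []
    for call in tool_calls:
        fn = TOOLS[call["function"]["name"]]
        output = fn(**call["function"]["arguments"])
        results.append({"index": call["index"], "content": output})
    return results

# Illustrative fragment shaped like a parallel tool-call reply.
calls = [
    {"index": 0, "function": {"name": "get_weather", "arguments": {"city": "Oslo"}}},
    {"index": 1, "function": {"name": "get_time", "arguments": {"tz": "UTC"}}},
]
results = dispatch(calls)
```

If the indices were wrong or duplicated, as in the bug this release fixes, the weather result could be attributed to the time call and vice versa, which is why the fix matters for multi-function agents.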

Key Points
  • Adds support for Qwen 3.5, a family of open-source multimodal AI models for text and vision tasks.
  • Introduces the LFM2 model family, including the 24-billion parameter LFM2-24B-A2B, optimized for efficient on-device inference.
  • Fixes tool call indices in parallel tool calls, improving stability for developers building multi-function AI agents.

Why It Matters

Expands the toolbox for local AI development, giving developers more powerful and efficient open-source models to build privacy-preserving and cost-effective applications.