Models & Releases

Liquid AI's LFM2.5-350M: Tiny Edge Model Masters Tool Use & On-Device Agents!

This 350M-parameter edge model masters complex tool calling and runs autonomous agents directly on your device.

Deep Dive

Liquid AI has released LFM2.5-350M, a compact 350-million-parameter language model engineered specifically for edge deployment. Unlike typical small models focused on basic chat, LFM2.5-350M is built with advanced capabilities for tool use and agentic workflows. This means it can follow instructions to call external tools, such as querying a database, invoking a calculator, or fetching a weather forecast from an API, and execute multi-step tasks autonomously. By running these agents directly on-device, it eliminates the need for constant cloud communication, a breakthrough for privacy-sensitive applications and environments with poor connectivity.
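To make the tool-calling idea concrete, here is a minimal sketch of the loop such a model sits inside: the model emits a structured tool call, the runtime dispatches it to a local function, and the result comes back. Everything here is illustrative, the `fake_model` stub, the tool names, and the JSON call format are assumptions, not LFM2.5-350M's actual API.

```python
import json

# Hypothetical local tool registry; the tools here are toy stand-ins.
TOOLS = {
    "calculator": lambda expr: eval(expr, {"__builtins__": {}}),   # toy math evaluator
    "weather": lambda city: {"city": city, "temp_c": 21},          # stubbed weather lookup
}

def fake_model(prompt: str) -> str:
    """Stand-in for the on-device model: returns a JSON tool call."""
    if "weather" in prompt.lower():
        return json.dumps({"tool": "weather", "args": ["Berlin"]})
    return json.dumps({"tool": "calculator", "args": ["2 + 3 * 4"]})

def run_agent(prompt: str):
    """One step of a tool-use loop: parse the model's tool call, dispatch it locally."""
    call = json.loads(fake_model(prompt))
    tool = TOOLS[call["tool"]]
    return tool(*call["args"])

print(run_agent("What is 2 + 3 * 4?"))    # 14
print(run_agent("Weather in Berlin?"))    # {'city': 'Berlin', 'temp_c': 21}
```

The key design point is that both the model inference and the tool execution happen on the same device, so no prompt or tool result ever leaves it.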

The model's architecture is optimized for efficiency, allowing it to perform these complex functions with a relatively tiny parameter count. This makes it feasible to deploy on consumer hardware like smartphones, embedded systems in cars, or industrial IoT sensors. Developers can now build applications where an AI agent on a phone can, for example, autonomously gather local sensor data, analyze it using on-device tools, and trigger actions, all without sending personal data to a server. This shifts the paradigm from cloud-dependent AI to distributed, private, low-latency intelligent processing at the source of data generation.
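The gather-analyze-act pattern described above can be sketched in a few lines. The function names (`read_sensor`, `analyze`, `trigger_action`) and the threshold value are hypothetical placeholders for whatever on-device signals and actuators an application exposes; nothing here reflects a real LFM2.5-350M integration.

```python
def read_sensor() -> float:
    """Stand-in for a local sensor reading (e.g. temperature in °C)."""
    return 72.5

def analyze(reading: float, threshold: float = 70.0) -> bool:
    """On-device analysis step: decide whether to act, with no cloud round-trip."""
    return reading > threshold

def trigger_action() -> str:
    """Local action taken when analysis says so (e.g. turn a fan on)."""
    return "fan_on"

def agent_step() -> str:
    # gather -> analyze -> act, entirely on the device
    reading = read_sensor()
    return trigger_action() if analyze(reading) else "noop"

print(agent_step())  # fan_on
```

In a real deployment the `analyze` step is where a small model like this one earns its keep, interpreting messier inputs (free-form text, multi-sensor context) than a fixed threshold can handle.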

Key Points
  • A 350M-parameter model specifically architected for tool calling and running autonomous agent workflows.
  • Enables complex, multi-step tasks to be executed entirely on-device, removing cloud dependency for privacy and speed.
  • Targets deployment on edge hardware like smartphones, IoT devices, and laptops for real-time, local AI processing.

Why It Matters

It enables powerful, private AI assistants and automation to run anywhere, breaking dependence on cloud servers and data centers.