v0.16.4
Latest update integrates Google's Gemini 3.1 Flash-Lite model and fixes critical GPU memory management issues.
ComfyUI, the open-source node-based interface for Stable Diffusion workflows, has rolled out version 0.16.4 with significant feature additions and critical bug fixes. The most notable addition is integration of Google's Gemini 3.1 Flash-Lite model into the platform's LLM node, giving users access to Google's lightweight but capable language model directly within their image generation pipelines. This enables more sophisticated text-to-image prompting and AI-assisted workflow creation.
Alongside the Gemini integration, the update introduces a new Math Expression node powered by simpleeval, allowing users to perform calculations and create dynamic parameters within their node graphs. This opens up possibilities for procedural generation and parameter-controlled animations. The release also addresses several stability issues, including a fix for fp16 audio encoder models and, most importantly, corrections to VRAM management that prevent crashes when dynamic VRAM allocation is enabled.
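The core idea behind a simpleeval-powered node is sandboxed expression evaluation: user-supplied arithmetic is parsed and walked rather than passed to Python's `eval`. simpleeval is a third-party library, but the same technique can be sketched with the standard library's `ast` module (the `safe_eval` helper and the `width`/`scale` names below are illustrative, not ComfyUI's actual implementation):

```python
import ast
import operator

# Whitelist of binary operators, mirroring the kind of safe
# default set a library like simpleeval provides.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
}

def safe_eval(expr, names=None):
    """Evaluate a simple arithmetic expression without exec/eval."""
    names = names or {}

    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.Name) and node.id in names:
            return names[node.id]
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -walk(node.operand)
        raise ValueError(f"disallowed expression: {ast.dump(node)}")

    return walk(ast.parse(expr, mode="eval"))

# Hypothetical node-graph use: derive an output width from upstream values.
print(safe_eval("width * scale", {"width": 512, "scale": 1.5}))  # → 768.0
```

Feeding named inputs into expressions like this is what lets a graph parameter (a resolution, a frame index, a strength slider) drive other node values dynamically.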
The update represents continued rapid development for the ComfyUI project, which now boasts over 105k GitHub stars. The VRAM-handling fixes are particularly important for users working with limited GPU memory or complex workflows that push hardware limits. These improvements make the platform more stable for professional production use while expanding its model ecosystem beyond image generation alone.
- Adds Google's Gemini 3.1 Flash-Lite model to LLM node for enhanced text capabilities
- Introduces Math Expression node with simpleeval for dynamic parameter calculations
- Fixes critical VRAM management bugs that caused crashes with dynamic allocation enabled
Why It Matters
Makes AI image generation more stable for complex workflows and expands creative possibilities with new calculation tools and model integrations.