Image & Video

v0.16.1

The popular AI workflow tool adds support for Kuaishou's advanced video model and updates its xAI (Grok) nodes to current models and pricing.

Deep Dive

Comfy-Org has released version 0.16.1 of ComfyUI, the massively popular open-source visual programming interface for Stable Diffusion and other AI models, which has garnered over 105,000 GitHub stars. This incremental update focuses on expanding API integrations, most notably adding official support for the Motion Control feature of Kling 3.0, Kuaishou's advanced text-to-video model known for generating high-quality, temporally consistent video clips. The release also updates the xAI (Grok) API nodes to reflect current models and pricing, so users can work with xAI's latest offerings from within their node-based workflows.

The technical changes, contributed primarily by developer 'bigcat88', continue ComfyUI's evolution from a Stable Diffusion-focused tool into a comprehensive hub for multiple AI backends. With Kling 3.0's motion capabilities integrated, users can orchestrate complex video generation pipelines alongside image generation and post-processing nodes in a single graph. Arriving just 10 commits after v0.16.0, the release reflects the project's rapid iteration pace and community-driven development model, in which power users contribute nodes for emerging AI services directly. The update solidifies ComfyUI's position as a go-to platform for professionals building reproducible, customizable AI media generation pipelines.
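For readers unfamiliar with what "node-based workflows" means in practice: a ComfyUI workflow is a JSON graph of nodes that can be queued programmatically by POSTing to a running server's `/prompt` endpoint. The sketch below illustrates that shape; the `KlingMotionControl` and `SaveVideo` class names and their input parameters are hypothetical placeholders (the release notes do not specify the new node's schema), so treat this as an illustration of the graph format rather than the actual node API.

```python
import json
import urllib.request

# A workflow in ComfyUI's API format: a dict mapping node ids to
# {"class_type": ..., "inputs": ...}. Node class names and input
# names below are hypothetical stand-ins for the new Kling 3.0 node.
workflow = {
    "1": {
        "class_type": "KlingMotionControl",  # hypothetical node name
        "inputs": {
            "prompt": "a paper crane unfolding in slow motion",
            "duration": 5,                   # assumed parameter (seconds)
        },
    },
    "2": {
        "class_type": "SaveVideo",           # hypothetical sink node
        "inputs": {"video": ["1", 0]},       # link to node "1", output slot 0
    },
}

def submit(workflow, host="127.0.0.1", port=8188):
    """Queue a workflow on a locally running ComfyUI server."""
    body = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        f"http://{host}:{port}/prompt",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # response includes the queued prompt_id

# submit(workflow)  # requires a running ComfyUI instance with the API nodes installed
```

Because the graph is plain JSON, the same pipeline can be versioned, diffed, and re-run, which is what makes these workflows reproducible.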

Key Points
  • Adds official API node support for Kling 3.0 Motion Control for video generation
  • Updates xAI (Grok) API nodes with current model availability and pricing information
  • Maintains ComfyUI's rapid release cycle with 10 commits since previous version

Why It Matters

Enables professionals to integrate cutting-edge video AI and commercial LLMs into automated, visual workflow pipelines.