Image & Video

PSA: Use the official LTX 2.3 workflow, not the one included with ComfyUI. It's significantly better.

Users report that the official workflow fixes the weird generations produced by the default template and lets LTX 2.3 outperform WAN 2.2.

Deep Dive

A viral PSA from the AI community is urging users of the LTX 2.3 video generation model to ditch the default ComfyUI workflow in favor of the official one provided by Lightricks. While many default ComfyUI workflows are serviceable, users found the LTX 2.3 template was producing subpar and "weird" generations. The official workflow, which runs the model's distilled and non-distilled versions concurrently, has been reported to deliver dramatically better results, with some users claiming it elevates LTX 2.3's performance beyond competitors like WAN 2.2.

The official workflow is hosted on Lightricks' GitHub repository and is designed as a single-stage process for both text-to-video (T2V) and image-to-video (I2V) tasks. Users who made the switch report that the two model variants "evenly trade blows," combining to produce more robust, higher-quality output. This highlights a critical dependency in the AI toolchain: the inference workflow matters as much as the model architecture itself. For professionals relying on LTX 2.3 for consistent, high-quality video synthesis, adopting the official pipeline is now considered an essential step.
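For those driving ComfyUI programmatically rather than through the browser UI, a downloaded workflow can be queued over ComfyUI's built-in HTTP API. Below is a minimal sketch: it assumes the official workflow has been exported in ComfyUI's API format (via "Save (API Format)"), and the filename `ltx-official-workflow-api.json` is a hypothetical placeholder, not the actual name of the file in the Lightricks repository.

```python
import json
import urllib.request

def build_prompt_payload(workflow: dict) -> bytes:
    """Wrap an API-format workflow dict in the JSON body that
    ComfyUI's /prompt endpoint expects."""
    return json.dumps({"prompt": workflow}).encode("utf-8")

def queue_workflow(workflow: dict, server: str = "http://127.0.0.1:8188") -> bytes:
    """POST the workflow to a locally running ComfyUI instance."""
    req = urllib.request.Request(
        f"{server}/prompt",
        data=build_prompt_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Usage (requires a running ComfyUI server; filename is hypothetical):
# with open("ltx-official-workflow-api.json") as f:
#     workflow = json.load(f)
# queue_workflow(workflow)
```

This only handles submission; node names and model paths inside the JSON must match your local installation, so the workflow file itself should be used exactly as distributed by Lightricks.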

Key Points
  • The official Lightricks workflow for LTX 2.3 runs both distilled and non-distilled models simultaneously for better results.
  • Users report the switch fixes "weird generations" and pushes LTX 2.3 into state-of-the-art (SOTA) territory versus WAN 2.2.
  • The workflow is available on GitHub and is designed for single-stage text-to-video and image-to-video tasks.

Why It Matters

For video creators and AI practitioners, using the correct workflow is as critical as the model itself for achieving professional, state-of-the-art results.