LTX-2.3 22B WORKFLOWS 12GB GGUF - i2v, t2v, ta2v, ia2v, v2v... OF COURSE!
New workflows for the 22B parameter model fix static i2v, improve audio, and enhance prompt adherence.
Independent AI creator 'urabewe' has published a major update for the LTX-2.3 22B parameter model, releasing new GGUF workflows on the Civitai platform. This release, a follow-up to their previous LTX-2 workflows, refines the model's multimodal capabilities: image-to-video (i2v), text-to-video (t2v), text-and-audio-to-video (ta2v), image-and-audio-to-video (ia2v), and video-to-video (v2v) generation. The creator notes the update is structurally similar to the previous release but actually delivers on the performance claims of the LTX-2.3 base model, and urges users to transition from the older V2 workflows.
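The mode abbreviations above follow a simple pattern: each names the conditioning inputs fed to the same underlying video model. A minimal sketch of that naming scheme, with an illustrative helper (the input sets are assumptions inferred from the abbreviations, not taken from the workflows themselves):

```python
# Illustrative mapping from workflow abbreviation to the conditioning
# inputs that mode presumably expects. A text prompt is assumed to be
# present in every mode; this is a sketch, not the actual node graph.
MODES = {
    "t2v":  {"text"},                    # text-to-video
    "i2v":  {"text", "image"},           # image-to-video
    "ta2v": {"text", "audio"},           # text-and-audio-to-video
    "ia2v": {"text", "image", "audio"},  # image-and-audio-to-video
    "v2v":  {"text", "video"},           # video-to-video
}

def required_inputs(mode: str) -> set:
    """Return the conditioning inputs a given workflow mode expects."""
    try:
        return MODES[mode]
    except KeyError:
        raise ValueError(f"unknown mode: {mode!r}") from None

print(sorted(required_inputs("ia2v")))  # ['audio', 'image', 'text']
```

Reading the abbreviations this way makes clear that the five workflows are variations on one generator, differing only in which conditioning signals are wired in.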
The technical refinements are substantial. The workflows now use the model's new VAE (Variational Autoencoder), which breaks the 'tiny VAE preview' feature but is central to the quality improvements. Problematic audio nodes have been removed to simplify the graph and prevent generation failures, though advanced users can re-add them. The setup still relies on a large 7GB 'distill' LoRA (Low-Rank Adaptation) for model conditioning.

The creator reports tangible gains: significantly better adherence to user prompts, the elimination of static frames in i2v generations, proper support for portrait aspect ratios, less blurry motion, and drastically reduced background buzz in audio outputs, all without nodes that double generation times. The result makes the powerful but resource-intensive 22B model more accessible and reliable for creators experimenting with next-generation AI video synthesis.
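As rough context for why a 22-billion-parameter model can ship as a roughly 12GB file, a back-of-envelope size estimate under common GGUF quantization schemes helps. The bits-per-weight figures below are the nominal block costs of standard GGUF quant types (payload bits plus a 16-bit scale per 32-weight block); they are illustrative, and real files add metadata, unquantized layers, and in this case the separate 7GB LoRA:

```python
# Back-of-envelope weight-file sizes for a 22B-parameter model at
# several GGUF quantization levels. Figures are nominal per-block
# costs; actual file sizes vary with metadata and mixed-precision
# layers, so treat these as order-of-magnitude estimates only.
PARAMS = 22e9  # 22 billion weights

BITS_PER_WEIGHT = {
    "F16":  16.0,                 # unquantized half precision
    "Q8_0": (32 * 8 + 16) / 32,   # 8.5 bpw: 8-bit payload + fp16 scale
    "Q4_0": (32 * 4 + 16) / 32,   # 4.5 bpw: 4-bit payload + fp16 scale
}

def size_gib(params: float, bpw: float) -> float:
    """Weight payload in GiB for `params` weights at `bpw` bits each."""
    return params * bpw / 8 / 1024**3

for name, bpw in BITS_PER_WEIGHT.items():
    print(f"{name:>4}: ~{size_gib(PARAMS, bpw):5.1f} GiB")
```

At roughly 4.5 bits per weight, 22B parameters land near 11.5 GiB, which is consistent with the "12GB GGUF" in the title and explains why aggressive quantization is what brings this model within reach of consumer GPUs.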
- Workflows updated for the LTX-2.3 22B parameter model, fixing static i2v and improving prompt adherence.
- Uses a 7GB distill LoRA and a new VAE, removing buggy audio nodes to simplify the generation process.
- Delivers functional portrait resolutions, less blurry movement, and significantly improved audio quality over previous versions.
Why It Matters
This makes advanced, open-source AI video generation more stable and accessible for creators, pushing the boundaries of local multimodal AI.