Testing LTX-Video 2.3 — 11 Models, PainterLTXV2 Workflow
A deep-dive test of 11 LTX-Video 2.3 models reveals performance trade-offs and workflow challenges.
A detailed community benchmark of Lightricks' LTX-Video 2.3 model suite reveals the practical challenges and trade-offs in cutting-edge AI video generation. The tester evaluated 11 different model variants, including the full 43GB `ltx-2.3-22b-dev.safetensors` from Lightricks and more efficient quantized versions like the 20.2GB `ltx-2.3-22b-dev-nvfp4.safetensors`. The test environment was a ComfyUI setup powered by an NVIDIA RTX 5060 Ti GPU with 16GB VRAM, aiming to push the limits of what's possible for creators without enterprise-grade hardware.
Even with this setup, the tester reported significant hurdles. Official workflows from Lightricks and others were described as "too bloated and unclear," prompting a switch to a simplified workflow built on the community-developed `ComfyUI-PainterLTXV2`. A key finding was performance inconsistency: while standard models ran predictably, GGUF-quantized models from Unsloth exhibited a strange bug in which upscale iteration times ballooned on subsequent runs, drastically inflating total generation time. The tester concluded they haven't "managed to get truly clean results yet," underscoring the gap between published demos and reproducible, high-quality output for end users navigating complex model ecosystems.
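A run-to-run slowdown like the one reported for the GGUF models is easy to confirm empirically by timing each iteration across consecutive runs and comparing medians. The sketch below is illustrative only; `time_iterations`, `detect_slowdown`, and the `factor` threshold are hypothetical helpers, not part of ComfyUI or any Lightricks tooling.

```python
import statistics
import time


def time_iterations(step_fn, n_iters):
    """Run step_fn n_iters times and return each iteration's wall-clock duration."""
    durations = []
    for _ in range(n_iters):
        start = time.perf_counter()
        step_fn()  # one upscale/denoise step, e.g. a sampler callback
        durations.append(time.perf_counter() - start)
    return durations


def detect_slowdown(first_run, later_run, factor=2.0):
    """Flag the reported anomaly: a later run whose median iteration time
    exceeds the first run's median by `factor` (an arbitrary threshold)."""
    return statistics.median(later_run) >= factor * statistics.median(first_run)
```

Wrapping the per-step callback of a generation run this way makes it straightforward to log whether the second and third runs really take multiples of the first run's per-iteration time, which is the pattern the tester described.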
- Tested 11 variants of the 22B-parameter LTX-Video 2.3 model, with file sizes ranging from 20.2GB to 43GB.
- Found GGUF-quantized models had a critical bug causing upscale iteration times to multiply on subsequent runs.
- Achieving the high-quality results seen in online demos remains challenging with current community workflows.
Why It Matters
High-quality open video models are advancing, but real-world usability and consistent performance remain significant hurdles for creators.