Has anyone made anything decent with ltx2?
The AI video community is abandoning LTX2 for WAN 2.2. Here's why.
Deep Dive
A viral discussion reveals widespread disappointment with the LTX2 AI video model. Users report that it excels only at static 'talking head' shots, struggles with character movement, and produces 'strange plastic' faces in image-to-video generation. Audio quality is inconsistent, and community-made LoRA models are scarce. The consensus is a mass return to the older but more reliable WAN 2.2 for cinematic work, calling LTX2's readiness into question.
Why It Matters
This signals a potential flop for a major AI video tool, forcing creators to revert to older technology and stalling momentum in the AI video space.