LTX Distilled 1.1 is the new king!
The new model, tested on 3,000+ videos, dramatically improves visual consistency and prompt adherence.
LTX Studio has officially released its Distilled 1.1 model, marking a significant upgrade in its AI video generation pipeline. The company made a decisive shift by dropping support for its previous 'davinci MagiHuman' model to focus entirely on this new distilled architecture. The development was validated through extensive real-world testing, with the model generating and evaluating over 3,000 videos via A/B testing directly within the LTX Studio application. This data-driven approach allowed the team to pinpoint and solve persistent user pain points.
The results show a model that tackles the most common flaws in AI video. Distilled 1.1 eliminates excessive camera blur at the start of clips and removes random, jarring B-roll-style scene transitions. It demonstrates superior prompt adherence and generates finer visual detail. Critically, it shows major improvements in object and human consistency, reducing anatomical errors such as broken hands and legs while producing more physically plausible character motion. The update extends to audio as well, cutting down on glitchy sound artifacts and improving overall sound quality for a more polished, professional final output.
- Tested on 3,000+ videos via in-app A/B testing, replacing the old 'davinci MagiHuman' model.
- Solves key visual flaws: eliminates excessive blur, fixes broken human anatomy (hands/legs), and reduces jarring scene transitions.
- Improves multi-modal coherence with better prompt adherence, finer details, more sensible motion, and enhanced audio quality.
Why It Matters
It brings AI video generation closer to professional usability by eliminating the inconsistent, glitchy output that has plagued earlier models.