LTX2.3 in Ostris AI toolkit on a 5090, training done in 7 hours ... I went the Thanos way and said fine ... I'll do it myself
Custom LoRA training on a 5090 now takes just 7 hours with optimized settings.
Training LTX2.3 LoRAs with the Ostris AI toolkit on an NVIDIA RTX 5090 takes about 7 hours across 3-4 phases with the settings described here. The first phase runs 600 steps with a LoRA rank of 48, gradient accumulation set to 2, and differential guidance at 3. Training uses 25-frame clips at 512x512 resolution, with dataset repeats adjusted so the effective total is about 100 clips. Text-embedding caching is enabled to save time, and samples are generated at 49 frames with a guidance scale of 10 to check quality. The second phase extends to 1200 steps, further refining accuracy. The workflow is designed to avoid the temporal collapse and inaccuracies common with default settings, offering a reliable recipe for custom LoRA training on high-end hardware.
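For orientation, here is a minimal sketch that collects the phase settings above into a Python dict. The key names are illustrative only, not the toolkit's actual config schema (the Ostris AI toolkit uses its own YAML config files); the values simply mirror the numbers quoted in the post.

```python
# Illustrative summary of the settings described above.
# Key names are hypothetical; they do not reflect the toolkit's real config keys.
phase_1 = {
    "steps": 600,                   # first-phase length
    "lora_rank": 48,                # LoRA rank
    "gradient_accumulation": 2,     # gradient accumulation steps
    "differential_guidance": 3,     # differential guidance setting
    "clip_frames": 25,              # frames per training clip
    "resolution": (512, 512),       # training resolution
    "target_clip_count": 100,       # repeats adjusted to total ~100 clips
    "cache_text_embeddings": True,  # cache text embeddings to save time
    "sample_frames": 49,            # frames per generated sample
    "sample_guidance_scale": 10,    # guidance scale for sample generation
}

# The second phase keeps the same settings but extends training to 1200 steps.
phase_2 = {**phase_1, "steps": 1200}
```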
This approach leverages the RTX 5090's VRAM to maximize throughput, with the first phase taking about 3.5 hours. The toolkit's low-VRAM option is available for other cards, though training will be slower. The author shares specific settings to avoid common pitfalls, such as setting steps to 700 to account for toolkit quirks. The result is a LoRA that closely matches the source material, with samples checked at each phase to confirm quality. This offers a practical route for professionals who need fast, accurate custom models without the usual frustration of long training runs or poor results.
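As an aside, one way to pick the dataset repeat count so a small clip set reaches roughly 100 effective clips is shown below; the helper function and the example clip count are assumptions for illustration, not something the toolkit provides.

```python
import math

def repeats_for_target(num_clips: int, target: int = 100) -> int:
    """Smallest repeat count so num_clips * repeats reaches the target (assumed heuristic)."""
    return math.ceil(target / num_clips)

# Example: with 12 source clips, 9 repeats gives 108 effective clips per epoch.
print(repeats_for_target(12))  # -> 9
```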
- LoRA training on RTX 5090 completes in 7 hours across 3-4 phases.
- Key settings: LoRA rank 48, 600 steps in the first phase (extended to 1200 in the second), differential guidance at 3, 512x512 resolution.
- Uses 25-frame clips, cached text embeddings, and guidance scale 10 for high-quality samples.
Why It Matters
Enables fast, accurate custom AI model training on high-end hardware, saving professionals time and frustration.