Providing a Working Solution to Z-Image Base Training
A Reddit user's configuration, confirmed by multiple testers on a specific OneTrainer fork, resolves convergence issues.
A community researcher has published a verified training configuration for Z-Image Base (ZiB) models. The setup requires a specific fork of OneTrainer and pairs Min_SNR_Gamma=5 with the Prodigy_adv optimizer to resolve the convergence problems reported with ZiB training. Users can now train stable ZiB LoRAs, though these are only compatible with specific distills such as RedCraft, not the newer Z-Image Turbo (ZiT). The published JSON config also includes settings tuned for consumer GPUs such as the RTX 3090.
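For readers unfamiliar with the Min_SNR_Gamma setting: it refers to Min-SNR-γ loss weighting, which clamps each timestep's loss weight at min(SNR_t, γ)/SNR_t so that high-SNR (low-noise) timesteps no longer dominate the gradient. The sketch below is an illustration of that weighting idea only, not code from the OneTrainer fork; the cosine schedule used here is an assumption for the example.

```python
import numpy as np

def min_snr_gamma_weights(alphas_cumprod, gamma=5.0):
    """Per-timestep loss weights for Min-SNR-gamma (epsilon-prediction).

    SNR_t = alpha_bar_t / (1 - alpha_bar_t)
    weight_t = min(SNR_t, gamma) / SNR_t

    High-SNR (low-noise) timesteps are down-weighted so no single noise
    level dominates the gradient, which is what helps convergence.
    """
    snr = alphas_cumprod / (1.0 - alphas_cumprod)
    return np.minimum(snr, gamma) / snr

# Illustrative cosine-style alpha_bar schedule over 1000 steps
# (hypothetical; the real schedule depends on the model/trainer).
T = 1000
t = np.arange(T)
alphas_cumprod = np.cos((t / T) * np.pi / 2) ** 2
alphas_cumprod = np.clip(alphas_cumprod, 1e-6, 1 - 1e-6)

w = min_snr_gamma_weights(alphas_cumprod, gamma=5.0)
```

With γ=5, the value the config uses, weights are exactly 1 wherever SNR_t ≤ 5 (the noisier timesteps) and shrink toward 0 for the near-clean timesteps, flattening the effective loss landscape across noise levels.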
Why It Matters
Enables stable fine-tuning of a powerful open-source image model, unlocking custom AI art generation without commercial platform fees.