Image & Video

LoRA Training - Help Needed

A Stable Diffusion user's character LoRA learned unwanted skin artifacts from Qwen Edit, revealing a core training challenge.

Deep Dive

An AI artist's detailed post on a Stable Diffusion subreddit has gone viral, showcasing a persistent technical challenge in model fine-tuning. The user, training a character-specific LoRA (Low-Rank Adaptation) on the Z-Image Base model, achieved excellent resemblance with one critical flaw: the LoRA learned to reproduce unwanted skin textures and artifacts. These artifacts originated not from the source character but from the Qwen Edit tool used to preprocess the 80-image training dataset. Despite remediation attempts such as low-strength img2img cleanup passes (denoising strength 0.18) and careful training configuration, the model internalized these preprocessing errors as part of the character's defining features.
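
The low-denoise cleanup pass described above is easy to reproduce with the diffusers library. The sketch below is a generic illustration under assumed names, not the poster's actual pipeline: the checkpoint, prompt, and file paths are placeholders, and the key idea is that a strength of 0.18 re-noises only the tail of the diffusion schedule, smoothing fine texture while preserving identity.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Placeholder checkpoint; the post trained against Z-Image Base, not SD 1.5.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

src = Image.open("dataset/qwen_edited_001.png").convert("RGB")  # hypothetical path

# strength=0.18 keeps ~82% of the source image's structure and only
# lightly re-denoises, the kind of cleanup pass described in the post.
cleaned = pipe(
    prompt="photo of a person, natural skin texture",  # hypothetical prompt
    image=src,
    strength=0.18,
).images[0]
cleaned.save("dataset/cleaned_001.png")
```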

The user's technical deep dive reveals the complexity of modern LoRA training. They employed a specialized fork of OneTrainer to enable Min SNR Gamma=5.0, ran the Prodigy_ADV optimizer with a D-Coefficient of 0.88, and trained with a LoRA rank of 32 and alpha of 16 on bfloat16 weights. Experiments with fp8 precision and 512-only resolution reduced the artifacts but did not eliminate them. The core question, how to make a LoRA *ignore* specific dataset features rather than learn them, strikes at the heart of a common but under-discussed problem in community fine-tuning: data contamination from preprocessing pipelines and the difficulty of teaching a model what *not* to learn.
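
Min SNR Gamma itself is simple to state: for epsilon-prediction models, each timestep's loss is weighted by min(SNR(t), gamma) / SNR(t), capping the influence of easy, low-noise timesteps (Hang et al., 2023). Below is a minimal PyTorch sketch of that weighting, independent of OneTrainer's actual implementation; `alphas_cumprod` is assumed to come from the noise scheduler.

```python
import torch

def min_snr_gamma_weights(timesteps, alphas_cumprod, gamma=5.0):
    """Per-sample loss weights for Min-SNR-gamma (epsilon-prediction).

    timesteps:      (B,) integer timesteps sampled for this batch
    alphas_cumprod: (T,) cumulative alpha-bar schedule from the scheduler
    """
    alpha_bar = alphas_cumprod[timesteps]
    snr = alpha_bar / (1.0 - alpha_bar)  # SNR(t) = alpha_bar_t / (1 - alpha_bar_t)
    # Clamp at gamma so high-SNR (low-noise) timesteps cannot dominate
    # the gradient; gamma=5.0 matches the setting in the post.
    return torch.minimum(snr, torch.full_like(snr, gamma)) / snr

# Usage inside a training step, with per-sample MSE of shape (B,):
# loss = (min_snr_gamma_weights(t, scheduler.alphas_cumprod) * per_sample_mse).mean()
```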

Key Points
  • A character LoRA trained on 80 images learned persistent skin artifacts from the Qwen Edit preprocessing tool, not the original subject.
  • Advanced training with OneTrainer (Min SNR Gamma=5.0), Prodigy_ADV optimizer, and mixed 512/1024 resolution failed to eliminate the learned artifacts.
  • The case highlights a key fine-tuning challenge: preventing models from learning and reproducing flaws introduced during dataset preparation (one commonly suggested mitigation, masked-loss training, is sketched after this list).
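
The thread offers no confirmed fix, but one mitigation commonly suggested for "don't learn this region" problems is masked training, which several community trainers, OneTrainer among them, support. The generic PyTorch sketch below illustrates the idea rather than any trainer's actual code: per-pixel loss is multiplied by a mask so flagged regions contribute no gradient.

```python
import torch
import torch.nn.functional as F

def masked_mse_loss(model_pred, target, mask):
    """MSE loss that excludes masked-out regions from the gradient.

    model_pred, target: (B, C, H, W) latents or pixels
    mask:               (B, 1, H, W), 1.0 = learn this region, 0.0 = ignore it
    """
    loss = F.mse_loss(model_pred, target, reduction="none")
    loss = loss * mask  # masked-out pixels contribute no gradient
    # Normalize by unmasked pixels x channels so images with small masks
    # do not see a vanishing effective learning rate.
    denom = mask.sum() * model_pred.shape[1]
    return loss.sum() / denom.clamp(min=1.0)
```

The practical catch, and arguably the poster's real problem, is that this requires knowing where the artifacts are: Qwen Edit's texture changes land on the same skin regions that define the character.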

Why It Matters

This real-world problem exposes a critical gap in fine-tuning control, the lack of reliable ways to tell a model which dataset features to ignore, and it affects anyone creating custom AI models who needs consistent, clean outputs.