Image & Video

Trained my first Klein 9B LoRA on Strix Halo + Linux

A photographer created a custom Klein 9B LoRA with just 55 personal photos in 6 hours, sharing the full open-source workflow.

Deep Dive

Photographer mikkoph successfully trained his first custom AI style model, creating a personal Klein 9B LoRA (Low-Rank Adaptation) using just 55 of his own photographs. The experiment demonstrates how creators can train specialized AI models that capture their unique artistic style while maintaining copyright control over training data.
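For readers new to the technique, the core idea behind a LoRA can be sketched in a few lines of NumPy. This is an illustration of Low-Rank Adaptation in general, not the actual Klein 9B architecture: instead of fine-tuning a full weight matrix, training learns two small matrices whose product forms a low-rank update. The dimensions here are made up for illustration; only the rank (16) matches the run described below.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, rank = 1024, 1024, 16               # rank 16 matches this run
W = rng.standard_normal((d_out, d_in))           # frozen base weight
A = rng.standard_normal((rank, d_in)) * 0.01     # trainable down-projection
B = np.zeros((d_out, rank))                      # trainable up-projection (zero-init)

scale = 1.0  # "application strength" when the LoRA is applied at inference
W_eff = W + scale * (B @ A)

# Parameter savings: the adapter stores far fewer values than W itself.
full_params = W.size            # 1,048,576
lora_params = A.size + B.size   # 32,768 — a 32x reduction at this size
print(full_params, lora_params)
```

Because `B` starts at zero, the adapter initially leaves the base model untouched; training gradually shapes the update toward the target style.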

Technically, mikkoph used SimpleTuner with an AMD ROCm nightly build (7.12) on consumer hardware, running training for 1000 steps over approximately 6 hours. Key parameters included a learning rate of 4e-4 (accidentally higher than intended), rank 16, and EMA enabled, with no quantization. The model was trained using only the trigger phrase "by mikkoph" rather than detailed captions, following SimpleTuner's Flow 2 setting, which is optimized for detail capture rather than broad style transfer.
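The reported settings can be collected into a plain summary, with one piece of back-of-the-envelope arithmetic. The key names below are illustrative, not SimpleTuner's actual configuration schema:

```python
# Reported hyperparameters for this run (key names are illustrative only,
# not SimpleTuner's real config keys).
run = {
    "base_model": "Klein 9B",
    "train_steps": 1000,
    "learning_rate": 4e-4,        # accidentally higher than intended
    "lora_rank": 16,
    "ema": True,
    "quantization": None,
    "trigger_phrase": "by mikkoph",
    "dataset_size": 55,           # personal photos
}

# Rough throughput implied by the figures above: 1000 steps in ~6 hours.
seconds_per_step = 6 * 3600 / run["train_steps"]
print(seconds_per_step)  # ~21.6 seconds per step
```

That pace, roughly 22 seconds per step, is what an overnight run on consumer AMD hardware looks like at this scale.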

The resulting LoRA shows strong performance in text-to-image generation but limited effectiveness in image-to-image applications, struggling unless the source images are studio shots. mikkoph found that the checkpoint at 600 steps performed better than the final 1000-step version, which required a higher application strength to produce noticeable effects. The model integrates well with other style LoRAs, though its influence becomes more subtle in combination.
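The dilution effect when stacking adapters can be sketched numerically. The example below uses random rank-16 updates as stand-ins for real trained adapters: each LoRA's update is blended with a per-adapter strength, so the personal style becomes one component among several.

```python
import numpy as np

d, rank = 512, 16

def lora_delta(seed):
    """Random rank-16 update, a stand-in for a trained style adapter."""
    r = np.random.default_rng(seed)
    A = r.standard_normal((rank, d))
    B = r.standard_normal((d, rank))
    return B @ A

personal = lora_delta(2)   # e.g. the "by mikkoph" style adapter
other = lora_delta(3)      # some second style LoRA

alone = 1.0 * personal                    # applied on its own at full strength
combined = 0.6 * personal + 0.6 * other   # hypothetical multi-LoRA strengths

# The personal style's share of the total weight update shrinks when combined.
share_alone = np.linalg.norm(1.0 * personal) / np.linalg.norm(alone)
share_combined = np.linalg.norm(0.6 * personal) / np.linalg.norm(combined)
print(share_alone, share_combined)
```

Since two independent adapters rarely align, the combined update's magnitude grows faster than either adapter's contribution, which is consistent with the observation that the style reads as more subtle when mixed with other LoRAs.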

This experiment represents a significant step toward democratized AI model training, showing that creators with moderate technical skills can develop personalized AI tools using their own copyrighted content. The open-source workflow and shared HuggingFace model (mikkoph/mikkoph-style) provide a template for other photographers and artists to follow, potentially reducing reliance on generalized AI models that may not capture individual artistic nuances.

Key Points
  • Trained Klein 9B LoRA using only 55 personal photos with full copyright ownership
  • 6-hour training time on AMD ROCm hardware using SimpleTuner with 4e-4 learning rate and rank 16
  • Model excels at text-to-image but limited in image-to-image applications, works best with studio shots

Why It Matters

Enables creators to train personalized AI models with their copyrighted content, democratizing custom AI development for individual artistic styles.