Image & Video

Dialed in the workflow thanks to Claude: 30 steps, CFG 3, distilled LoRA at strength 0.6, res_2s sampler on the first pass, Euler ancestral on the latent pass with the full (not distilled) model, in ComfyUI.

A custom 30-step workflow using Claude for parameter tuning achieves precise image control with specific LoRA and sampler settings.

Deep Dive

A Reddit user known as u/RainbowUnicorns has gone viral in the AI image generation community by sharing a meticulously tuned workflow for ComfyUI, the popular node-based interface for running Stable Diffusion models. The workflow, developed with the assistance of Anthropic's Claude AI, specifies a 30-step generation process with a Classifier-Free Guidance (CFG) scale set to 3. A key component is the use of a distilled Low-Rank Adaptation (LoRA) model—a small, fine-tuned add-on—applied at a strength of 0.6 to steer the style without overpowering the base model.
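The reported settings can be collected into a small configuration sketch. This is purely illustrative: the class and field names below are hypothetical and do not correspond to actual ComfyUI node inputs.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class PassConfig:
    """One sampling pass of the workflow (illustrative names, not ComfyUI fields)."""
    model: str                              # checkpoint variant used for this pass
    sampler: str                            # sampler name as given in the post
    cfg: float                              # classifier-free guidance scale
    steps: Optional[int] = None             # None = step count not specified
    lora_strength: Optional[float] = None   # None = no LoRA applied on this pass

# Values as reported in the post.
first_pass = PassConfig(model="distilled", sampler="res_2s",
                        cfg=3.0, steps=30, lora_strength=0.6)
# The post does not state a separate step count for the refinement pass.
second_pass = PassConfig(model="full", sampler="euler_ancestral", cfg=3.0)
```

Freezing the dataclass makes each tuned parameter set immutable, which suits the "litmus test" benchmarking style described below: a given config can be rerun unchanged against the same prompts.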

The technical core of the workflow is a novel two-pass sampler strategy. The first pass uses a custom 'res_2s' sampler on a distilled version of the model, followed by a second 'latent pass' using the Euler ancestral sampler on the full, non-distilled model. This hybrid approach aims to combine the speed and coherence benefits of a distilled model with the final detail and quality of the full model. The user notes they use consistent 'litmus test' prompts to benchmark performance, suggesting a methodical, repeatable approach to parameter optimization that moves beyond guesswork.
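To make the second pass concrete, here is a minimal numpy sketch of a single Euler ancestral step in the k-diffusion style: the step toward the next noise level is split into a deterministic Euler update plus freshly injected noise. The `denoise` callable stands in for the full model; this is a toy illustration, not the actual ComfyUI implementation.

```python
import numpy as np

def euler_ancestral_step(x, sigma, sigma_next, denoise, rng):
    """One Euler-ancestral step (k-diffusion-style), toy illustration.

    x          : current noisy latent (numpy array)
    sigma      : current noise level
    sigma_next : target noise level (0 at the final step)
    denoise    : callable (x, sigma) -> denoised estimate (stands in for the model)
    rng        : numpy Generator for the ancestral noise
    """
    denoised = denoise(x, sigma)
    # Split sigma_next into a deterministic part (sigma_down) and fresh noise (sigma_up),
    # so that sigma_down**2 + sigma_up**2 == sigma_next**2.
    sigma_up = min(sigma_next,
                   (sigma_next**2 * (sigma**2 - sigma_next**2) / sigma**2) ** 0.5)
    sigma_down = (sigma_next**2 - sigma_up**2) ** 0.5
    d = (x - denoised) / sigma            # derivative estimate toward the denoised image
    x = x + d * (sigma_down - sigma)      # deterministic Euler step down to sigma_down
    if sigma_next > 0:
        x = x + rng.standard_normal(x.shape) * sigma_up  # re-inject ancestral noise
    return x
```

The re-injected noise is what distinguishes the "ancestral" variant from plain Euler, and is often credited with adding fine detail on a refinement pass, consistent with using it on the full model here.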

Key Points
  • Workflow uses a 30-step process with CFG scale 3 and a distilled LoRA at strength 0.6.
  • Employs a novel two-pass sampler: 'res_2s' on a distilled model, then Euler ancestral on the full model.
  • Developed using Claude AI for parameter tuning and tested with consistent prompts for reliable benchmarking.

Why It Matters

It demonstrates a shift from trial-and-error prompting to engineered, reproducible workflows for professional-grade AI image generation.