I trained a matchbox-poster LoRA on FLUX.2 — running 24/7, generating ~2,880 unique animals/day
A 50 MB adapter trained on Soviet matchbox labels runs 24/7 at $0.155/hr.
Deep Dive
Reddit user Maleficent-Week-2064 trained a LoRA (rank 32, alpha 64) on public-domain Soviet matchbox labels, targeting only the attention modules. Each image goes through a two-pass pipeline: first the LoRA generates text-to-image at scale 2.0 for 22 steps, then base FLUX img2img (no LoRA) refines the result at strength 0.9 for 31 steps, which suppresses LoRA artifacts while preserving the vintage Soviet style. Each image takes about 14 seconds on an RTX 3090. The live feed at pinock.io shows every output with no signup and free downloads.
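The rank and alpha numbers determine how strongly the adapter perturbs each attention weight. A minimal NumPy sketch of the standard LoRA update with the post's rank-32/alpha-64 settings; the matrices and the toy model width are random placeholders for illustration, not the actual trained weights:

```python
import numpy as np

# Standard LoRA: W_eff = W + scale * (alpha / rank) * (B @ A)
# rank=32, alpha=64 come from the post; d_model=256 is a toy width
# chosen only to make the shapes concrete.
rank, alpha, d_model = 32, 64, 256
rng = np.random.default_rng(0)

W = rng.standard_normal((d_model, d_model)) * 0.02  # frozen base weight
A = rng.standard_normal((rank, d_model)) * 0.01     # trained down-projection
B = rng.standard_normal((d_model, rank)) * 0.01     # trained up-projection

def lora_forward(x, scale=1.0):
    """Apply the adapted weight at a given inference-time LoRA scale."""
    delta = (alpha / rank) * (B @ A)
    return x @ (W + scale * delta).T

x = rng.standard_normal((1, d_model))
# The adapter's contribution is linear in `scale`: doubling the scale
# exactly doubles the deviation from the base model's output.
base = lora_forward(x, scale=0.0)
assert np.allclose(lora_forward(x, 2.0) - base, 2 * (lora_forward(x, 1.0) - base))
```

Note that with alpha 64 over rank 32 the built-in alpha/rank factor is already 2, so running the adapter at scale 2.0 multiplies the raw low-rank update by 4 in total, which helps explain why a no-LoRA second pass is useful for cleaning up artifacts.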
Key Points
- LoRA rank 32 / alpha 64 with attention‑only targets trained on ~200 Soviet matchbox label scans (public domain).
- Two‑pass pipeline: LoRA t2i at scale 2.0 + pure FLUX img2img at strength 0.9, 14s per image on a 3090.
- Live feed at pinock.io generates ~2,880 unique animals/day 24/7 for $0.155/hr on vast.ai.
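The headline numbers are easy to sanity-check. A quick back-of-envelope in Python, using only the figures stated in the post (the per-image cost and duty cycle are derived, not stated):

```python
hourly_cost = 0.155       # vast.ai rate in $/hr, from the post
images_per_day = 2880     # stated daily output
seconds_per_image = 14    # stated RTX 3090 generation time

daily_cost = hourly_cost * 24                          # $3.72/day
cost_per_image = daily_cost / images_per_day           # ~ $0.0013 per image
cadence = 86400 / images_per_day                       # one image every 30 s
duty_cycle = images_per_day * seconds_per_image / 86400  # ~ 47% GPU busy
```

At one image every 30 seconds against a 14-second render, the GPU is busy less than half the time, so the stated throughput leaves headroom rather than running the card flat out.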
Why It Matters
Shows how cheaply a fine-tuned open model can run continuous, artifact-free generative art at scale.