What's the best way to transfer style to Klein 9B?
Creators are hacking open-source models to mimic Midjourney's signature cinematic style, finding LoRAs insufficient.
A significant technical challenge is gaining traction in the AI art community: replicating the high-fidelity, cinematic style of Midjourney using leading open-source models like Klein 9B, Flux 1.1, and Z Image Turbo. Users report that while these models are powerful, dedicated aesthetic LoRAs (Low-Rank Adaptations) fail to capture the specific visual language—such as dramatic cloud formations and lighting—found in popular Midjourney outputs. This has sparked a hunt for more effective style transfer techniques that can work from single or multiple reference images.
The core of the issue is a perceived quality gap between proprietary and open-source aesthetics. The poster notes that even newer models like Ernie and Z Image cannot match the desired 'cinematic' look through standard fine-tuning. This demand is driving experimentation with dedicated style-extraction and style-application nodes in node-based workflows (likely ComfyUI or Forge), moving beyond simple prompting or existing LoRA libraries to achieve professional-grade results without a Midjourney subscription.
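The specific workflow nodes being experimented with vary, but the underlying idea of "extracting" a style from reference images has a well-known textbook formulation: the Gram-matrix style statistic from classic neural style transfer (Gatys et al.), where style similarity is measured as the distance between channel-correlation matrices of feature maps. The sketch below is illustrative only, using random arrays in place of real VGG activations; it is not the implementation of any particular ComfyUI node.

```python
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """Channel-correlation ('style') statistic of a C x H x W feature map."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)      # one row per channel
    return flat @ flat.T / (c * h * w)     # normalized C x C Gram matrix

def style_loss(gen_feats: np.ndarray, ref_feats: np.ndarray) -> float:
    """Mean squared difference between the Gram matrices of a generated
    image's features and a reference image's features."""
    diff = gram_matrix(gen_feats) - gram_matrix(ref_feats)
    return float(np.mean(diff ** 2))

# Toy stand-ins for network activations of two images.
rng = np.random.default_rng(0)
ref = rng.standard_normal((8, 16, 16))
gen = rng.standard_normal((8, 16, 16))

assert style_loss(ref, ref) == 0.0   # identical style -> zero loss
assert style_loss(gen, ref) > 0.0    # different features -> positive loss
```

In a real pipeline this loss would be computed on pretrained-network activations and minimized during generation or fine-tuning; the dedicated nodes users are seeking effectively package stronger variants of this idea (e.g. attention-based style injection) behind a single reference-image input.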
- Open-source models like Klein 9B and Flux lack ready-made tools to replicate Midjourney's distinct cinematic aesthetics.
- The community reports that specialized style LoRAs are insufficient, creating demand for new style transfer nodes or methods.
- The pursuit highlights a key market gap for professional-grade style replication tools in the open-source AI art ecosystem.
Why It Matters
This push could accelerate development of better open-source style tools, reducing reliance on closed platforms like Midjourney for professional work.