PixelSmile: a Qwen-Image-Edit LoRA for fine-grained expression control, now on Hugging Face
New open-source model lets you blend emotions with sliders, working on both real photos and anime with minimal identity loss.
A new open-source AI tool called PixelSmile is making waves for its precise control over facial expressions in images. It's a LoRA (Low-Rank Adaptation) model built on top of Alibaba's Qwen-Image-Edit foundation model. The core innovation is its fine-grained editing capability: users can manipulate 12 distinct facial expressions—like happiness, sadness, or surprise—using smooth intensity sliders and even blend multiple emotions in a single image. This works on a diverse range of inputs, from real photographs to anime-style artwork.
The technical backbone combines symmetric contrastive training with flow matching, a generative-modeling objective that guides the image generation process toward more controlled and accurate edits. According to the developers, this approach yields 'insanely clean' results with a key benefit: 'almost zero identity leak,' meaning the subject's likeness is preserved while only the expression changes. The project includes a comprehensive paper full of examples and a demo page with interactive sliders, all hosted on Hugging Face, making it easy for developers and creators to experiment with.
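To ground the flow-matching half of that claim, here is a minimal, generic sketch of the flow-matching objective: interpolate between a source and target sample, then regress a model's predicted velocity onto the straight-line target velocity. This is the standard formulation, not PixelSmile's actual training code, and the toy model here is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_loss(model, x0, x1, t):
    """Generic flow-matching objective.

    x0, x1: batches of source (e.g. noise) and target (e.g. image latent)
    samples; t: per-sample timesteps in [0, 1]. The model is trained to
    predict the velocity x1 - x0 at the interpolated point x_t.
    """
    x_t = (1.0 - t)[:, None] * x0 + t[:, None] * x1  # linear interpolation
    target_v = x1 - x0                               # straight-line velocity
    pred_v = model(x_t, t)
    return float(np.mean((pred_v - target_v) ** 2))

# Toy "model" that always predicts zero velocity -- a stand-in, not a real net.
zero_model = lambda x_t, t: np.zeros_like(x_t)

x0 = rng.standard_normal((8, 4))   # source batch
x1 = rng.standard_normal((8, 4))   # target batch
t = rng.uniform(size=8)            # random timesteps

loss = flow_matching_loss(zero_model, x0, x1, t)
```

A model that exactly predicts `x1 - x0` drives this loss to zero; during sampling, integrating the learned velocity field carries a point from the source distribution to the target, which is what gives the edit process its controllability.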
- Enables control of 12 distinct facial expressions via adjustable intensity sliders and emotion blending.
- Uses symmetric contrastive training and flow matching on Qwen-Image-Edit for high-fidelity, identity-preserving edits.
- Works on both real photographs and anime, with model and paper available now on Hugging Face.
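The slider-and-blend workflow described above could be wrapped in a small helper that turns per-emotion intensities into a single edit instruction. PixelSmile's actual conditioning interface is not documented here, so the emotion names beyond the three the article mentions, and the instruction format itself, are hypothetical.

```python
# Hypothetical sketch: the real PixelSmile conditioning format may differ.

def blend_instruction(sliders):
    """Compose an edit instruction from emotion sliders.

    sliders: dict mapping emotion name -> intensity in [0, 1].
    Zero-weight emotions are dropped; the rest are emitted in
    alphabetical order as "name:weight" pairs.
    """
    parts = [
        f"{name}:{round(weight, 2)}"
        for name, weight in sorted(sliders.items())
        if weight > 0
    ]
    return "adjust expression " + ", ".join(parts)

# Blend 70% happiness with 30% surprise in one edit.
instruction = blend_instruction({"happiness": 0.7, "surprise": 0.3, "sadness": 0.0})
```

The same dict-of-weights shape extends naturally to all 12 expressions the model supports, with a single slider per emotion.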
Why It Matters
This democratizes high-quality, nuanced facial editing for creators and developers, moving beyond simple filters to emotion-as-a-slider control.