Thanks to the sub, my silly node and workflow got 3k downloads overnight, so I fixed some bugs, unified some features, and uploaded the latest and greatest version to HF.
Procedural prompt system for Qwen cuts typing and ensures character consistency.
What started as an internal tool for a secret project has turned into a viral hit. Reddit user Mundane-Ad-5737 shared their ComfyUI Character Composer node, and downloads jumped from about 160 to over 3,000 in a single night. The node is a structured procedural prompt system built on top of Phr00t’s Qwen-Image-Edit-Rapid-AIO model. It focuses on character consistency, scene composition, and controllable generation, and includes an SFW JSON library for managing prompts. A key feature is the unified txt2img + img2img workflow: users can bypass the image input to switch between modes, drastically reducing the need to type prompts by hand or copy-paste them from an LLM.
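To make the idea concrete, here is a minimal sketch of how a procedural prompt system of this kind can work. The node's actual schema and field names are not published in this summary, so the JSON keys, slot names, and the `compose_prompt` helper below are all illustrative assumptions, not the Character Composer's real implementation:

```python
import json
import random

# Hypothetical prompt library; the real node ships a much larger
# SFW JSON library with its own (unknown) schema.
LIBRARY = json.loads("""
{
  "character": ["a red-haired knight", "a cyberpunk detective"],
  "scene": ["in a misty forest", "on a neon rooftop"],
  "style": ["oil painting", "studio photograph"]
}
""")

def compose_prompt(library, pinned=None, seed=None):
    """Assemble a prompt from pinned slots plus seeded random fills.

    Pinning the 'character' slot across generations is what keeps the
    character consistent while scene and style vary procedurally.
    """
    rng = random.Random(seed)
    pinned = pinned or {}
    parts = [pinned.get(slot, rng.choice(library[slot]))
             for slot in ("character", "scene", "style")]
    return ", ".join(parts)

# Pin the character, let the other slots vary with the seed:
p1 = compose_prompt(LIBRARY, pinned={"character": "a red-haired knight"}, seed=1)
p2 = compose_prompt(LIBRARY, pinned={"character": "a red-haired knight"}, seed=2)
```

The design point is that the user only ever types (or pins) a handful of slot values; the rest is drawn from the JSON library, which is what cuts down on manual prompting.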
The developer is now improving UX, simplifying the node, and preparing better documentation and tutorials based on community feedback. The project is hosted on Hugging Face under the dataset “comfyui-character-composer.” As a self-described newbie, the creator is learning fast and promises future updates. This rapid adoption highlights the demand for streamlined, consistent character generation tools in the ComfyUI ecosystem.
- Downloads surged from ~160 to 3,000+ overnight after a Reddit post.
- Node provides a procedural prompt system for character consistency and scene control.
- Unifies txt2img and img2img workflows, reducing reliance on manual prompting.
Why It Matters
Speeds up iterative character generation for AI artists, enabling consistent outputs with less manual prompt engineering.