Atelier: a canvas for thinking and making with local models.
A research prototype from Autodesk uses ComfyUI to turn complex AI processes into simple, visual widgets.
A small team at Autodesk, led by researcher David Ledo and an intern, unveiled a research prototype called 'Atelier' at the CHI conference. The system is a visual canvas for thinking and creating with local generative AI models, positioned as a tool for creative exploration rather than mere output generation. Its key technical innovation is using the popular node-based workflow tool ComfyUI as a backend engine, letting users build and run complex AI processes, such as image-generation chains or multi-step reasoning tasks, without writing code. These processes are then packaged into simple, reusable widgets on the canvas, making advanced local AI more accessible and process-oriented.
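The paper does not publish Atelier's integration code, but ComfyUI does expose a local HTTP API that a canvas frontend could drive in roughly this way. The sketch below is illustrative, not Atelier's actual implementation: the example workflow graph, node ids, checkpoint filename, and the `queue_workflow` helper are all assumptions; only the `/prompt` endpoint and its `{"prompt": ..., "client_id": ...}` payload shape come from ComfyUI's own API.

```python
import json
import urllib.request

COMFYUI_URL = "http://127.0.0.1:8188"  # default address of a local ComfyUI server

# A minimal workflow graph in ComfyUI's API ("prompt") format: each key is a
# node id, each node names its class_type and wires inputs to other nodes'
# outputs as [node_id, output_index]. Contents here are purely illustrative.
EXAMPLE_WORKFLOW = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_v1-5.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a watercolor studio scene", "clip": ["1", 1]}},
}

def build_prompt_payload(workflow: dict, client_id: str) -> dict:
    """Wrap a workflow graph in the JSON body ComfyUI's /prompt endpoint expects."""
    return {"prompt": workflow, "client_id": client_id}

def queue_workflow(workflow: dict, client_id: str = "atelier-widget") -> bytes:
    """POST the workflow to a running ComfyUI server and return the raw response."""
    payload = json.dumps(build_prompt_payload(workflow, client_id)).encode("utf-8")
    req = urllib.request.Request(f"{COMFYUI_URL}/prompt", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

In this picture, each widget on the canvas would simply hold a workflow graph like `EXAMPLE_WORKFLOW` and queue it on the local server when the user triggers it, hiding the node graph entirely from the user.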
The project, detailed in a publicly available research paper, is an early prototype not yet ready for public release. The Autodesk team is actively gauging community interest via social media to decide whether there is enough demand to invest in a more robust version; that next phase could include a public release and possibly an open-source model. The vision for Atelier is to lower the barrier to using powerful local models like Stable Diffusion or Llama by providing an intuitive visual interface that emphasizes the iterative 'making' journey, which could appeal to designers, artists, and researchers who want control without deep technical expertise.
- Built on ComfyUI backend to power complex, local AI workflows without coding.
- Encapsulates workflows into visual widgets, focusing on the creative process over just the final output.
- Autodesk is gauging public interest to decide on further development or a potential open-source release.
Why It Matters
It could make advanced local AI models far more accessible to creatives and professionals through a visual, process-focused interface.