[Release] Three faithful Spectrum ports for ComfyUI — FLUX, SDXL, and WAN
Faithful ports for FLUX, SDXL, and WAN models achieve a near-5x speedup without quality loss.
An independent developer has released three new, specialized ports of the Spectrum acceleration method for ComfyUI, targeting FLUX, SDXL, and WAN models separately. Spectrum, a training-free diffusion acceleration technique from Stanford (CVPR 2026), works by caching the final hidden feature from the denoiser network on selected steps and using a small Chebyshev + ridge regression forecaster to predict that feature on skipped steps. This allows the model to run the normal output head on the predicted feature, drastically reducing the number of expensive forward passes. The paper reports speedups of up to 4.79x on FLUX.1 and 4.67x on Wan2.1-14B, using only 14 network evaluations instead of the typical 50, while outperforming prior caching methods like TaylorSeer by avoiding compounding approximation errors.
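The forecasting step described above can be sketched in a few lines. This is a minimal, illustrative reconstruction of a Chebyshev + ridge regression forecaster (not the official Spectrum code): cached hidden features from the real steps are fit with a closed-form ridge regression over a Chebyshev polynomial basis in the timestep, and that fit is evaluated at skipped steps. All function and parameter names here are assumptions.

```python
import numpy as np

def fit_forecaster(ts, feats, degree=4, lam=1e-3):
    """Fit a ridge regression over Chebyshev features of the timestep.

    ts:    (n,) timesteps of the cached (real) steps, normalized to [0, 1]
    feats: (n, d) hidden features cached at those steps
    """
    # Map timesteps to [-1, 1], the natural domain of Chebyshev polynomials
    x = 2.0 * np.asarray(ts) - 1.0
    basis = np.polynomial.chebyshev.chebvander(x, degree)   # (n, degree+1)
    # Closed-form ridge solution: W = (B^T B + lam*I)^-1 B^T F
    gram = basis.T @ basis + lam * np.eye(degree + 1)
    return np.linalg.solve(gram, basis.T @ feats)           # (degree+1, d)

def predict_feature(weights, t):
    """Predict the hidden feature at a skipped timestep t in [0, 1]."""
    x = np.array([2.0 * t - 1.0])
    basis = np.polynomial.chebyshev.chebvander(x, weights.shape[0] - 1)
    return (basis @ weights)[0]
```

On skipped steps, the predicted feature would then be passed through the model's normal output head in place of a full forward pass, which is where the savings come from.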
These new ports fix critical issues found in existing ComfyUI implementations, such as incorrect prediction targets, runtime leakage across model clones, and hard-coded step normalizations. Each port is tailored to its specific backend's correct integration point. For example, the FLUX node intercepts the final hidden image feature after the single-stream blocks and before the `final_layer`, matching the official integration. A key innovation is the `tail_actual_steps` parameter, which reserves the last few solver steps for real forwards to preserve fine-grained detail during final refinement. All three nodes are available for easy installation via ComfyUI Manager, requiring no extra dependencies.
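A schedule with `tail_actual_steps` could look like the following sketch. Only `tail_actual_steps` is named in the release; everything else (function name, even spacing of the remaining real forwards) is an illustrative assumption about how such a schedule might be built.

```python
import numpy as np

def build_step_schedule(total_steps, num_real, tail_actual_steps):
    """Return a per-step list: True = real network forward, False = predicted.

    The last `tail_actual_steps` steps are always real forwards, so the
    final refinement never relies on forecasted features; the remaining
    real forwards are spread evenly over the earlier steps.
    """
    real = set(range(total_steps - tail_actual_steps, total_steps))
    remaining = num_real - tail_actual_steps
    head = total_steps - tail_actual_steps
    if remaining > 0 and head > 0:
        idx = np.linspace(0, head - 1, remaining).round().astype(int)
        real.update(int(i) for i in idx)
    return [i in real for i in range(total_steps)]
```

With the paper's reported budget (14 real evaluations over 50 steps) and, say, 3 tail steps, this yields 14 real forwards with the last 3 guaranteed real.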
- Achieves up to 4.79x speedup on FLUX.1 by reducing network evaluations from 50 to 14 steps
- Fixes critical bugs in existing ports, such as incorrect prediction targets and runtime leakage across model clones
- Introduces a `tail_actual_steps` parameter to preserve fine detail by forcing real forwards at the end of generation
Why It Matters
Dramatically reduces image generation time for professionals without sacrificing quality, enabling faster iteration and higher throughput.