Benchmark Report: Wan 2.2 Performance & Resource Efficiency (Python 3.10-3.14 / Torch 2.10-2.11)

New benchmarks show Torch 2.11 cuts memory usage for Wan 2.2 video generation by nearly 4%.

Deep Dive

A new benchmark report provides crucial performance data for users of the Wan 2.2 video generation model, revealing that a simple software update can yield significant efficiency gains. The tests, conducted across Python 3.10 to 3.14 and PyTorch 2.10 to 2.11, show that while generation speed remains largely unchanged, upgrading to Torch 2.11.0 directly reduces the memory footprint. Specifically, RAM consumption decreased from 63.4 GB to 61 GB, a 3.79% reduction, while VRAM usage fell from 35.4 GB to 34.1 GB, a 3.67% drop. This saving is consistent across all tested Python environments, making the upgrade a reliable win for any Wan 2.2 workflow.
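The reported reductions can be sanity-checked with simple arithmetic; the before/after figures below are taken directly from the report:

```python
# Memory figures from the Wan 2.2 benchmark (Torch 2.10 -> 2.11), in GB
ram_before, ram_after = 63.4, 61.0
vram_before, vram_after = 35.4, 34.1

ram_drop = (ram_before - ram_after) / ram_before * 100
vram_drop = (vram_before - vram_after) / vram_before * 100

print(f"RAM reduction:  {ram_drop:.2f}%")   # 3.79%
print(f"VRAM reduction: {vram_drop:.2f}%")  # 3.67%
```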

For users leveraging the Sage-Attn 2.2 optimization, the 'FAST' mode demonstrated a dramatic performance improvement, cutting total generation time by nearly 50%, from over 544 seconds in 'NORMAL' mode down to around 280 seconds. The benchmark was run on a system with an NVIDIA GeForce RTX 5060 Ti (15.93 GB VRAM) and 64 GB of system RAM, using the popular ComfyUI v0.18.2 interface. This data is vital for practitioners who need to maximize throughput or run Wan 2.2 alongside other memory-intensive applications, effectively extending the capabilities of their existing hardware.
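The quoted Sage-Attn speedup works out as follows (timings taken from the report):

```python
# Sage-Attn 2.2 total generation time, 'NORMAL' vs 'FAST' mode (seconds)
normal_s, fast_s = 544.0, 280.0

speedup_pct = (normal_s - fast_s) / normal_s * 100
print(f"Time saved in FAST mode: {speedup_pct:.1f}%")  # 48.5%, i.e. nearly half
```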

Key Points
  • Torch 2.11 reduces Wan 2.2 RAM usage by 3.79% (63.4 GB to 61 GB) and VRAM by 3.67% (35.4 GB to 34.1 GB).
  • Sage-Attn 2.2 'FAST' mode cuts video generation time nearly in half, from ~544 seconds to ~280 seconds.
  • Performance gains are consistent across Python versions 3.10 through 3.14, making the Torch upgrade universally beneficial.

Why It Matters

Enables more efficient high-end AI video generation, allowing users to run complex models on existing hardware or free up memory for multi-model workflows.