Image & Video

Nuclear Diffusion Models for Low-Rank Background Suppression in Videos

New hybrid method beats RPCA on cardiac ultrasound by combining diffusion with low-rank decomposition.

Deep Dive

Researchers from TU Eindhoven and GE HealthCare have introduced Nuclear Diffusion, a hybrid framework that integrates low-rank temporal modeling with diffusion posterior sampling to suppress structured noise and background artifacts in video sequences. Traditional robust principal component analysis (RPCA) decomposes a video into a low-rank component (the slowly varying background) plus a sparse component (the dynamic content), but the sparsity assumption often fails to capture the rich variability of real-world video. Nuclear Diffusion addresses this by blending the model-based low-rank temporal prior with a deep generative diffusion model, enabling more accurate separation of dynamic content from background noise.
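The paper's diffusion component is not reproduced here, but the RPCA baseline it improves on can be sketched in a few lines. Below is a minimal inexact-ALM solver (in the style of Lin et al.'s classic algorithm) that splits a matrix of flattened video frames into low-rank background plus sparse foreground; function names and parameter choices are illustrative, not taken from the paper.

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    # Entrywise soft-thresholding: proximal operator of the l1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_ialm(M, max_iter=200, tol=1e-7):
    # Inexact augmented Lagrangian method for M = L + S,
    # with L low-rank (background) and S sparse (dynamic content).
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))      # standard RPCA weight
    norm_M = np.linalg.norm(M)
    spec = np.linalg.norm(M, 2)          # spectral norm
    Y = M / max(spec, np.abs(M).max() / lam)  # dual variable init
    mu, rho = 1.25 / spec, 1.5
    mu_bar = mu * 1e7                    # cap on the penalty parameter
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(max_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        R = M - L - S                    # residual
        Y += mu * R
        mu = min(mu * rho, mu_bar)
        if np.linalg.norm(R) / norm_M < tol:
            break
    return L, S
```

On a video matrix whose columns are flattened frames, `L` captures the static background and `S` the motion; the paper's point is that a hand-coded sparsity prior on `S` is exactly the weak link that a learned diffusion prior can replace.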

The method was evaluated on a real-world medical imaging task—cardiac ultrasound dehazing—and demonstrated improved performance over traditional RPCA in terms of contrast enhancement (measured by generalized contrast-to-noise ratio, gCNR) and signal preservation (measured by Kolmogorov-Smirnov statistic). The paper, accepted at ICASSP 2026, is available on arXiv (2509.20886). This approach has broad implications for video restoration in medical imaging, surveillance, and autonomous systems where high-fidelity dynamic content is critical.

Key Points
  • Nuclear Diffusion combines low-rank temporal modeling with diffusion posterior sampling for video background suppression.
  • Outperforms traditional RPCA on cardiac ultrasound dehazing, improving contrast (gCNR) and signal preservation (KS statistic).
  • Accepted at IEEE ICASSP 2026; preprint available on arXiv (2509.20886).

Why It Matters

This hybrid approach could dramatically improve medical video clarity, enabling better diagnosis from ultrasound and other real-time imaging.