DiffNR: Diffusion-Enhanced Neural Representation Optimization for Sparse-View 3D Tomographic Reconstruction
New method fixes CT artifacts in one step, not 1,000 iterations.
A new framework called DiffNR, developed by researchers at the Australian National University and collaborators, tackles a persistent challenge in computed tomography (CT): reconstructing high-quality 3D volumes from sparse-view data. Neural representations like neural fields and 3D Gaussians have been used for volumetric modeling, but they produce severe artifacts when only limited projection angles are available. DiffNR addresses this by integrating a single-step diffusion model named SliceFixer, which is designed to correct artifacts in degraded 2D slices without the computational overhead of traditional iterative denoising methods.
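The difference between single-step repair and conventional diffusion sampling is easy to see in code. The sketch below is illustrative only: `SliceRepairNet` is a hypothetical stand-in, not the paper's actual SliceFixer architecture. It shows the shape of the approach, one conditional forward pass per degraded slice, versus the ~1,000 sampler steps a conventional diffusion model would run.

```python
import torch
import torch.nn as nn

class SliceRepairNet(nn.Module):
    """Hypothetical stand-in for SliceFixer: a small conditional network
    that maps a degraded 2D slice to a clean estimate in one forward
    pass. The architecture here is illustrative, not the paper's."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, degraded_slice: torch.Tensor) -> torch.Tensor:
        # Predict a residual correction, so the network only has to
        # model the artifact pattern rather than the whole slice.
        return degraded_slice + self.body(degraded_slice)

model = SliceRepairNet()
slice2d = torch.randn(1, 1, 128, 128)  # toy degraded CT slice

# Single-step repair: one network call per slice.
repaired = model(slice2d)

# A conventional diffusion sampler would instead call a noise-prediction
# network T ~ 1,000 times per slice:
# for t in reversed(range(T)):
#     x = denoise_step(x, t)  # one network call per timestep
```

Because repair costs a single evaluation, even repairing every slice of a volume stays cheap relative to embedding a full iterative sampler inside the reconstruction loop.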
DiffNR's key innovation is its "repair-and-augment" strategy. During reconstruction, SliceFixer periodically generates pseudo-reference volumes that provide auxiliary 3D perceptual supervision, effectively constraining regions the sparse projections leave underdetermined. Because repairs happen only periodically, DiffNR avoids frequent diffusion-model queries and runs faster than prior methods that embed CT solvers inside time-consuming iterative denoising loops. SliceFixer itself uses specialized conditioning layers and tailored data-curation strategies to support fine-tuning. In extensive experiments, DiffNR achieved an average PSNR improvement of 3.99 dB over baseline methods and demonstrated strong generalization across domains. The paper has been accepted to AAAI 2026.
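To make the workflow concrete, here is a minimal sketch of a repair-and-augment loop under stated assumptions: a dense voxel grid stands in for the neural representation, an axis sum stands in for the CT projection operator, a simple blur stands in for SliceFixer's single-step repair, and plain MSE replaces the paper's perceptual supervision. The schedule and weight (repair every 500 iterations, `aug_weight=0.1`) are also assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

# Toy stand-ins (assumptions, not the paper's code): a learnable voxel
# grid plays the neural representation, an axis sum plays the projection
# operator, and `repair_slice` plays the single-step SliceFixer.
D, H, W = 16, 32, 32
volume = torch.nn.Parameter(torch.zeros(D, H, W))  # learnable volume
target = torch.rand(D, H, W)                       # toy ground truth
measurements = target.sum(dim=1)                   # toy sparse sinogram
optimizer = torch.optim.Adam([volume], lr=1e-2)

def repair_slice(s: torch.Tensor) -> torch.Tensor:
    # Placeholder for single-step diffusion repair; here just a blur.
    return F.avg_pool2d(s[None, None], 3, stride=1, padding=1)[0, 0]

pseudo_reference, aug_weight = None, 0.1
for it in range(2000):
    optimizer.zero_grad()
    # Standard data term: rendered projections vs. measurements.
    loss = F.mse_loss(volume.sum(dim=1), measurements)
    # Auxiliary 3D term: pull the volume toward the repaired reference
    # (MSE here; the paper uses perceptual supervision).
    if pseudo_reference is not None:
        loss = loss + aug_weight * F.mse_loss(volume, pseudo_reference)
    loss.backward()
    optimizer.step()
    # Periodic repair keeps diffusion-model queries infrequent.
    if (it + 1) % 500 == 0:
        with torch.no_grad():
            pseudo_reference = torch.stack(
                [repair_slice(volume[d]) for d in range(D)])
```

The design point this sketch captures is that the expensive model is consulted only a handful of times over the whole optimization, while its output keeps supervising every subsequent step through the cached pseudo-reference.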
- DiffNR integrates SliceFixer, a single-step diffusion model that corrects artifacts in degraded CT slices.
- Achieves an average PSNR improvement of 3.99 dB over baseline neural representation methods.
- Uses a repair-and-augment strategy to avoid frequent diffusion model queries, improving runtime performance.
Why It Matters
Faster, higher-quality 3D CT reconstruction from sparse data could reduce radiation exposure and scan times in medical imaging.