A Unified Deep Learning Framework for Motion Correction in Medical Imaging
A single model trained on fetal MRI now corrects motion in lung CT and brain tumor MRI without retraining.
Motion artifacts plague medical imaging, but current deep learning solutions are often limited to specific motion types or require retraining for new modalities. To address this, researchers introduce UniMo (Unified Motion Correction), a deep learning framework that handles both bulk rigid motion and local deformations in a single model. UniMo employs an alternating optimization scheme over a unified loss function, integrating an equivariant neural network for global rigid motion correction with an encoder-decoder network for local deformations. A geometric deformation augmenter enhances robustness by simulating local deformations to generate augmented training data.
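To make the alternating scheme concrete, here is a minimal 1-D sketch of the idea: two sub-problems (global rigid motion and a local residual deformation) updated in turn under one shared L2 loss. All names and the toy setup are illustrative assumptions, not the UniMo networks or loss.

```python
import numpy as np

# Toy illustration of alternating optimization for motion correction:
# alternately update a global rigid estimate (here: an integer shift) and a
# local deformation (here: a per-sample residual) under one shared L2 loss.
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 128)
fixed = np.sin(x)

true_shift = 5                               # global "rigid" motion
local = 0.1 * np.sin(3 * x)                  # smooth local deformation
moving = np.roll(fixed, true_shift) + local  # corrupted observation

shift_est = 0
deform_est = np.zeros_like(moving)
for _ in range(10):
    # (1) fix the deformation, pick the shift minimizing the shared loss
    residual = moving - deform_est
    candidates = range(-10, 11)
    losses = [np.sum((np.roll(fixed, s) - residual) ** 2) for s in candidates]
    shift_est = candidates[int(np.argmin(losses))]
    # (2) fix the shift, update the local deformation toward the residual
    deform_est = moving - np.roll(fixed, shift_est)

print(shift_est)  # recovers the global shift of 5
```

In UniMo the two updates are performed by learned networks (an equivariant network for the rigid part, an encoder-decoder for the deformation field) rather than this brute-force search, but the alternation over one unified objective is the same structural idea.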
UniMo was trained on fetal MRI and then tested without any retraining on three public datasets: MedMNIST, lung CT, and BraTS (brain tumor segmentation). Results show it surpasses existing motion correction methods in accuracy while maintaining stability across drastically different imaging modalities. By enabling one-time training on a single modality with zero-shot generalization, UniMo offers a practical solution for clinical workflows where diverse imaging data is common, reducing the need for per-dataset model retraining.
- UniMo combines global rigid motion correction (equivariant NN) with local deformation correction (encoder-decoder) in a unified alternating optimization framework.
- Trained once on fetal MRI, it generalized to MedMNIST, lung CT, and BraTS without retraining, outperforming existing methods in accuracy.
- A geometric deformation augmenter boosts robustness by simulating local deformations during training, improving generalization across modalities.
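The augmenter concept in the last bullet can be sketched as follows: warp a clean training sample with a random, smooth displacement field so the model sees synthetic local deformations during training. This 1-D version with a hypothetical `augment` helper is an assumption for illustration, not the paper's implementation.

```python
import numpy as np

def augment(signal, rng, amplitude=2.0, frequency=2):
    """Warp `signal` by resampling it along randomly perturbed coordinates.

    The displacement field is a low-frequency sinusoid with a random phase,
    i.e. a smooth local deformation (illustrative stand-in for the paper's
    geometric deformation augmenter).
    """
    n = len(signal)
    grid = np.arange(n, dtype=float)
    phase = rng.uniform(0, 2 * np.pi)
    disp = amplitude * np.sin(2 * np.pi * frequency * grid / n + phase)
    warped_coords = np.clip(grid + disp, 0, n - 1)
    return np.interp(warped_coords, grid, signal)

rng = np.random.default_rng(42)
clean = np.sin(np.linspace(0, 2 * np.pi, 64))
augmented = augment(clean, rng)  # same shape, locally deformed copy
```

Feeding such randomly deformed copies alongside the originals gives the correction network supervision on local deformations it would otherwise rarely see, which is what the bullet credits for cross-modality robustness.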
Why It Matters
UniMo could drastically reduce model-retraining overhead in clinical imaging pipelines, enabling a single model to correct motion across diverse scans.