Research & Papers

Anatomy-Aware Unsupervised Detection and Localization of Retinal Abnormalities in Optical Coherence Tomography

Unsupervised model spots abnormalities in OCT scans with 0.884 AUROC...

Deep Dive

A team from UNC Charlotte has developed an anatomy-aware unsupervised framework that detects and localizes retinal abnormalities in Optical Coherence Tomography (OCT) scans without requiring expensive expert annotations. Led by Tania Haghighi, the team trains a discrete latent model exclusively on healthy B-scans so that it learns normative retinal anatomy; deviations from those learned patterns then flag pathology. The approach incorporates retinal layer-aware supervision and structured triplet learning to separate healthy from pathological representations, enabling robust performance across varied imaging devices and patient populations.
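The core idea, learning only what healthy anatomy looks like and flagging deviations, can be sketched as a reconstruction-error anomaly score. This is a deliberately minimal illustration, not the paper's method: the actual model is a discrete latent model with layer-aware supervision and triplet learning, while the `reconstruct` function and threshold below are hypothetical stand-ins.

```python
import numpy as np

def anomaly_map(scan, reconstruct):
    """Per-pixel anomaly score: squared error between an input B-scan and
    its reconstruction by a model trained only on healthy scans.
    `reconstruct` is a hypothetical stand-in for such a trained model."""
    return (scan - reconstruct(scan)) ** 2

def is_abnormal(scan, reconstruct, threshold):
    """Scan-level decision: mean reconstruction error above the threshold
    suggests structure the healthy-only model never learned to rebuild."""
    return float(anomaly_map(scan, reconstruct).mean()) > threshold

# Toy demo: a "model" that only ever reproduces a flat healthy background.
healthy_template = np.zeros((8, 8))
reconstruct = lambda scan: healthy_template

healthy_scan = np.zeros((8, 8))
lesion_scan = healthy_scan.copy()
lesion_scan[3:5, 3:5] = 1.0  # bright blob standing in for a lesion
```

Because the model can only reproduce healthy anatomy, `anomaly_map` is near zero on normal scans and lights up over the lesion, which also gives localization for free.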

On the widely used Kermany dataset, the method achieves an AUROC of 0.799, substantially outperforming baselines such as VAE, VQVAE, VQGAN, and f-AnoGAN. More notably, cross-dataset evaluation on the Srinivasan dataset reaches an AUROC of 0.884, demonstrating strong generalization. On the external RETOUCH benchmark, it posts competitive Dice (0.200) and mIoU (0.117) scores for anomaly segmentation, supporting transferability across institutions. The work, accepted at CVPR-CV4Clinical, directly addresses annotation scarcity, a key barrier to clinical AI deployment.
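For readers unfamiliar with the segmentation metrics cited above, Dice and IoU for binary masks follow the standard definitions below (mIoU averages IoU over classes or scans). This is generic reference code, not code from the paper.

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def iou(pred, gt):
    """Intersection over union: |A ∩ B| / |A ∪ B| for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

pred = np.array([[1, 1], [0, 0]])  # predicted lesion mask
gt   = np.array([[1, 0], [1, 0]])  # ground-truth lesion mask
# One overlapping pixel: dice(pred, gt) → 0.5, iou(pred, gt) → 1/3
```

Dice is always at least as large as IoU on the same masks, which is worth remembering when comparing the 0.200 Dice and 0.117 mIoU figures across papers.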

Key Points
  • Unsupervised framework learns healthy retinal anatomy from normal OCT scans without lesion annotations
  • Achieves AUROC 0.884 on cross-dataset evaluation (Srinivasan), outperforming VAE, VQVAE, VQGAN, and f-AnoGAN baselines
  • Competitive anomaly segmentation on RETOUCH benchmark with Dice 0.200 and mIoU 0.117

Why It Matters

Enables scalable, annotation-free retinal disease screening across diverse clinical settings and devices.