Image & Video

Unsupervised Adaptation from FDG to PSMA PET/CT for 3D Lesion Detection under Label Shift

An unsupervised adaptation method from researchers improves lesion detection by 20% when switching from FDG to PSMA PET scans.

Deep Dive

A research team from institutions including Yale and Massachusetts General Hospital has developed a novel AI framework for medical imaging that tackles a critical bottleneck in cancer diagnostics. Their paper, "Unsupervised Adaptation from FDG to PSMA PET/CT for 3D Lesion Detection under Label Shift," presents a method to adapt a lesion detection model trained on one type of PET/CT scan (using the FDG tracer) to work effectively on another type (using the PSMA tracer) without needing expensive, manually labeled data for the new domain. This addresses the practical challenge where abundant labeled data exists for common scans like FDG-PET, but new, promising tracers like PSMA (which is highly specific for prostate cancer) lack large annotated datasets.

The core innovation lies in two mechanisms within a self-training pipeline that explicitly model and compensate for "label shift." This shift occurs because lesions appear differently in size, number, and composition between tracer types. First, the method dynamically adjusts the AI's detection "anchor" shapes by re-estimating target domain box scales from high-confidence pseudo-labels, using an exponential moving average for stability. This better captures smaller PSMA lesions. Second, instead of a single confidence threshold, it allocates pseudo-label quotas based on estimated size distributions, preventing the model from being biased toward only the largest, most obvious lesions.
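The two mechanisms can be sketched in a few lines of NumPy. This is an illustrative reconstruction from the description above, not the authors' code: the function names, box format, quantile matching, and bin layout are all assumptions.

```python
import numpy as np

def ema_update_anchor_scales(anchor_scales, pseudo_boxes, scores,
                             conf_thresh=0.9, momentum=0.99):
    """Re-estimate anchor box scales from high-confidence pseudo-labels.

    anchor_scales: (K,) current anchor edge lengths (voxels).
    pseudo_boxes:  (N, 6) boxes as (z1, y1, x1, z2, y2, x2).
    scores:        (N,) detection confidences.
    Names, shapes, and the quantile-matching step are illustrative.
    """
    keep = scores >= conf_thresh
    if not keep.any():
        return anchor_scales  # nothing confident enough; keep old scales
    extents = pseudo_boxes[keep, 3:] - pseudo_boxes[keep, :3]  # (M, 3)
    # Cube-root volume gives one scale per box; match K quantiles of it.
    box_scales = np.cbrt(np.prod(extents, axis=1))
    target = np.quantile(box_scales,
                         np.linspace(0.1, 0.9, len(anchor_scales)))
    # Exponential moving average keeps anchors stable across rounds.
    return momentum * anchor_scales + (1.0 - momentum) * target


def binwise_quota_select(scores, volumes, bin_edges, quotas):
    """Select pseudo-labels per size bin instead of one global threshold.

    quotas[i] = max pseudo-labels taken from size bin i (len(bin_edges)+1
    bins), e.g. set proportional to the estimated target-domain size
    distribution so small lesions are not crowded out by large ones.
    """
    selected = np.zeros(len(scores), dtype=bool)
    bins = np.digitize(volumes, bin_edges)  # bin index 0..len(quotas)-1
    for b, quota in enumerate(quotas):
        idx = np.where(bins == b)[0]
        top = idx[np.argsort(scores[idx])[::-1][:quota]]
        selected[top] = True
    return selected
```

The quota step is what prevents the bias described above: with a single global threshold, large conspicuous lesions dominate the pseudo-label set, so a small-lesion bin with its own quota admits confident small detections even when their scores are lower overall.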

Evaluated on the AutoPET 2024 challenge data, adapting from 501 labeled FDG studies to 369 unlabeled PSMA studies, their framework showed measurable improvements in both Average Precision (AP) and Free-Response Operating Characteristic (FROC) scores over a source-only model and conventional self-training. This suggests that directly modeling target-domain lesion prevalence and size is a more effective path to robust cross-tracer medical image analysis than methods that account only for appearance (covariate) shift while assuming the label distribution stays fixed.
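For readers unfamiliar with FROC, the metric reports lesion-level sensitivity at fixed false-positive rates per scan. A simplified sketch, pooling detections across scans; the paper's exact matching criterion (e.g. IoU or center-distance) and operating points are not reproduced here:

```python
import numpy as np

def froc_sensitivities(hits, scores, n_lesions, n_scans,
                       fp_rates=(0.5, 1, 2, 4)):
    """Sensitivity at fixed false positives per scan (FROC points).

    hits:      (N,) bool, whether each detection matched a true lesion.
    scores:    (N,) detection confidences, pooled over all scans.
    n_lesions: total true lesions in the test set.
    n_scans:   number of scans, used to normalize false positives.
    """
    order = np.argsort(scores)[::-1]          # sweep threshold high -> low
    matched = np.asarray(hits)[order]
    tp = np.cumsum(matched)                   # true positives so far
    fp_per_scan = np.cumsum(~matched) / n_scans
    sens = []
    for r in fp_rates:
        # last operating point with at most r false positives per scan
        idx = np.searchsorted(fp_per_scan, r, side="right") - 1
        sens.append(float(tp[idx]) / n_lesions if idx >= 0 else 0.0)
    return sens
```

Averaging these sensitivities over the chosen false-positive rates yields a single summary score, which is one common way challenge leaderboards condense an FROC curve.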

Key Points
  • Enables AI models trained on FDG-PET/CT (common tracer) to detect lesions on PSMA-PET/CT (prostate-specific) without new manual labels.
  • Uses two novel self-training mechanisms: adaptive anchor reshaping and size bin-wise quota allocation to handle label shift in lesion size/count.
  • Tested on 369 PSMA studies, it improved detection metrics (AP/FROC) over baselines, showing that robust cross-tracer adaptation is feasible.

Why It Matters

Accelerates deployment of AI diagnostics for new medical imaging tracers, reducing reliance on scarce expert-labeled data.