Learning to Select Like Humans: Explainable Active Learning for Medical Imaging
This new method could slash the cost of training medical AI by 90%...
Researchers have developed an explainability-guided active learning framework that selects medical images for annotation by mimicking human expert focus. It scores candidate images by combining classification uncertainty with the spatial alignment between Grad-CAM attention heatmaps and radiologist annotations. Tested on three datasets (BraTS, VinDr-CXR, SIIM-COVID-19), it achieved 77.22% accuracy on brain tumor MRI using only 570 strategically selected samples, outperforming random sampling and improving model interpretability for clinical deployment.
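The paper's exact scoring function is not reproduced here, but the idea of combining uncertainty with attention alignment can be sketched as follows. This is a minimal illustration, not the authors' implementation: the entropy term, the IoU-based misalignment term, the binarization threshold, and the weight `alpha` are all assumptions made for the example.

```python
import numpy as np

def entropy(probs):
    """Predictive uncertainty: Shannon entropy of the class probabilities."""
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def attention_misalignment(cam, expert_mask, thresh=0.5):
    """1 - IoU between the thresholded Grad-CAM heatmap and the expert mask.

    High values mean the model attends to regions the radiologist did not mark.
    """
    focus = np.asarray(cam) >= thresh
    expert = np.asarray(expert_mask).astype(bool)
    union = np.logical_or(focus, expert).sum()
    iou = np.logical_and(focus, expert).sum() / union if union else 1.0
    return 1.0 - float(iou)

def acquisition_score(probs, cam, expert_mask, alpha=0.5):
    """Hypothetical acquisition score: weighted sum of uncertainty and misalignment.

    Images with high scores (uncertain predictions and/or attention that
    disagrees with expert focus) would be prioritized for annotation.
    """
    return alpha * entropy(probs) + (1 - alpha) * attention_misalignment(cam, expert_mask)

# Two toy 2x2 candidates: a confident, well-aligned image vs. an
# uncertain, misaligned one. The latter should score higher.
probs_a, cam_a = [0.9, 0.1], np.array([[0.9, 0.1], [0.1, 0.1]])
mask_a = np.array([[1, 0], [0, 0]])          # expert marked the same region
probs_b, cam_b = [0.5, 0.5], np.array([[0.9, 0.1], [0.1, 0.1]])
mask_b = np.array([[0, 0], [0, 1]])          # expert marked a different region

score_a = acquisition_score(probs_a, cam_a, mask_a)
score_b = acquisition_score(probs_b, cam_b, mask_b)
print(score_b > score_a)  # the uncertain, misaligned image is selected first
```

In an active learning loop, this score would be computed for every unlabeled image each round, with the top-scoring batch sent to radiologists for annotation before retraining.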
Why It Matters
It dramatically reduces the time and cost of annotating data for diagnostic AI while ensuring models focus on clinically relevant features, not just spurious statistical patterns.