Adapting Segment Anything Model 3 for Concept-Driven Lesion Segmentation in Medical Images: An Experimental Study
Meta's Segment Anything Model 3 achieves strong generalization for concept-driven medical image analysis.
A research team from institutions including the University of Pennsylvania has published a comprehensive study evaluating Meta's Segment Anything Model 3 (SAM3) for medical lesion segmentation. The work addresses a critical limitation in medical AI: most existing segmentation models are built for a specific anatomical site or imaging modality and require extensive retraining for each new application. The researchers systematically tested SAM3's ability to perform concept-driven segmentation using text prompts (such as "tumor" or "lesion") and image prompts across diverse medical imaging data.
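The idea of concept-driven segmentation can be pictured with a minimal sketch. Note that `segment_by_concept` and its signature are hypothetical stand-ins, not SAM3's actual API, and the brightness threshold below is a toy placeholder for the real text-conditioned mask decoder:

```python
import numpy as np

def segment_by_concept(image: np.ndarray, concept: str) -> np.ndarray:
    """Hypothetical concept-driven interface: return a binary mask of
    pixels matching a text concept such as "tumor" or "lesion".
    A real SAM3 pipeline would encode the text prompt and the image and
    decode a mask; a brightness threshold stands in as a placeholder."""
    threshold = {"tumor": 0.7, "lesion": 0.5}.get(concept, 0.6)
    return image > threshold

# Toy grayscale "scan" in [0, 1]; the bright blob plays the lesion.
scan = np.zeros((64, 64))
scan[20:30, 20:30] = 0.9
mask = segment_by_concept(scan, "tumor")
print(mask.sum())  # 100 pixels flagged
```

The point of such an interface is that swapping the target concept or the imaging modality changes only the prompt, not the model weights.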
The experimental study spanned 13 datasets encompassing 11 lesion types across five imaging modalities: multiparametric MRI, CT, ultrasound, dermoscopy, and endoscopy. To enhance robustness, the team incorporated additional medical priors, including adjacent-slice predictions from 3D scans, multiparametric information fusion, and existing annotation knowledge. They also compared fine-tuning strategies, including adapter-based methods and full-model optimization, to determine the most efficient approach for medical adaptation.
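One of the priors above, using predictions from adjacent slices of a 3D scan to stabilize each slice's mask, can be sketched as a simple weighted average of per-slice probability maps. This is an illustrative reconstruction under assumed details (the `weight` value and the blending rule are not from the study):

```python
import numpy as np

def fuse_adjacent_slices(probs: np.ndarray, weight: float = 0.25) -> np.ndarray:
    """Blend each slice's lesion-probability map with its two neighboring
    slices, then binarize. probs has shape (num_slices, H, W); `weight`
    is an illustrative neighbor weight, not a value from the paper."""
    # Edge-pad along the slice axis so the first/last slices reuse themselves.
    padded = np.pad(probs, ((1, 1), (0, 0), (0, 0)), mode="edge")
    fused = (weight * padded[:-2] + probs + weight * padded[2:]) / (1 + 2 * weight)
    return fused > 0.5

# A borderline detection supported by neighboring slices survives,
# while one that appears on a single slice is suppressed as noise.
probs = np.zeros((3, 4, 4))
probs[:, 1, 1] = 0.6   # consistent across all three slices -> kept
probs[1, 3, 3] = 0.6   # appears on one slice only -> suppressed
masks = fuse_adjacent_slices(probs)
print(masks[1, 1, 1], masks[1, 3, 3])  # True False
```

The design intuition is that true lesions are spatially continuous across neighboring slices, whereas per-slice false positives usually are not.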
Results demonstrate that SAM3 generalizes strongly across modalities, reliably segmenting lesions from conceptual prompts without modality-specific training. The model delineated lesions accurately across diverse medical contexts, from brain tumors in MRI to skin lesions in dermoscopy images. This research is a significant step toward flexible, generalizable medical AI systems that can adapt to new imaging technologies and clinical concepts without complete retraining, potentially accelerating AI deployment in healthcare settings.
- Tested Meta's SAM3 on 13 medical datasets covering 11 lesion types across 5 imaging modalities
- Achieved strong cross-modality generalization using concept-based text and image prompts rather than modality-specific training
- Incorporated medical priors like 3D adjacent-slice predictions and multiparametric fusion to improve clinical robustness
Why It Matters
Enables more flexible medical AI that can adapt to new imaging tech and clinical concepts without complete retraining, accelerating deployment.