Image & Video

Few-Shot Continual Learning for 3D Brain MRI with Frozen Foundation Models

Frozen foundation model with task-specific adapters achieves zero forgetting on brain MRI tasks using <0.1% trainable parameters.

Deep Dive

Chi-Sheng Chen, Xinyu Zhang, and colleagues have published a paper on arXiv titled 'Few-Shot Continual Learning for 3D Brain MRI with Frozen Foundation Models.' The work addresses a critical challenge in medical AI: adapting large foundation models to multiple downstream tasks sequentially without catastrophic forgetting, especially when labeled data is scarce. Existing approaches fail in complementary ways: full fine-tuning causes severe performance degradation on previous tasks (T1 Dice dropping from 0.80 to 0.16), while linear probing fails to reach adequate performance on new tasks. The researchers' solution combines a frozen pretrained backbone with lightweight, task-specific Low-Rank Adaptation (LoRA) modules that are added and trained independently for each new medical imaging task.
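The core LoRA idea the paper builds on can be sketched as follows. This is an illustrative NumPy toy, not the authors' code: a frozen weight matrix `W` is augmented with a trainable low-rank update `A @ B`, and only `A` and `B` are trained per task. The dimensions, rank, and scaling factor here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, rank = 1024, 1024, 8  # hypothetical layer size and LoRA rank

W = rng.standard_normal((d_in, d_out))        # frozen pretrained weight
A = rng.standard_normal((d_in, rank)) * 0.01  # trainable down-projection
B = np.zeros((rank, d_out))                   # trainable up-projection, init to zero
scale = 1.0 / rank                            # illustrative scaling factor

def lora_forward(x):
    """Frozen path plus trainable low-rank adapter path."""
    return x @ W + (x @ A @ B) * scale

x = rng.standard_normal((4, d_in))
y = lora_forward(x)

# With B initialised to zero, the adapter starts as a no-op: the output
# equals the frozen layer's output before any training has happened.
assert np.allclose(y, x @ W)

trainable = A.size + B.size   # 2 * 1024 * 8 parameters
frozen = W.size               # 1024 * 1024 parameters
print(f"trainable fraction of this layer: {trainable / (trainable + frozen):.4%}")
```

For a single layer the trainable fraction is around 1.5% at this rank; across a full foundation model, where most parameters sit in layers that are frozen entirely, the overall trainable share drops to the sub-0.1% regime the paper reports.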

The technical approach demonstrates remarkable efficiency. For each new task—tested sequentially on tumor segmentation (BraTS) and brain age estimation (IXI)—only a dedicated LoRA adapter and task-specific head are trained while the foundation model remains completely frozen. This architecture achieves zero backward transfer (BWT=0), meaning no forgetting of previous tasks, while training fewer than 0.1% of the model's parameters per task. The system maintained strong performance on both tasks (T1 Dice 0.62±0.07, T2 MAE 0.16±0.05) despite learning from limited labeled examples. While the method showed some systematic age underestimation in the brain age task, it offers a practical path for deploying medical AI systems that must continuously gain new diagnostic capabilities without expensive retraining or long-term storage of earlier training data.

Key Points
  • Achieves zero catastrophic forgetting (BWT=0) by keeping foundation model frozen while training only task-specific LoRA adapters
  • Trains fewer than 0.1% of parameters per task while maintaining strong performance on both tumor segmentation (Dice 0.62) and brain age estimation (MAE 0.16)
  • Solves the continual learning problem where traditional fine-tuning causes severe forgetting (T1 Dice drops from 0.80 to 0.16) and linear probing fails on new tasks
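The mechanism summarized in the points above can be sketched as a registry of per-task adapters over a shared frozen backbone. This is a hypothetical structure for illustration, not the authors' implementation; the toy lambdas stand in for real adapter and head networks:

```python
class ContinualModel:
    """Frozen shared backbone with independent per-task adapter + head."""

    def __init__(self, backbone):
        self.backbone = backbone  # frozen; never updated after pretraining
        self.adapters = {}        # task name -> (adapter, head)

    def add_task(self, name, adapter, head):
        """Register a new task without touching any existing adapter."""
        assert name not in self.adapters
        self.adapters[name] = (adapter, head)

    def predict(self, name, x):
        adapter, head = self.adapters[name]
        return head(adapter(self.backbone(x)))

# Toy stand-ins for real networks.
model = ContinualModel(backbone=lambda x: x * 2)

model.add_task("segmentation", adapter=lambda h: h + 1, head=lambda h: h)
before = model.predict("segmentation", 3)  # behaviour right after learning

# Adding a second task creates new parameters but modifies nothing shared.
model.add_task("age_estimation", adapter=lambda h: h - 1, head=lambda h: h)
after = model.predict("segmentation", 3)   # unchanged: no forgetting

print(before, after)
```

Because each task owns its adapter and head outright, adding a task can never degrade an earlier one; the trade-off is that parameter count grows (slightly) with each task and the task identity must be known at inference time.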

Why It Matters

Enables medical AI systems to continuously learn new diagnostic tasks without forgetting previous capabilities or requiring massive retraining.