AIDA-ReID: Adaptive Intermediate Domain Adaptation for Generalizable and Source-Free Person Re-Identification
New framework adapts to unseen environments using uncertainty-driven feature mixing
Person re-identification (Re-ID) remains a core challenge in computer vision, especially when models trained on one camera setup must work in completely new environments. Domain shifts from lighting, background, and camera differences cause performance to plummet. Existing solutions like IDM and IDM++ create intermediate feature distributions but rely on fixed mixing and joint access to source and target data, limiting their use in real-world, source-free scenarios.
AIDA-ReID overcomes these limitations by treating intermediate-domain learning as a dynamically regulated process. The framework uses an adaptive intermediate domain generator to synthesize diverse representations, guided by feedback signals from model uncertainty and training stability. A pseudo-mirror regularization strategy ensures identity consistency under domain perturbations. Experiments show AIDA-ReID performs strongly in both domain-generalization and source-free multi-source settings, offering a practical path to robust Re-ID without access to the original training data.
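The summary does not include the authors' code, so the uncertainty-driven mixing can only be sketched. A minimal illustration, assuming normalized predictive entropy as the uncertainty signal and per-sample linear interpolation between source-style and target-style features (all function names here are illustrative, not from the paper):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(logits):
    # Entropy of the identity prediction, normalized to [0, 1]
    # so it can serve directly as a mixing ratio.
    p = softmax(logits)
    h = -(p * np.log(p + 1e-12)).sum(axis=-1)
    return h / np.log(p.shape[-1])

def adaptive_mix(f_src, f_tgt, logits_tgt):
    # Hypothetical adaptive rule: the more uncertain the model is on a
    # target sample, the closer its intermediate feature stays to the
    # source-style representation.
    lam = np.clip(predictive_entropy(logits_tgt), 0.0, 1.0)[:, None]
    return lam * f_src + (1.0 - lam) * f_tgt
```

Unlike IDM's fixed or globally learned mixing, this kind of per-sample ratio lets confident samples sit near the target distribution while uncertain ones are pulled back toward better-supervised regions of feature space.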
- AIDA-ReID adaptively controls feature mixing and regularization using model uncertainty signals, unlike fixed mixing in IDM/IDM++
- Supports source-free multi-source settings: adapts to new target domains without access to the original source data
- Uses pseudo-mirror regularization to preserve identity consistency across domain perturbations
Why It Matters
Enables reliable person re-identification in surveillance and security systems across unseen environments without retraining.