WeatherSeg: Weather-Robust Image Segmentation using Teacher-Student Dual Learning and Classifier-Updating Attention
A new framework improves autonomous driving perception across adverse weather conditions without requiring extra labels.
WeatherSeg, detailed in a new arXiv paper (2604.22824), tackles a critical bottleneck in autonomous driving: maintaining reliable environmental perception in adverse weather. The framework's core innovation is a Dual Teacher-Student Weight-Sharing Model (DTSWSM), which enables efficient knowledge distillation from weather-degraded images without requiring expensive, manually annotated weather data. The student model learns robust features by matching the teacher's predictions, so it generalizes across diverse weather conditions.
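The paper does not publish its training code here, but the teacher-student distillation idea can be sketched as a soft-label cross-entropy loss: the teacher produces per-pixel class distributions (pseudo-labels), and the student is penalized for diverging from them on the degraded input. The function name, temperature parameter, and toy shapes below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """Soft-label cross-entropy between teacher and student predictions.

    The teacher sees the source image and its softened per-pixel class
    distribution serves as the pseudo-label; the student, fed the
    weather-degraded image, is trained to match it. No manual weather
    annotations are required. (Illustrative sketch, not the paper's code.)
    """
    p_teacher = softmax(teacher_logits / T)               # soft pseudo-labels
    log_p_student = np.log(softmax(student_logits / T) + 1e-12)
    return -(p_teacher * log_p_student).sum(axis=-1).mean()

# Toy example: logits for a 4x4 "image" with 3 classes.
rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 4, 3))
loss_same = distillation_loss(teacher, teacher)                      # student agrees
loss_diff = distillation_loss(teacher, rng.normal(size=(4, 4, 3)))  # student disagrees
assert loss_diff > loss_same  # disagreement always costs more (Gibbs' inequality)
```

By Gibbs' inequality, the cross-entropy is minimized exactly when the student's distribution matches the teacher's, which is what drives the student toward weather-robust predictions.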
Complementing this is the Classifier Weight Updating Attention Mechanism (CWUAM), which dynamically adjusts classifier weights based on detected environmental attributes (e.g., fog density, rain intensity). This adaptive attention keeps the model focused on relevant features instead of being misled by weather artifacts. In comprehensive evaluations, WeatherSeg significantly outperformed baseline segmentation models in both accuracy and robustness across clear, rainy, cloudy, and foggy scenarios. These results establish it as an effective, cost-efficient solution for all-weather semantic segmentation in autonomous driving and related applications.
- Uses Dual Teacher-Student Weight-Sharing Model (DTSWSM) for knowledge distillation from weather-affected images, reducing annotation costs.
- Incorporates Classifier Weight Updating Attention Mechanism (CWUAM) to dynamically adapt classifier weights based on environmental attributes like fog and rain.
- Outperforms baseline models in accuracy and robustness across clear, rainy, cloudy, and foggy conditions for autonomous driving perception.
Why It Matters
WeatherSeg makes autonomous driving perception both safer and cheaper by enabling robust segmentation in all tested weather conditions without costly manual labels.