rSDNet: Unified Robust Neural Learning against Label Noise and Adversarial Attacks
A single algorithm tackles two major AI vulnerabilities, improving robustness on benchmark datasets.
Researchers Suryasis Jana and Abhik Ghosh have introduced rSDNet, a novel framework that provides a unified defense against two of the most persistent problems in machine learning: label noise and adversarial attacks. Traditional neural networks trained with standard cross-entropy loss are highly vulnerable to these issues—label noise corrupts the training signal, while adversarial perturbations cause models to fail on subtly altered inputs. rSDNet reformulates the entire training process as a minimum-divergence estimation problem, leveraging the robust statistical properties of S-divergences. This approach automatically identifies and down-weights aberrant observations during training, creating a single, principled objective that hardens models against both types of contamination from the ground up.
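The down-weighting behavior can be illustrated with the density power divergence, a well-known special case of the S-divergence family. This is a minimal sketch under stated assumptions: the function name `dpd_loss` and this particular per-sample form are illustrative choices, not the paper's actual rSDNet objective.

```python
import numpy as np

def dpd_loss(probs, labels, alpha=0.5):
    """Density-power-divergence loss over a batch (illustrative form;
    a special case of the S-divergence family, not the paper's objective).

    probs  : (n, k) array of predicted class probabilities (rows sum to 1)
    labels : (n,) array of integer class labels
    alpha  : robustness parameter; the loss approaches cross-entropy as
             alpha -> 0 and grows more outlier-resistant as alpha increases.
    """
    n = probs.shape[0]
    p_y = probs[np.arange(n), labels]          # probability of the given label
    # Fit term: (1 - p_y^alpha) / alpha tends to -log(p_y) as alpha -> 0,
    # but is bounded above by 1/alpha for any alpha > 0.
    fit = (1.0 - p_y ** alpha) / alpha
    # Penalty term discouraging overconfident mass; vanishes as alpha -> 0.
    penalty = ((probs ** (1.0 + alpha)).sum(axis=1) - 1.0) / (1.0 + alpha)
    return float((fit + penalty).mean())
```

The fit term is capped at 1/alpha, so a grossly mislabeled example (model probability near zero on its label) contributes a bounded amount to the loss, whereas under cross-entropy its contribution diverges; this bounded influence is what "automatically down-weights aberrant observations" in practice.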
The paper establishes rigorous theoretical guarantees for rSDNet, including Fisher consistency and classification calibration, which ensure that minimizing the rSDNet objective at the population level recovers the Bayes-optimal classifier. Crucially, it proves robustness under uniform label noise and infinitesimal feature contamination. In practical tests across three standard image classification datasets, rSDNet demonstrated its dual-purpose strength: it remained competitive with standard models on clean data while significantly outperforming them when faced with corrupted labels or adversarial inputs. This work positions minimum-divergence learning as a statistically grounded and effective paradigm for building AI systems that stay reliable in messy, real-world data environments.
- Unified framework tackles both label noise (corrupted training labels) and adversarial attacks (malicious input perturbations) within a single training objective.
- Based on minimum-divergence estimation using S-divergences, providing automatic down-weighting of corrupted data points and proven theoretical robustness guarantees.
- Validated on three benchmark image datasets, maintaining competitive clean-data accuracy while significantly improving resilience to contamination.
Why It Matters
It provides a single, principled method to build more reliable and secure AI models that perform well on messy, real-world data.