Research & Papers

FedTrident: Resilient Road Condition Classification Against Poisoning Attacks in Federated Learning

A new defense framework thwarts data-poisoning attacks in federated learning, outperforming eight baselines by up to 9.5%.

Deep Dive

Researchers Sheng Liu and Panos Papadimitratos have developed FedTrident, a novel security framework designed to protect federated learning systems in autonomous vehicles from targeted poisoning attacks. The system specifically addresses Targeted Label-Flipping Attacks (TLFAs), where malicious vehicle clients deliberately submit false training data—such as labeling uneven roads as smooth—to corrupt the global AI model used for road condition classification. This type of attack poses significant safety risks as it can degrade the performance of critical transportation AI.
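To make the attack concrete, here is a minimal illustrative sketch of how a malicious client could flip labels before local training. The label encoding (0 = smooth, 1 = uneven) and the flip fraction are hypothetical assumptions for illustration, not details from the paper.

```python
# Hypothetical label encoding; the paper's actual classes may differ.
SMOOTH, UNEVEN = 0, 1

def poison_labels(samples, flip_fraction=1.0):
    """Targeted label flipping: relabel a fraction of 'uneven' road
    samples as 'smooth', leaving all other samples untouched."""
    budget = int(flip_fraction * sum(1 for _, y in samples if y == UNEVEN))
    poisoned, flipped = [], 0
    for x, y in samples:
        if y == UNEVEN and flipped < budget:
            poisoned.append((x, SMOOTH))  # mislabel a hazardous road as safe
            flipped += 1
        else:
            poisoned.append((x, y))
    return poisoned

clean = [("img_a", UNEVEN), ("img_b", SMOOTH), ("img_c", UNEVEN)]
print(poison_labels(clean))  # all 'uneven' labels become 'smooth'
```

Because only the targeted class is flipped, overall accuracy can look acceptable while the model systematically misses the hazardous class, which is what makes TLFAs hard to spot.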

FedTrident introduces three key innovations to combat these threats: neuron-wise analysis for detecting malicious local models, adaptive client rating for excluding bad actors based on historical behavior, and machine unlearning to remediate the corrupted global model after malicious clients are removed. The system was extensively evaluated across diverse federated learning configurations and demonstrated resilience against various attack scenarios, maintaining performance comparable to attack-free environments while outperforming eight baseline countermeasures by 9.49% and 4.47% on the two most critical metrics.
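The detect-then-rate pipeline described above can be sketched in a few lines. This is an illustrative toy, not FedTrident's published algorithm: the deviation score, threshold, and rating-decay rule below are assumptions chosen to show the general idea of flagging per-neuron outliers and excluding clients whose ratings fall too low.

```python
import statistics

def neuron_scores(updates):
    """Per-client deviation: max absolute distance of each client's update
    from the coordinate-wise (neuron-wise) median across all clients."""
    n = len(updates[0])
    medians = [statistics.median(u[i] for u in updates) for i in range(n)]
    return [max(abs(u[i] - medians[i]) for i in range(n)) for u in updates]

def rate_and_filter(updates, ratings, threshold=1.0, decay=0.5):
    """Penalize outlier clients' ratings, then keep only updates from
    clients whose rating is still above the cutoff (all values illustrative)."""
    kept = []
    for cid, score in enumerate(neuron_scores(updates)):
        if score > threshold:
            ratings[cid] *= decay      # adaptive rating: penalize the outlier
        if ratings[cid] > 0.5:         # exclude persistently bad actors
            kept.append(updates[cid])
    return kept, ratings

updates = [[0.1, 0.2], [0.1, 0.1], [5.0, -4.0]]  # third client is poisoned
ratings = [1.0, 1.0, 1.0]
kept, ratings = rate_and_filter(updates, ratings)
# The poisoned client's rating drops and its update is excluded.
```

In the full system, the remediation step (machine unlearning) would then roll back the excluded clients' past contributions from the global model rather than simply ignoring their future updates.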

The framework's effectiveness extends to handling different malicious client rates, data heterogeneity levels, and even complex multi-task and dynamic attacks. This represents a significant advancement in securing collaborative AI systems where multiple participants contribute to model training without sharing raw data. The research addresses a critical gap in current federated learning security, which often fails to maintain resilient performance when facing sophisticated poisoning attacks tailored to specific applications like transportation safety.

Key Points
  • Defends against targeted label-flipping attacks where vehicles submit false road condition data
  • Outperforms eight existing security countermeasures by 9.49% and 4.47% on critical metrics
  • Uses neuron-wise analysis, adaptive client rating, and machine unlearning for comprehensive protection

Why It Matters

Enables safer autonomous vehicle AI by securing collaborative learning systems against data manipulation that could cause accidents.