AD²: Analysis and Detection of Adversarial Threats in Visual Perception for End-to-End Autonomous Driving Systems
A blurred camera frame or a phantom object could cause an autonomous vehicle to fail catastrophically.
A new study reveals critical vulnerabilities in leading end-to-end self-driving systems such as Transfuser and Interfuser. Under three types of visual attacks (acoustic blur, electromagnetic interference, and digitally injected ghost objects), the agents' driving performance in the CARLA simulator dropped by up to 99%. The researchers also propose AD², a lightweight detection model that uses attention mechanisms to spot these threats with superior accuracy and efficiency, addressing a major safety concern for autonomous vehicles.
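To make the detection idea concrete, here is a minimal, purely illustrative sketch of how an attention-based anomaly detector over image-patch features might score a frame. This is NOT the paper's AD² architecture; the function name, projection matrices, and reconstruction-error score are all assumptions made for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_anomaly_score(patch_feats, Wq, Wk, Wv):
    """Hypothetical detector sketch: single-head self-attention over
    per-patch features, then a reconstruction-error anomaly score.

    patch_feats: (num_patches, d) feature vectors from the camera frame.
    Wq, Wk, Wv:  (d, d) learned projection matrices (random here).
    """
    Q, K, V = patch_feats @ Wq, patch_feats @ Wk, patch_feats @ Wv
    attn = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # (num_patches, num_patches)
    recon = attn @ V                                # context-based reconstruction
    # Patches that attention cannot explain from their surroundings
    # (e.g. blur artifacts or injected ghost objects) score high.
    return float(np.linalg.norm(patch_feats - recon, axis=-1).mean())

# Usage: flag a frame if its score exceeds a threshold tuned on clean data.
rng = np.random.default_rng(0)
d = 8
feats = rng.normal(size=(16, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
score = attention_anomaly_score(feats, Wq, Wk, Wv)
```

The attractive property of this family of detectors is cost: one attention pass over patch features is far cheaper than re-running the full driving stack, which matches the paper's emphasis on a lightweight add-on module.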
Why It Matters
These results expose a serious safety flaw that must be fixed before autonomous vehicles are deployed in the real world.