Research & Papers

DarkDriving: A Real-World Day and Night Aligned Dataset for Autonomous Driving in the Dark Environment

Researchers close a major data gap with centimeter-precise alignment of day and night driving scenes.

Deep Dive

A research team led by Wuqi Wang has published the DarkDriving dataset, a groundbreaking resource for training and evaluating autonomous driving systems in low-light conditions. The core innovation is the dataset's precise alignment: it contains 9,538 image pairs in which each nighttime scene is matched to a daytime counterpart from the same location and perspective, with an alignment error of only a few centimeters. This was achieved using a novel automatic Trajectory Tracking based Pose Matching (TTPM) method within a 69-acre closed test field, solving a critical data collection challenge that has long hampered research in this area.
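The paper's TTPM method is not detailed here, but the underlying idea of pose-based matching can be sketched: pair each nighttime frame with the daytime frame whose recorded vehicle pose along the trajectory is closest. The pose format, distance metric, and threshold below are illustrative assumptions, not the authors' actual formulation.

```python
import math

def pose_distance(p, q):
    """Distance between two (x, y, yaw) poses: Euclidean position error
    plus the absolute heading error (weighted at 1 m per radian).
    Purely illustrative; the paper's metric may differ."""
    dx, dy = p[0] - q[0], p[1] - q[1]
    dyaw = abs(math.atan2(math.sin(p[2] - q[2]), math.cos(p[2] - q[2])))
    return math.hypot(dx, dy) + dyaw

def match_day_night(day_poses, night_poses, max_dist=0.05):
    """Pair each night frame with its nearest-pose day frame.

    Returns (night_index, day_index) pairs whose pose distance falls
    below max_dist (0.05 here, i.e. roughly centimeter-level alignment).
    """
    pairs = []
    for i, night in enumerate(night_poses):
        j, d = min(((j, pose_distance(night, day))
                    for j, day in enumerate(day_poses)),
                   key=lambda t: t[1])
        if d <= max_dist:
            pairs.append((i, j))
    return pairs
```

For example, a night pose of (0.02, 0.0, 0.0) would be matched to a day pose at the origin, while frames with no sufficiently close counterpart are simply dropped rather than force-paired.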

DarkDriving is designed as a comprehensive benchmark, introducing four key perception tasks: low-light image enhancement, generalized enhancement, and enhancement for 2D and for 3D object detection. Each image pair comes with manually labeled 2D bounding boxes, allowing developers to directly test how well their vision models and enhancement algorithms perform when moving from day to night. The dataset's real-world, dynamic driving scenes fill a major gap left by previous datasets, which were limited to static scenes or lacked precise alignment.
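One practical consequence of the alignment is that the daytime image can serve as a ground-truth reference when scoring a low-light enhancement model, for example with a full-reference metric such as PSNR. The sketch below assumes images arrive as equal-shaped uint8 NumPy arrays; the function names and evaluation protocol are illustrative, not the benchmark's official ones.

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio between two uint8 images of equal shape."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def evaluate_enhancer(enhance, pairs):
    """Score an enhancement function against aligned daytime references.

    `enhance` maps a night image (H, W, 3 uint8) to an enhanced image;
    `pairs` is an iterable of (night_image, day_image) arrays. Returns
    the mean PSNR of enhanced night images vs. their day counterparts.
    """
    scores = [psnr(day, enhance(night)) for night, day in pairs]
    return float(np.mean(scores))
```

A full-reference metric like this is only meaningful because the pairs share the same viewpoint; on unaligned day/night data one would have to fall back on no-reference quality measures or downstream detection accuracy.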

Initial experimental results indicate that DarkDriving provides a robust and challenging testbed. It not only benchmarks low-light enhancement for autonomous driving but also shows promise for generalizing to enhance dark images and improve detection on other established driving datasets such as nuScenes. By providing this meticulously aligned data, the team aims to accelerate research into making vision-based autonomous systems as reliable at night as they are during the day, a crucial step for real-world deployment.

Key Points
  • Contains 9,538 precisely aligned day-night image pairs with centimeter-level accuracy, collected in a 69-acre closed test field.
  • Introduces four benchmark tasks for low-light enhancement and object detection, critical for autonomous driving safety.
  • Closes a major data gap: previous datasets were limited to static scenes or lacked precise alignment for dynamic, real-world driving.

Why It Matters

Provides the essential, high-quality data needed to train AI that can drive safely at night, a major hurdle for real-world autonomous vehicles.