Research & Papers

Loss Knows Best: Detecting Annotation Errors in Videos via Loss Trajectories

A new model-agnostic technique analyzes training loss trajectories to flag mislabeled or disordered video frames.

Deep Dive

Researchers from Purdue University and Georgia Tech developed 'Loss Knows Best,' a model-agnostic method for detecting annotation errors in video datasets. The method tracks each frame's Cumulative Sample Loss (CSL) trajectory across training epochs; frames whose loss stays persistently high are flagged as likely mislabeled or temporally disordered. Evaluated on the EgoPER and Cholec80 datasets, it identifies subtle annotation inconsistencies without requiring ground-truth labels, making it a practical tool for dataset auditing and for improving the training reliability of video segmentation models.
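The core idea can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes per-sample training losses have already been recorded at each epoch, and the function name, percentile threshold, and array layout are all choices made here for illustration.

```python
import numpy as np

def flag_suspect_samples(epoch_losses, percentile=95.0):
    """Flag samples whose Cumulative Sample Loss (CSL) is unusually high.

    epoch_losses: array-like of shape (num_epochs, num_samples) holding
    each sample's training loss recorded at every epoch.
    Returns indices of samples whose final CSL exceeds the given
    percentile of all final CSL values (a hypothetical thresholding
    rule; the paper's exact criterion may differ).
    """
    # Accumulate each sample's loss over epochs: samples that are
    # mislabeled or temporally disordered tend to keep a high loss,
    # so their cumulative loss grows faster than that of clean samples.
    csl = np.cumsum(np.asarray(epoch_losses, dtype=float), axis=0)
    final_csl = csl[-1]  # cumulative loss after the last epoch
    threshold = np.percentile(final_csl, percentile)
    return np.where(final_csl > threshold)[0]

# Toy usage: ten samples over three epochs, with sample 7 holding a
# persistently high loss, as a mislabeled frame might.
epoch_losses = [[0.1] * 10 for _ in range(3)]
for epoch in epoch_losses:
    epoch[7] = 1.0
suspects = flag_suspect_samples(epoch_losses, percentile=90.0)
print(suspects)  # the persistently high-loss sample is flagged
```

Because the method only consumes loss values emitted during ordinary training, it stays model-agnostic: any architecture that reports per-sample losses can feed this kind of audit.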

Why It Matters

Cleans noisy training data, boosting model performance for critical applications like medical video analysis and action recognition.