Improving Driver Drowsiness Detection via Personalized EAR/MAR Thresholds and CNN-Based Classification
Personalized eye and mouth tracking boosts detection accuracy by 2-3% over fixed thresholds
In a recent arXiv preprint, Ersoy et al. (2026) describe a driver drowsiness detection system that tackles a key flaw in current vision-based monitors: reliance on fixed Eye Aspect Ratio (EAR) and Mouth Aspect Ratio (MAR) thresholds. These fixed values often fail across different facial structures, lighting, and driving conditions. The new system personalizes EAR and MAR thresholds by calibrating them before each drive, then combines these classical geometric metrics with Convolutional Neural Network (CNN) classifiers for enhanced accuracy. Tested on public datasets and a custom set with varied lighting, head poses, and user characteristics, the personalized approach improves detection accuracy by 2-3% over fixed thresholds. The CNN classifiers achieve 99.1% accuracy for eye state detection and 98.8% for yawning detection, demonstrating robust real-time performance. This hybrid method—combining simple geometric metrics with deep learning—offers a practical path to reducing traffic accidents caused by driver fatigue, a major global safety threat.
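To make the core idea concrete, here is a minimal Python sketch of the EAR metric and a per-driver threshold calibration. The EAR formula is the standard one from the facial-landmark literature (Soukupová and Čech, 2016); the `personalized_threshold` function and its `margin` parameter are illustrative assumptions, not the paper's exact calibration procedure.

```python
import math

def euclidean(p, q):
    """Distance between two (x, y) landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eye_aspect_ratio(eye):
    """EAR from six eye landmarks ordered p1..p6:
    (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|).
    The ratio drops sharply when the eye closes."""
    vertical = euclidean(eye[1], eye[5]) + euclidean(eye[2], eye[4])
    horizontal = euclidean(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def personalized_threshold(open_eye_ears, margin=0.75):
    """Hypothetical pre-drive calibration: sample EAR for a few seconds
    while the driver's eyes are open, then set the closed-eye threshold
    as a fraction of that personal baseline instead of a fixed constant
    (e.g. the commonly used 0.25)."""
    baseline = sum(open_eye_ears) / len(open_eye_ears)
    return baseline * margin

# Synthetic open-eye landmarks: corners at (0,0) and (4,0),
# upper/lower lid points one unit above/below the eye axis.
open_eye = [(0, 0), (1, 1), (3, 1), (4, 0), (3, -1), (1, -1)]
print(eye_aspect_ratio(open_eye))                       # → 0.5
print(personalized_threshold([0.50, 0.48, 0.52]))       # → 0.375
```

MAR is computed analogously from mouth landmarks, with a yawn flagged when the ratio stays above the personalized threshold for a sustained interval.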
- Personalized EAR/MAR thresholds calibrated before driving improve detection accuracy by 2-3% over fixed thresholds.
- CNN-based classification achieves 99.1% accuracy for eye state detection and 98.8% for yawning detection.
- System monitors eyelid movements, head position, and yawning in real time under diverse lighting and head poses.
Why It Matters
Reduces traffic accidents from driver fatigue with a practical, personalized AI monitoring system.