Research & Papers

Generalized Disguise Makeup Presentation Attack Detection Using an Attention-Guided Patch-Based Framework

Detects disguise-makeup presentation attacks with 0% ACER on SIW-Mv2 obfuscation and impersonation attacks, outperforming prior methods.

Deep Dive

Researchers have developed a generalized framework for detecting disguise-makeup presentation attacks on facial recognition systems. The method uses a two-phase design. In the first phase, a style-invariant full-face model, trained with metric learning and enhanced by a whitening transformation, produces per-region attention scores via Grad-CAM. These scores then guide the second, patch-based phase, in which region-specific subnetworks, also trained with metric learning, perform localized analysis for fine-grained discrimination.
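The paper's code is not reproduced here, so the PyTorch sketch below only illustrates the attention-guided flow described above: a full-face classifier yields a Grad-CAM heatmap, the heatmap is pooled into per-region attention scores, and the highest-scoring regions are cropped and routed to region-specific subnetworks. The toy layer sizes, the 3×3 region grid, the top-3 selection, and the mean-logit fusion are all assumptions for illustration, not the authors' configuration; the metric-learning losses and whitening step are omitted.

```python
# Illustrative sketch only: shapes, grid, top-k, and fusion are assumptions,
# not the paper's configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FullFaceNet(nn.Module):
    """Phase 1: full-face live/attack classifier (toy stand-in)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(64, 2)  # live vs. attack logits

    def forward(self, x):
        fmap = self.features(x)            # (B, 64, H/4, W/4) feature maps
        pooled = fmap.mean(dim=(2, 3))     # global average pool
        return self.head(pooled), fmap

def grad_cam_region_scores(model, image, grid=3, target_class=1):
    """Grad-CAM heatmap pooled into a grid x grid set of region scores."""
    logits, fmap = model(image)
    fmap.retain_grad()                     # keep gradients of the feature maps
    logits[:, target_class].sum().backward()
    weights = fmap.grad.mean(dim=(2, 3), keepdim=True)    # channel importance
    cam = F.relu((weights * fmap).sum(dim=1, keepdim=True))
    cam = F.adaptive_avg_pool2d(cam, grid)                # per-region attention
    return cam.flatten(1)                                 # (B, grid*grid)

class PatchSubnet(nn.Module):
    """Phase 2: one region-specific subnetwork (toy stand-in)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
        )

    def forward(self, patch):
        return self.net(patch)

def crop_region(image, idx, grid=3):
    """Crop grid cell `idx` out of a (B, 3, H, W) face image."""
    _, _, H, W = image.shape
    r, c = idx // grid, idx % grid
    h, w = H // grid, W // grid
    return image[:, :, r * h:(r + 1) * h, c * w:(c + 1) * w]

if __name__ == "__main__":
    torch.manual_seed(0)
    face = torch.rand(1, 3, 96, 96)
    scores = grad_cam_region_scores(FullFaceNet(), face)  # phase 1 attention
    top_regions = scores.topk(k=3, dim=1).indices[0]      # most suspicious cells
    subnets = nn.ModuleList(PatchSubnet() for _ in range(9))
    patch_logits = [subnets[i](crop_region(face, i.item())) for i in top_regions]
    print("fused live/attack logits:", torch.stack(patch_logits).mean(dim=0))
```

Grad-CAM here weights the final feature maps by their gradients with respect to the attack logit, so the pooled heatmap indicates which face regions drove the spoof decision before any patch is cropped.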

The team also constructed a new dataset of live and disguise-makeup faces collected under real-world conditions, covering variations in subjects, environments, and disguise materials. Experimental results demonstrate strong generalization: 8.97% ACER and 9.76% EER on the collected dataset, and, on SIW-Mv2, 0% ACER on the Obfuscation and Impersonation attack types and 1.34% on Cosmetic makeup attacks. The proposed method consistently outperforms prior work while maintaining robust performance across other spoof types, addressing a critical vulnerability in facial recognition systems.
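For context, ACER is the ISO/IEC 30107-3 Average Classification Error Rate, the mean of APCER (attack presentations misclassified as live) and BPCER (bona fide presentations misclassified as attacks); EER is the operating point at which false acceptance and false rejection rates are equal. A minimal sketch with hypothetical counts shows why 0% ACER is the strongest possible result:

```python
# ISO/IEC 30107-3 metrics; the counts below are hypothetical, for illustration.
def apcer(attacks_accepted: int, attacks_total: int) -> float:
    """Attack Presentation Classification Error Rate: attacks passed as live."""
    return attacks_accepted / attacks_total

def bpcer(live_rejected: int, live_total: int) -> float:
    """Bona fide Presentation Classification Error Rate: live faces rejected."""
    return live_rejected / live_total

def acer(attacks_accepted, attacks_total, live_rejected, live_total) -> float:
    """Average Classification Error Rate: mean of APCER and BPCER."""
    return (apcer(attacks_accepted, attacks_total)
            + bpcer(live_rejected, live_total)) / 2

# 0% ACER means no attack was accepted AND no live face was rejected:
print(acer(attacks_accepted=0, attacks_total=500,
           live_rejected=0, live_total=500))   # -> 0.0
```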

Key Points
  • Two-phase framework uses attention-guided patch-based analysis for fine-grained disguise detection
  • Achieves 0% ACER on SIW-Mv2 obfuscation and impersonation attacks
  • New real-world dataset covers diverse subjects, environments, and disguise materials

Why It Matters

Enhances facial recognition security against sophisticated disguise attacks, critical for surveillance and authentication systems.