Viral 'Three-Finger Test' Emerges for Deepfake Detection in Video Calls
A simple trick of asking callers to hold up three fingers is going viral as a real-time deepfake detector for video calls.
A simple, viral technique dubbed the 'three-finger test' is being shared online as a DIY method to detect deepfake scams during live video calls. Popularized by the social media accounts of cybersecurity firms such as Huntress Labs, the method instructs users to ask the person on the call to hold three fingers in front of their face. The logic is that this occlusion can disrupt the real-time AI-generated overlay, potentially causing it to glitch and reveal the scammer's real face underneath. The trend highlights growing public concern as deepfake technology moves from entertainment into sophisticated financial fraud.
However, cybersecurity experts caution against relying on this method. Ben Colman, CEO of deepfake detection firm Reality Defender, notes that while such visual disruption tricks were more effective a couple of years ago, the underlying generative AI models have since evolved. Modern deepfake systems can adapt in real time to occlusions and movements, rendering the finger test, along with similar checks such as asking someone to turn their head, increasingly unreliable. Manny Ahmed, CEO of OpenOrigins, argues that the false confidence these outdated methods inspire is arguably more dangerous than having no check at all.
The viral spread of this test is a double-edged signal. On one hand, it shows a proactive public seeking tools to combat AI-powered scams. On the other, its declining effectiveness is a stark indicator of the terrifying pace of AI advancement. The episode underscores that while public awareness is crucial, combating sophisticated deepfakes requires continuous investment in professional detection tools and updated security protocols, not just viral life hacks.
- The 'three-finger test' asks video call participants to hold three fingers in front of their face to disrupt and glitch a potential AI deepfake overlay.
- Experts from Reality Defender and OpenOrigins warn the method is no longer reliable, as modern AI models can adapt to such disruptions in real time.
- The trend's viral nature highlights public fear of deepfake scams but also the rapid advancement of generative AI technology.
Why It Matters
As deepfake scams target professionals over video calls, understanding the limits of simple detection tricks is critical for security.