AI Safety

Evaluating AI-enabled deception vulnerability among Sub-Saharan African migrants

New research shows prior targeting is the strongest predictor of falling for AI-generated deception.

Deep Dive

A new empirical study by researcher Deborah Oluwasanya, published on arXiv, investigates the specific vulnerability of Sub-Saharan African migrants to AI-enabled deception such as scams. The research, based on survey data from 31 professionals and migrants across Europe and North America, employed a hybrid Structural Equation Model (SEM) and Multiple Linear Regression (MLR) analysis. It tested the hypothesis that the ability to distinguish human- from AI-generated content is directly linked to vulnerability. The core finding: the strongest predictor of susceptibility was a history of prior targeting, suggesting scammers make calculated, repeat attempts on vulnerable populations.

Crucially, the study identified two significant protective factors that can lower this vulnerability: an individual's confidence in their own ability to identify AI content, and the behavioral habit of exerting high effort to verify information. Interestingly, other transnational factors such as time spent abroad or sending international remittances had only small, statistically non-significant effects. This 22-page study, complete with raw data and R scripts, shifts the focus from broad demographics to specific cognitive and behavioral traits, namely AI literacy and verification habits, as the primary shields against next-generation digital threats.
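The paper's own analysis was done in R (the scripts ship with the preprint). As a rough illustration of what the MLR step looks like, here is a minimal sketch in Python on synthetic data. The variable names, coefficient values, and effect sizes below are assumptions for illustration only; nothing here reproduces the study's actual data or results.

```python
import numpy as np

# Hypothetical MLR sketch: regress a vulnerability score on three predictors
# mirroring the study's reported directions of effect. All values synthetic.
rng = np.random.default_rng(0)
n = 31  # same sample size as the study's survey

# Synthetic, standardized predictors (names are assumptions, not the paper's)
prior_targeting = rng.normal(size=n)
ai_confidence = rng.normal(size=n)
verification_effort = rng.normal(size=n)

# Synthetic outcome: prior targeting raises vulnerability; confidence and
# verification effort lower it (matching the directions the study reports).
vulnerability = (0.8 * prior_targeting
                 - 0.4 * ai_confidence
                 - 0.3 * verification_effort
                 + rng.normal(scale=0.2, size=n))

# Design matrix with an intercept column; ordinary least squares via numpy
X = np.column_stack([np.ones(n), prior_targeting,
                     ai_confidence, verification_effort])
coef, *_ = np.linalg.lstsq(X, vulnerability, rcond=None)
print(coef)  # intercept, then one slope per predictor
```

Even at n = 31, the fitted slopes recover the signs of the synthetic effects: positive for prior targeting, negative for the two protective factors, which is the qualitative pattern the study highlights.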

Key Points
  • Prior exposure to targeting is the strongest indicator of vulnerability to AI-enabled scams, according to SEM/MLR analysis of 31 survey respondents.
  • Confidence in identifying AI content and high verification effort are key protective factors that can lower deception risk.
  • Transnational factors such as time abroad or sending remittances had minimal impact on vulnerability, highlighting the importance of AI-specific literacy.

Why It Matters

This research provides a data-driven framework for designing targeted AI literacy programs to protect vulnerable communities from evolving digital threats.