Culturally Aware GenAI Risks for Youth: Perspectives from Youth, Parents, and Teachers in a Non-Western Context
Saudi study reveals GenAI risks from cultural norms and shared accounts.
A new study on arXiv (arXiv:2604.26494) from researchers Aljawharah Alzahrani, Tory Park, and Tanusree Sharma examines the culturally specific risks of generative AI for youth in non-Western contexts, focusing on Saudi Arabia. The team analyzed 736 Reddit posts and 1,262 X (Twitter) posts, and conducted interviews with 31 Saudi participants (8 youth aged 7-17, 13 parents, and 10 teachers). Their mixed-methods approach reveals that GenAI interactions often conflict with deeply rooted cultural expectations of modesty, privacy, and family honor.
Key risks identified include youth sharing personal and family information with GenAI tools for emotional support—a practice that clashes with communal norms. Additionally, socioeconomic pressures like cost-saving lead to shared GenAI accounts among family members or even strangers, increasing data exposure. The study provides design implications for parental controls that respect cultural values, moving beyond Western-centric safety models. This work lays groundwork for context-sensitive, culturally aware AI governance in non-Western societies.
- Study analyzed 1,998 posts (736 Reddit, 1,262 X) and interviewed 31 Saudi participants (8 youth, 13 parents, 10 teachers).
- Youth disclose personal/family info to GenAI for emotional support, conflicting with cultural modesty and honor norms.
- Shared GenAI accounts due to cost-saving practices increase privacy risks within families and among strangers.
Why It Matters
Highlights the need for GenAI safety tools that respect local cultural norms rather than defaulting to Western values.