Privacy and Safety Experiences and Concerns of U.S. Women Using Generative AI for Seeking Sexual and Reproductive Health Information
18 women disclosed intimate details to chatbots, accepting risks for reproductive health info.
A new study led by researchers from King's College London and Carnegie Mellon University reveals the complex privacy trade-offs U.S. women make when using generative AI for sensitive health information. The team, including Ina Kaleva and Jose Such, conducted semi-structured interviews with 18 participants from both abortion-restrictive and non-restrictive states. They found that since the overturning of Roe v. Wade, individuals have increasingly turned to chatbots like ChatGPT for sexual and reproductive health (SRH) guidance, drawn by the tools' perceived utility, accessibility, and anthropomorphic nature.
Participants reported disclosing highly sensitive personal details to these AI systems despite identifying significant privacy risks: excessive data collection by companies, potential government surveillance, profiling, and the use of their intimate conversations for model training or data commodification. Most accepted these risks in exchange for the information they needed, though queries related to abortion elicited heightened safety concerns. Few users employed protective strategies beyond minimizing what they disclosed or deleting chat histories.
Based on these findings, the researchers, who submitted their paper to the CHI conference, offer design and policy recommendations: they advocate health-specific AI features with stronger built-in privacy safeguards and more robust content moderation practices. The study underscores a pressing need to shift from a purely model-centered approach to AI development toward one that prioritizes user safety and privacy, especially for vulnerable populations seeking essential health information in a changing legal landscape.
- 18 U.S. women interviewed disclosed sensitive SRH details to AI chatbots like ChatGPT, accepting privacy risks for utility.
- Key concerns included data collection for training, government surveillance, and profiling, with abortion queries raising the highest safety alarms.
- The study calls for new health-specific AI features and stronger moderation to protect users in post-Roe v. Wade America.
Why It Matters
Highlights a critical privacy gap as vulnerable users trade personal data for essential health info, demanding safer AI design.