Understanding User Perceptions of Human-centered AI-Enhanced Support Group Formation in Online Healthcare Communities
New research shows 91.5% of surveyed patients and caregivers would join AI-matched support groups, but only with strict privacy controls.
A new study from researchers at the University of Maryland, Baltimore County reveals a powerful yet cautious public appetite for AI-driven support in healthcare communities. The paper, titled "Understanding User Perceptions of Human-centered AI-Enhanced Support Group Formation in Online Healthcare Communities," surveyed 165 participants from online health communities (OHCs) to gauge interest in algorithmically personalized peer groups. The results were striking: 91.5% of respondents said they would join a simulated AI-matched support group, and the perceived value was rated very high at a mean of 4.55 out of 5. The importance participants placed on accurate peer matching showed a strong positive correlation with this perceived value (ρ=0.764).
However, the research team, led by Pronob Kumar Barman, James R. Foulds, and Tera L. Reynolds, found that this enthusiasm is heavily conditional. Qualitative analysis revealed a clear set of non-negotiable requirements for user acceptance: participants consistently demanded robust data security, transparency in how the AI algorithms function, meaningful human oversight of the matching process, and direct user control over their personal health data. The study concludes that while personalized support groups hold immense potential value for people managing chronic conditions, adoption will stall unless developers and platform operators proactively address these fundamental concerns around trust, privacy, and algorithmic governance.
- 91.5% of surveyed patients and caregivers would join an AI-personalized support group, showing massive latent demand.
- Perceived value of the AI-matched groups was very high (mean 4.55/5), strongly tied to quality of peer matching (ρ=0.764).
- Acceptance is conditional on four pillars: data security, algorithmic transparency, human oversight, and user data control.
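For readers curious about the ρ = 0.764 statistic above: the symbol ρ conventionally denotes Spearman's rank correlation, which suits ordinal Likert-style survey responses, though the paper's exact method is not restated here. As a minimal pure-Python sketch (using made-up example ratings, not the study's data), such a correlation can be computed by ranking both variables, with ties given average ranks, and then taking the Pearson correlation of the ranks:

```python
def _ranks(values):
    """Return 1-based ranks, averaging ranks across tied values (midrank method)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based rank positions i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank-transformed data.
    Assumes neither variable is constant (nonzero rank variance)."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical 1-5 Likert responses for ten respondents (illustrative only):
importance = [5, 4, 5, 3, 2, 4, 5, 1, 3, 4]  # importance of accurate matching
value      = [5, 4, 4, 3, 2, 5, 5, 2, 3, 4]  # perceived value of the group
print(round(spearman_rho(importance, value), 3))
```

On this toy data the two ratings rise and fall together, so the statistic comes out strongly positive, in the same spirit as the study's reported correlation.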
Why It Matters
This research provides a blueprint for building trusted AI health tools that patients will actually use, moving beyond pure capability to address adoption barriers.