Trust as a Situated User State in Social LLM-Based Chatbots: A Longitudinal Study of Snapchat's My AI
27 users tracked for 4 weeks reveal that trust is a dynamic negotiation, not a static evaluation
A new longitudinal study from researchers at the University of Gothenburg (Landerberg, Flatmo, and Said) tracked 27 users of Snapchat's My AI chatbot over four weeks to understand how trust develops in social LLM-based chatbots. The study, accepted to the 34th ACM International Conference on User Modeling, Adaptation and Personalization (UMAP'26), reveals that trust is not a static evaluation but a dynamic user state shaped by multiple factors including perceived ability, conversational behavior, human-likeness, transparency, privacy concerns, and trust in the host platform.
The research shows that trust evolves continuously through interaction. Users adapt their expectations, refine their prompting strategies, and actively regulate how and when they rely on the system. While conversational fluency initially supports engagement, excessive anthropomorphism and limited transparency can erode trust over time. The authors synthesize these findings into a conceptual model that frames trust as a situated user state, with implications for designing more adaptive and human-centered conversational agents. This challenges the assumption that chatbots simply need to be more human-like to build lasting trust.
- Trust in Snapchat's My AI is shaped by six factors: perceived ability, conversational behavior, human-likeness, transparency, privacy concerns, and trust in the host platform
- Trust evolves continuously through interaction as users adapt their expectations, refine their prompting strategies, and regulate when they rely on the system
- Excessive anthropomorphism and limited transparency can undermine trust over time
Why It Matters
Designers of conversational agents must balance human-likeness with transparency to sustain user trust over the long term, rather than assuming that more human-like behavior alone builds lasting trust.