Can Virtual Agents Care? Designing an Empathetic and Personalized LLM-Driven Conversational Agent
A cross-cultural study shows the agent outperforms standard LLMs in empathy and coherence.
Researchers from Vietnam and Australia have introduced a virtual agent framework designed to provide empathetic, personalized, and reliable wellbeing support. The system combines a retrieval-augmented generation (RAG) architecture, structured memory, and multimodal interaction to overcome common limitations of large language models (LLMs) in mental health contexts, such as a lack of personalization, empathy, and factual grounding. Objective benchmarks demonstrated improved retrieval and response quality, particularly for smaller models.
In a cross-cultural study with university students from Vietnam and Australia, the system outperformed LLM-only baselines in coherence, perceived accuracy, and empathy. Most participants expressed a clear preference for the proposed approach. The research was accepted for presentation at SCI-2026 and is available on arXiv.
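The core retrieval-augmented idea is that the agent grounds its replies in retrieved notes rather than relying on the LLM alone. The sketch below is a minimal, illustrative version: the bag-of-words similarity, the example memory store, and the prompt template are placeholder assumptions for exposition, not the authors' implementation, which would use learned embeddings and a real LLM.

```python
# Minimal RAG sketch: rank stored notes by similarity to the user's query,
# then build a prompt that grounds the model's reply in the top match.
# Bag-of-words cosine similarity stands in for a real embedding model.
import math
import re
from collections import Counter


def embed(text: str) -> Counter:
    """Toy 'embedding': lowercase word counts (placeholder for an encoder)."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, store: list[str], k: int = 1) -> list[str]:
    """Return the k store entries most similar to the query."""
    q = embed(query)
    return sorted(store, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]


def build_prompt(query: str, store: list[str]) -> str:
    """Assemble a grounded prompt; a real system would send this to an LLM."""
    context = "\n".join(retrieve(query, store))
    return f"Context:\n{context}\n\nUser: {query}\nAssistant:"


# Hypothetical structured-memory entries for one user.
store = [
    "User prefers short, encouraging check-ins in the evening.",
    "Campus counselling service is open weekdays 9am to 5pm.",
]
print(build_prompt("When is the counselling service open?", store))
```

Because the prompt carries the retrieved note verbatim, the generating model can answer from recorded facts instead of inventing them, which is the factual-grounding benefit the framework targets.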
- The framework uses RAG, structured memory, and multimodal interaction for personalized support.
- Objective benchmarks showed improved retrieval and response quality, especially for smaller models.
- A cross-cultural study with students from Vietnam and Australia found the system outperformed LLM-only baselines in empathy and coherence.
Why It Matters
The framework offers a scalable, empathetic alternative for mental health support, addressing the personalization and factual-accuracy gaps of general-purpose LLMs.