"What Are You Doing?": Effects of Intermediate Feedback from Agentic LLM In-Car Assistants During Multi-Step Processing
A new study shows that explaining what an AI is doing during multi-step tasks improves user experience by 40%.
Researchers from the University of Augsburg and LMU Munich published a paper on agentic LLM in-car assistants. Their study (N=45) found that providing intermediate feedback on planned steps and results, rather than operating silently, significantly improved perceived speed, trust, and user experience while reducing cognitive load. The findings suggest that AI assistants should start with high transparency to build trust, then adapt their verbosity to task complexity and user context.
Why It Matters
The study offers a UX blueprint for designing trustworthy, next-generation AI agents in cars, homes, and workplaces.