A Causal Approach to Predicting and Improving Human Perceptions of Social Navigation Robots
New causal AI model improves how robots move around people, boosting perceived competence by 83%.
A research team from Yale University and the Technical University of Munich has developed a novel AI framework that helps robots navigate social spaces more effectively by understanding human perception. The system uses a Causal Bayesian Network to predict whether humans will perceive a robot as competent and understand its intentions during navigation. Unlike traditional associative models, this causal approach allows the robot to explain its reasoning and identify specific, controllable factors in its behavior that can be adjusted to improve human perception.
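The paper does not publish its implementation here, but the idea can be illustrated with a minimal sketch: a discrete Causal Bayesian Network that maps hypothetical controllable motion factors (the factor names and probabilities below are illustrative assumptions, not the authors' variables) to a binary "perceived competence" judgment, written with the pgmpy library.

```python
# Minimal sketch, assuming hypothetical factors "smooth_path" and "legible_goal";
# not the authors' model structure or parameters.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Hypothetical structure: path smoothness and goal legibility both
# influence whether an observer judges the robot competent.
model = BayesianNetwork([("smooth_path", "competent"),
                         ("legible_goal", "competent")])

cpd_smooth = TabularCPD("smooth_path", 2, [[0.5], [0.5]])
cpd_legible = TabularCPD("legible_goal", 2, [[0.5], [0.5]])
# P(competent | smooth_path, legible_goal); illustrative numbers only.
cpd_comp = TabularCPD(
    "competent", 2,
    [[0.9, 0.6, 0.5, 0.1],   # P(competent = 0 | parents)
     [0.1, 0.4, 0.5, 0.9]],  # P(competent = 1 | parents)
    evidence=["smooth_path", "legible_goal"], evidence_card=[2, 2])

model.add_cpds(cpd_smooth, cpd_legible, cpd_comp)
assert model.check_model()

# Predict the perception for one observed behavior.
infer = VariableElimination(model)
print(infer.query(["competent"], evidence={"smooth_path": 0, "legible_goal": 1}))
```

Because the graph encodes causal (not merely associative) links, the same structure supports interventions on the controllable factors, which is what makes the behavior-improvement step described next possible.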
The model achieved strong predictive performance, with F1-scores of 0.78 for competence and 0.75 for intent on a binary scale, matching or exceeding state-of-the-art methods. Crucially, the team introduced a combinatorial search method, guided by the causal model, to generate improved robot motions. In an online evaluation where users rated robot behaviors on a 5-point Likert scale, this method produced a statistically significant 83% increase in the perceived competence of low-competence robot behaviors. This means robots that previously appeared confused or incompetent could be made to seem markedly more capable through algorithmic adjustments to their movement patterns.
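A hedged sketch of such a search, under the assumption that the behavior space factors into a small set of discrete controllable variables scored by the causal model's predicted probability of perceived competence (the function and factor names below are placeholders, not the paper's algorithm):

```python
# Exhaustive combinatorial search over hypothetical controllable factors;
# a simplified stand-in for the paper's causal-model-guided search.
from itertools import product
from typing import Callable, Dict, Iterable, Tuple

def best_behavior(factors: Dict[str, Iterable[int]],
                  p_competent: Callable[[Dict[str, int]], float]
                  ) -> Tuple[Dict[str, int], float]:
    """Enumerate all factor combinations and keep the highest-scoring one."""
    best, best_p = None, -1.0
    names = list(factors)
    for values in product(*(factors[n] for n in names)):
        assignment = dict(zip(names, values))
        p = p_competent(assignment)          # query the causal model here
        if p > best_p:
            best, best_p = assignment, p
    return best, best_p

# Illustrative stand-in for a CBN query: rewards smooth, legible motion.
toy_model = lambda a: 0.2 + 0.4 * a["smooth_path"] + 0.3 * a["legible_goal"]

print(best_behavior({"smooth_path": [0, 1], "legible_goal": [0, 1]}, toy_model))
```

With only a handful of binary factors the exhaustive enumeration stays cheap; larger behavior spaces would need pruning or heuristics, a detail not covered in this summary.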
This research addresses two critical challenges in human-robot interaction: learning from limited data and maintaining interpretability for safe deployment. By focusing on causal relationships rather than correlations, the system can generate counterfactual explanations (e.g., 'If I had moved differently, you would have understood my goal') and pinpoint exact navigation adjustments. This represents a significant step toward robots that can not only navigate physical spaces but also manage the social dimensions of shared environments.
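To make the counterfactual idea concrete, here is a simplified sketch that contrasts the model's prediction for the executed behavior with its prediction under an alternative and phrases the difference as an explanation. This is an interventional contrast, a loose approximation of a full counterfactual query, and the factor names and scorer are assumptions for illustration.

```python
# Contrastive explanation sketch: "if I had moved differently, you would
# have understood my goal" rendered from two causal-model predictions.
from typing import Callable, Dict

def explain(executed: Dict[str, int], alternative: Dict[str, int],
            p_understood: Callable[[Dict[str, int]], float]) -> str:
    p_actual = p_understood(executed)
    p_alt = p_understood(alternative)
    changed = [k for k in executed if executed[k] != alternative[k]]
    if p_alt > p_actual:
        return (f"If I had changed {', '.join(changed)}, the chance you "
                f"understood my goal would rise from {p_actual:.2f} to {p_alt:.2f}.")
    return "No better alternative found among the considered factors."

# Illustrative scorer; in practice this would be a query on the fitted CBN.
toy_model = lambda a: 0.25 + 0.5 * a["legible_goal"] + 0.15 * a["smooth_path"]
print(explain({"smooth_path": 1, "legible_goal": 0},
              {"smooth_path": 1, "legible_goal": 1}, toy_model))
```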
- Uses a Causal Bayesian Network to predict human perceptions of robot competence and intent with F1-scores of 0.78 and 0.75.
- A novel combinatorial search method, guided by the causal model, improved the perceived competence of poor robot behaviors by 83% in user tests.
- Provides interpretable, causal reasoning so robots can explain their actions and identify specific behaviors to adjust, moving beyond black-box predictions.
Why It Matters
Enables safer, more trustworthy deployment of service and delivery robots in hospitals, offices, and public spaces by making them appear more competent to humans.