Robotics

Trust in Autonomous Human--Robot Collaboration: Effects of Responsive Interaction Policies

Autonomous social robots using responsive dialogue policies build significantly higher trust during collaborative tasks.

Deep Dive

Researchers Shauna Heron and Meng Cheng Lau have published a new study (arXiv:2603.00154) investigating a critical factor for real-world robotics: how to build human trust during fully autonomous collaboration. The pilot study moved beyond scripted or Wizard-of-Oz-controlled interactions, testing a mobile social robot that autonomously managed spoken-language dialogue, affect inference, and task progression during a multi-stage collaborative task with human participants. The key finding was that a 'responsive' interaction policy, in which the robot proactively adapted its dialogue and assistance based on inferred interaction state, resulted in significantly higher post-interaction trust than a 'neutral, reactive' policy that provided only direct, task-relevant responses.
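The contrast between the two policies can be illustrated with a minimal sketch. This is not the authors' implementation: the state fields (inferred confusion, task progress) and the action names (`offer_clarification`, `acknowledge_progress`) are invented here purely to show how a responsive policy can act proactively on inferred state while a neutral reactive policy speaks only when directly addressed.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InteractionState:
    """Hypothetical inferred interaction state (illustrative, not from the paper)."""
    confusion: float   # 0..1, inferred from affect and speech cues
    progress: float    # 0..1, fraction of the collaborative task completed
    user_asked: bool   # did the participant just make a direct request?

def neutral_policy(state: InteractionState) -> Optional[str]:
    """Reactive baseline: respond only to direct, task-relevant requests."""
    if state.user_asked:
        return "answer_request"
    return None  # otherwise remain silent

def responsive_policy(state: InteractionState) -> Optional[str]:
    """Responsive policy: proactively adapt dialogue to the inferred state."""
    if state.user_asked:
        return "answer_request"
    if state.confusion > 0.6:
        return "offer_clarification"   # proactive assistance when the user seems stuck
    if state.progress > 0.8:
        return "acknowledge_progress"  # social feedback as the task nears completion
    return None
```

Under this sketch, both policies answer direct requests identically; the difference, and per the study the source of the trust gain, is the responsive policy's proactive turns, which depend entirely on the inferred state being reliable. That dependence mirrors the paper's finding that the trust advantage attenuates as language-mediated interaction degrades.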

The technical details reveal important nuances for AI and robotics developers. While the responsive policy boosted trust, this advantage was contingent on viable communication; as language-mediated interaction degraded, the trust benefit attenuated. Sensitivity analyses showed that affective and experiential components of trust were more sensitive to communication breakdowns than evaluative judgments of reliability. This underscores that trust is not just about task success but about the quality of the interactive process itself. The study's major implication is that designing for trust requires evaluating systems under true autonomy, as process-level dynamics are key to building calibrated, sustainable human-robot partnerships.

Key Points
  • Responsive interaction policy (proactive adaptation) built significantly higher trust than a neutral reactive policy.
  • Trust advantage was contingent on viable communication, attenuating as language interaction degraded.
  • Affective/experiential trust components were more sensitive to breakdowns than judgments of reliability.

Why It Matters

For collaborative robots to succeed in real workplaces, their AI must read and adapt to human state: responsiveness, not just task competence, is what builds trust.