Impact of Different Failures on a Robot's Perceived Reliability
A new study reveals which robot failures damage human trust most—and how to recover it.
A research team from Cornell University and other institutions has published a new study, 'Impact of Different Failures on a Robot's Perceived Reliability,' accepted to the 2026 IEEE International Conference on Robotics and Automation (ICRA). The paper investigates how specific types of robotic failures affect human trust, measured as Perceived Reliability (PR), and how that trust can be recovered. In a preregistered online experiment, participants watched videos of a robot performing a pick-and-place task and then bet real money on its future success versus a coin toss.
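The betting setup described above can be sketched as a simple decision rule. This is an illustrative assumption, not the paper's actual protocol or code: a participant who must bet on the robot versus a fair coin toss should prefer the robot exactly when their subjective probability of success exceeds 0.5, so the group's bet rate gives a coarse proxy for perceived reliability.

```python
# Hypothetical sketch of the betting-based trust measure.
# Function names and the decision rule are assumptions for illustration,
# not the study's published methodology.

def bet_on_robot(subjective_p_success: float) -> bool:
    """A rational bettor prefers the robot iff P(success) > 0.5 (coin-toss baseline)."""
    return subjective_p_success > 0.5

def robot_bet_rate(subjective_probs: list[float]) -> float:
    """Fraction of participants whose bets favor the robot over the coin,
    a coarse group-level proxy for perceived reliability (PR)."""
    bets = [bet_on_robot(p) for p in subjective_probs]
    return sum(bets) / len(bets)

# Illustrative (invented) data: trust stays higher after a 'mistake'
# than after a 'slip'.
after_mistake = [0.8, 0.7, 0.6, 0.55, 0.4]
after_slip = [0.6, 0.45, 0.4, 0.3, 0.2]
print(robot_bet_rate(after_mistake))  # 0.8
print(robot_bet_rate(after_slip))     # 0.2
```

The appeal of a monetary bet over a questionnaire is that it makes the trust judgment costly, so the measured bet rate reflects a real decision rather than a self-report.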
The study tested five failure types: manipulation 'slips' (e.g., dropping an object), 'lapses' (the robot freezing mid-task), and three kinds of 'mistakes' (e.g., picking or placing the wrong object). The key finding was that not all failures are equal: mistakes damaged PR significantly less than slips or lapses, and some mistakes were even perceived as successes. Crucially, a single successful execution immediately after a failure restored PR to the level it would have reached with no failure at all, suggesting trust can be rebuilt without complex social apologies from the robot.
These findings give roboticists and HRI designers actionable data. They pinpoint the failure modes, specifically slips and lapses, that most urgently need robust technical fixes or explicit repair strategies during human-robot interaction. The work shifts focus from treating all failures equally to prioritizing reliability repairs by their actual psychological impact on human operators, a critical step for deploying robots in collaborative environments.
- Mistakes (wrong picks/places) hurt perceived reliability less than slips (drops) or lapses (freezes).
- A single success after any failure fully recovers trust, matching pre-failure levels.
- The study used a real monetary betting system to quantitatively measure human trust in robots.
Why It Matters
Helps engineers prioritize fixing the robot failures that most damage human trust in collaborative workplaces.