Dance2Hesitate: A Multi-Modal Dataset of Dancer-Taught Hesitancy for Understandable Robot Motion
A new open-source dataset uses 220+ dancer-taught trajectories to make robot motion more understandable and safer.
A team from the University of Colorado Boulder and Yale has open-sourced Dance2Hesitate, a novel dataset designed to teach robots how to express human-like hesitancy. The core challenge in human-robot collaboration is that a robot's rigid, confident motions can be confusing or unsafe, because humans rely on subtle cues of uncertainty to coordinate with a partner. The dataset addresses this by capturing 220+ unique motion trajectories in which professional dancers and a robot arm (a Franka Emika Panda) expressed three graded levels of hesitancy (slight, significant, and extreme) across two contexts: arm reaches toward a Jenga tower and whole-body motion in free space.
The dataset is multi-modal, containing synchronized RGB-D motion capture of the dancers' upper limbs and whole bodies, alongside direct kinesthetic teaching recordings from the robot. This allows for reproducible benchmarking across both human and robot modalities. By providing this rich, context-specific data (focused on a manipulator approaching a tower and on anthropomorphic motion in free space), the researchers aim to address the generalization problem, in which a hesitant motion that works for one robot embodiment fails on another. The work was accepted at the ACM/IEEE HRI 2026 workshop on Designing Transparent and Understandable Robots.
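The announcement does not specify the release format, so the following Python sketch is only a hypothetical illustration of how such multi-modal trajectories might be organized and loaded. The directory layout (`root/<embodiment>/<hesitancy>/*.npz`), the array names (`timestamps`, `poses`), and the `Trajectory` container are all assumptions for the sake of the example, not the authors' published API.

```python
"""Hypothetical loader for Dance2Hesitate-style trajectories.

Assumed (not confirmed) layout: root/<embodiment>/<hesitancy>/<id>.npz,
where <embodiment> is whole_body / upper_limb / robot, <hesitancy> is
slight / significant / extreme, and each .npz stores a shared-clock
"timestamps" array of shape (T,) and a "poses" array of shape (T, D).
"""
from dataclasses import dataclass
from pathlib import Path

import numpy as np


@dataclass
class Trajectory:
    embodiment: str          # whole_body, upper_limb, or robot
    hesitancy: str           # slight, significant, or extreme
    timestamps: np.ndarray   # (T,) seconds on a common clock
    poses: np.ndarray        # (T, D) joint angles or 3-D marker positions


def load_trajectories(root: str) -> list[Trajectory]:
    """Walk the assumed directory tree and collect every trajectory."""
    trajectories = []
    for path in sorted(Path(root).glob("*/*/*.npz")):
        embodiment, hesitancy = path.parts[-3], path.parts[-2]
        data = np.load(path)
        trajectories.append(
            Trajectory(embodiment, hesitancy, data["timestamps"], data["poses"])
        )
    return trajectories
```

With a loader along these lines, human and robot trajectories that share a hesitancy label can be paired directly for side-by-side analysis across modalities.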
- Contains 220+ motion trajectories: 70 whole-body and 84 upper-limb demonstrations from dancers, plus 66 robot kinesthetic demos, spanning three hesitancy levels (see the cross-embodiment sketch after this list).
- Focuses on specific context-embodiment pairs: a robot arm/human arm approaching a Jenga tower and full-body motion in free space.
- Aims to improve human-robot collaboration by making robot uncertainty legible, shaping human attention, coordination strategies, and safety judgments.
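As a concrete, entirely hypothetical example of the kind of cross-embodiment benchmark this composition could support, the sketch below trains a hesitancy-level classifier on simple kinematic features from the human upper-limb trajectories and evaluates it on the robot demonstrations. The feature choices (speed statistics and a pause ratio) and the logistic-regression model are illustrative assumptions, not the authors' protocol.

```python
"""Illustrative cross-embodiment check, not the authors' benchmark.

Each trajectory is reduced to a few kinematic summary features; a classifier
trained on human upper-limb motions is then scored on robot demonstrations
to probe how well hesitancy cues transfer across embodiments.
"""
import numpy as np
from sklearn.linear_model import LogisticRegression


def kinematic_features(poses: np.ndarray, timestamps: np.ndarray) -> np.ndarray:
    """Summarize one (T, D) trajectory: speed statistics plus a pause ratio."""
    dt = np.diff(timestamps)[:, None]                    # (T-1, 1) frame intervals
    speed = np.linalg.norm(np.diff(poses, axis=0) / dt, axis=1)
    pause_ratio = np.mean(speed < 0.05 * speed.max())    # fraction of near-still frames
    return np.array([speed.mean(), speed.std(), speed.max(), pause_ratio])


def cross_embodiment_accuracy(human_trajs, robot_trajs):
    """Each input: list of (poses, timestamps, hesitancy_label) tuples."""
    X_train = np.stack([kinematic_features(p, t) for p, t, _ in human_trajs])
    y_train = [label for _, _, label in human_trajs]
    X_test = np.stack([kinematic_features(p, t) for p, t, _ in robot_trajs])
    y_test = [label for _, _, label in robot_trajs]
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return clf.score(X_test, y_test)  # accuracy on the unseen embodiment
```

A gap between within-embodiment and cross-embodiment accuracy in a setup like this is exactly the kind of generalization failure the dataset is meant to expose.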
Why It Matters
Enables safer, more intuitive human-robot teamwork in factories, healthcare, and homes by making robot uncertainty transparent.