Research & Papers

Simulating Infant First-Person Sensorimotor Experience via Motion Retargeting from Babies to Humanoids

A single video now recreates an infant's sensorimotor experience on humanoid robots.

Deep Dive

A new paper from an international research team introduces a method to simulate an infant's first-person sensorimotor experience by retargeting the infant's movements onto humanoid platforms. Using only a single video, the framework extracts the infant's skeletal structure and full 3D pose for each frame, then maps that motion onto the physical iCub robot and three virtual simulators. The result is a rich, multisensory data stream of joint angles, tactile feedback, and visual input, effectively letting researchers "feel" what a baby experiences during natural movement.
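
To make the per-frame mapping concrete, here is a minimal sketch of how estimated 3D keypoints could be turned into robot joint commands. The function names, keypoint layout, and the single elbow angle below are illustrative assumptions, not the paper's actual interface; the real framework retargets the full body and also streams touch and vision.

import numpy as np

# Hypothetical keypoint layout: one 3D position (meters) per named joint.
KEYPOINTS = ["shoulder_r", "elbow_r", "wrist_r"]

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle at b (radians) formed by the segments b->a and b->c."""
    u, v = a - b, c - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)
    return float(np.arccos(np.clip(cosang, -1.0, 1.0)))

def retarget_frame(keypoints: dict) -> dict:
    """Map infant keypoints to a toy subset of humanoid joint angles.

    Real retargeting would account for limb-length differences and the
    robot's joint limits; this only derives one elbow flexion angle.
    """
    elbow_flexion = joint_angle(
        keypoints["shoulder_r"], keypoints["elbow_r"], keypoints["wrist_r"]
    )
    return {"r_elbow": elbow_flexion}

def simulate(video_poses: list) -> list:
    """Run the loop over all frames, collecting proprioceptive commands."""
    return [retarget_frame(frame) for frame in video_poses]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic 10-frame "video": random but plausible 3D keypoints.
    fake_poses = [
        {name: rng.normal(scale=0.1, size=3) for name in KEYPOINTS}
        for _ in range(10)
    ]
    for t, cmd in enumerate(simulate(fake_poses)):
        print(t, {k: round(v, 3) for k, v in cmd.items()})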

On the best-matching embodiment, the retargeting achieves sub-centimeter accuracy, enabling detailed multimodal analysis of typical and atypical development. The authors highlight applications in developmental neuroscience, robotics training, and early screening for disorders like autism or cerebral palsy. The framework also automates behavioral annotation, reducing manual labor. The code is open-source, making the tool accessible to labs worldwide. This work bridges robotics and developmental science, offering a unique window into early human learning.
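
The reported sub-centimeter figure is presumably a position error between the infant's keypoints and the corresponding points on the retargeted embodiment. The sketch below shows one plausible way to compute such a metric, mean per-joint Euclidean error; the exact definition used in the paper may differ.

import numpy as np

def mean_position_error(infant_kpts: np.ndarray, robot_kpts: np.ndarray) -> float:
    """Average per-joint Euclidean error in meters.

    Both arrays have shape (frames, joints, 3).
    """
    return float(np.linalg.norm(infant_kpts - robot_kpts, axis=-1).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    infant = rng.normal(size=(100, 17, 3))                        # 100 frames, 17 joints
    robot = infant + rng.normal(scale=0.005, size=infant.shape)   # ~5 mm per-axis noise
    err = mean_position_error(infant, robot)
    print(f"mean error: {err * 100:.2f} cm")                      # sub-centimeter here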

Key Points
  • Framework extracts 3D infant pose from a single video and retargets it to the physical iCub robot and the virtual simulators pyCub, EMFANT, and MIMo.
  • Generates multimodal sensorimotor streams (proprioception, touch, and vision), with sub-centimeter retargeting accuracy on the best-matching embodiment.
  • Enables automated behavioral annotation and potential early detection of neurodevelopmental disorders like autism.

Why It Matters

Brings human-like sensorimotor experience into robotics, opening new paths for studying infant development and early diagnosis.