Agent Frameworks

HRI-SA: A Multimodal Dataset for Online Assessment of Human Situational Awareness during Remote Human-Robot Teaming

New multimodal dataset from 30 participants enables AI to detect situational awareness gaps with 91.5% recall.

Deep Dive

A research team from Monash University and CSIRO's Data61 has published HRI-SA, the first publicly available multimodal dataset designed to train AI systems to detect when human operators lose situational awareness (SA) while remotely controlling robots. The dataset captures comprehensive data from 30 participants engaged in a realistic search-and-rescue simulation, including eye-tracking metrics (gaze, pupil diameter), physiological biosignals, user interface interactions, and full robot sensor data. Crucially, the experiment included predefined events requiring operator assistance, allowing researchers to establish ground truth by measuring the latency between when help was needed and when the operator responded. This creates a benchmark for developing AI that can assess human SA in real time, a capability previously limited by disruptive or offline measurement techniques.
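The latency-based ground-truth scheme described above can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the event representation, the missing-response handling, and the 5-second threshold are all hypothetical choices.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AssistanceEvent:
    """A predefined event where the robot required operator help (illustrative)."""
    needed_at: float                 # seconds: when assistance became necessary
    responded_at: Optional[float]    # seconds: when the operator acted, or None if missed

def label_sa_gap(event: AssistanceEvent, latency_threshold: float = 5.0) -> int:
    """Label an SA gap (1) when the operator's response latency exceeds a
    threshold, or when no response occurred at all; otherwise 0.
    The threshold value here is a hypothetical placeholder."""
    if event.responded_at is None:
        return 1
    latency = event.responded_at - event.needed_at
    return int(latency > latency_threshold)

events = [AssistanceEvent(10.0, 12.5), AssistanceEvent(40.0, 48.0)]
print([label_sa_gap(e) for e in events])  # [0, 1]
```

A continuous labeling like this is what lets a classifier be trained online, rather than relying on interrupting the operator with questionnaires.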

To demonstrate the dataset's utility, the team trained standard machine learning models to classify perceptual SA latencies. Using only generic eye-tracking features, the models achieved a recall of 88.91% and an F1-score of 67.63% in a rigorous leave-one-group-out cross-validation, in which each model is tested on a participant it never saw during training. Performance improved significantly to 91.51% recall and an 80.38% F1-score when these features were fused with contextual data from the robot and task environment. This suggests that relatively simple, non-invasive sensors like eye-trackers can be highly effective for continuous SA assessment. The release of HRI-SA fills a critical gap in human-robot interaction (HRI) research, providing a standardized resource to develop AI 'co-pilots' that can monitor operator state and provide timely support during complex, high-stakes remote operations.
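The evaluation protocol above can be sketched with scikit-learn. Everything here is an assumption for illustration: the data is synthetic, fusion is modeled as simple feature concatenation, and a random forest stands in for the unspecified "standard machine learning models".

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Synthetic stand-ins for the real features (sizes are arbitrary).
rng = np.random.default_rng(0)
n_samples, n_eye_feats, n_ctx_feats = 300, 8, 6

eye_features = rng.normal(size=(n_samples, n_eye_feats))      # e.g. gaze, pupil diameter
context_features = rng.normal(size=(n_samples, n_ctx_feats))  # e.g. robot/task state
y = np.tile([0, 1], n_samples // 2)                           # SA-gap labels (placeholder)
groups = np.repeat(np.arange(30), n_samples // 30)            # one group per participant

# Fusion modeled as feature concatenation (an assumption, not the paper's method).
X_fused = np.hstack([eye_features, context_features])

# Leave-one-group-out CV: every fold holds out one participant entirely,
# so scores reflect generalization to unseen operators.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X_fused, y, groups=groups,
                         cv=LeaveOneGroupOut(), scoring="recall")
print(f"mean recall across {len(scores)} held-out participants: {scores.mean():.3f}")
```

Grouping the split by participant is the key design choice: a plain k-fold split would leak each person's idiosyncratic gaze patterns between train and test sets and inflate the reported recall.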

Key Points
  • First public multimodal dataset for online SA assessment in human-robot teams, with data from 30 participants in a search-and-rescue context.
  • AI models trained on the dataset detected perceptual SA gaps with 91.5% recall by fusing eye-tracking and contextual robot data.
  • Establishes a benchmark for developing real-time AI assistance systems that intervene when human operators miss critical events.

Why It Matters

Enables safer, more effective remote operations in fields like disaster response and space exploration by allowing AI to proactively support overwhelmed human operators.