Robotics

CT-VIR: Continuous-Time Visual-Inertial-Ranging Fusion for Indoor Localization with Sparse Anchors

A new continuous-time fusion algorithm maintains accurate indoor robot localization with roughly 30% fewer ranging anchors.

Deep Dive

Researchers Yu-An Liu and Li Zhang have introduced CT-VIR, a novel continuous-time state estimation method that significantly improves indoor localization for mobile robots. The system addresses a critical limitation in visual-inertial odometry (VIO), where accuracy degrades over time without global constraints. By incorporating ultra-wideband (UWB) ranging sensors with a continuous-time B-spline parameterization, CT-VIR maintains positioning accuracy even with sparse anchor deployment—a common challenge in narrow or low-power environments where traditional dense anchor setups are impractical.

Unlike discrete-time filtering methods that struggle with asynchronous multi-sensor sampling, CT-VIR formulates inertial, visual, and ranging constraints as factors in a sliding-window graph optimization framework. During preprocessing, the system uses VIO motion priors and UWB measurements to construct virtual anchors and reject measurement outliers, improving range reliability and mitigating geometric degeneracy. The continuous-time approach enables smoother trajectory estimation while balancing accuracy, consistency, and computational efficiency.
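The key idea behind the continuous-time parameterization is that the trajectory is a smooth function of time, so a measurement taken at any timestamp can be evaluated against it. A minimal sketch (not the authors' implementation) using a uniform cubic B-spline, the standard choice in continuous-time estimators, shows how asynchronous sensor timestamps all map onto one trajectory:

```python
import numpy as np

# Basis matrix for a uniform cubic B-spline segment.
M = (1.0 / 6.0) * np.array([
    [ 1,  4,  1, 0],
    [-3,  0,  3, 0],
    [ 3, -6,  3, 0],
    [-1,  3, -3, 1],
], dtype=float)

def spline_position(ctrl, u):
    """Position at normalized time u in [0, 1) within one spline segment.

    ctrl: (4, D) array of the four control points bracketing the segment.
    """
    b = np.array([1.0, u, u * u, u ** 3]) @ M  # blending weights, sum to 1
    return b @ ctrl

# Evenly spaced 1-D control points yield a constant-velocity trajectory,
# so any sensor timestamp (visual, inertial, or UWB) gets a consistent
# interpolated position.
ctrl = np.array([[0.0], [1.0], [2.0], [3.0]])
print(spline_position(ctrl, 0.0))  # → [1.]
print(spline_position(ctrl, 0.5))  # → [1.5]
```

In a full estimator the control points themselves are the optimization variables, and each visual, inertial, or ranging factor pulls on the four control points that bracket its timestamp.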

The method has been validated on public datasets and in real-world experiments, demonstrating practical potential for applications requiring precise indoor navigation. It could enable more reliable autonomous operation in warehouses, hospitals, and other complex indoor environments where GPS is unavailable and conventional localization systems fail. The continuous-time formulation marks a notable advance in multi-sensor fusion for robotics, reducing infrastructure requirements while improving localization robustness.

Key Points
  • Uses B-spline continuous-time parameterization to fuse visual, inertial, and UWB data in sliding-window optimization
  • Creates virtual anchors from motion priors to operate with 30% fewer physical anchors than traditional methods
  • Demonstrates improved trajectory consistency and outlier rejection on public datasets and real-world experiments
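The outlier-rejection idea in the second and third points can be sketched simply: compare each UWB range against the range predicted from the VIO motion prior, and discard measurements whose residual exceeds a gate (for example, non-line-of-sight readings). This is a hypothetical illustration; the function names and threshold are assumptions, not from the paper.

```python
import math

GATE_M = 0.5  # assumed rejection threshold in meters

def range_residual(position, anchor, measured_range):
    """Residual between a measured UWB range and the range predicted
    from the VIO motion prior's position estimate."""
    predicted = math.dist(position, anchor)
    return measured_range - predicted

def accept_range(position, anchor, measured_range, gate=GATE_M):
    """Keep the measurement only if it agrees with the motion prior."""
    return abs(range_residual(position, anchor, measured_range)) < gate

anchor = (0.0, 0.0, 2.5)
robot = (3.0, 4.0, 0.5)               # position predicted by the VIO prior
true_range = math.dist(robot, anchor)  # sqrt(29) ≈ 5.39 m

print(accept_range(robot, anchor, true_range + 0.1))  # → True  (inlier)
print(accept_range(robot, anchor, true_range + 3.0))  # → False (e.g. NLOS)
```

Only the ranges that pass this gate would be added as factors to the sliding-window optimization, which is what keeps sparse-anchor geometry from being corrupted by a single bad reading.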

Why It Matters

Enables more reliable autonomous robots in GPS-denied environments like warehouses and hospitals with reduced infrastructure costs.