A Survey of Spatial Memory Representations for Efficient Robot Navigation
A new metric reveals some AI maps need 215x more RAM at runtime than their saved file size.
A team of researchers from the University of the Philippines and Stanford University has published a comprehensive survey analyzing the efficiency of spatial memory representations for vision-based robot navigation. The study, accepted at the Women in Computer Vision Workshop at CVPR 2026, examines 88 references spanning 52 systems from 1989 to 2025, covering methods from traditional occupancy grids to modern neural implicit representations like NICE-SLAM and 3D Gaussian Splatting (3DGS). The core problem addressed is that as robots navigate larger environments, their spatial memory grows unbounded, eventually exhausting the limited computational resources (typically 8-16GB shared memory) on embedded platforms where adding hardware isn't an option.
The researchers' key contribution is the α (alpha) metric, defined as the ratio of peak runtime memory (M_peak) to saved map size (M_map): α = M_peak / M_map. This metric exposes a critical, often overlooked gap: the memory required to actually run a navigation AI can be orders of magnitude larger than the size of its map file on disk. Independent profiling on an NVIDIA A100 GPU revealed that α spans two orders of magnitude among neural methods alone, ranging from 2.3 for Point-SLAM to a staggering 215 for NICE-SLAM, meaning NICE-SLAM's 47MB map requires roughly 10GB of RAM during operation.
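In code, the α metric is a one-line ratio; the sketch below computes it from the NICE-SLAM figures reported in the survey (47MB map, roughly 10GB peak RAM). The function name and units are illustrative, not from the paper.

```python
def alpha(peak_runtime_mb: float, map_file_mb: float) -> float:
    """Alpha metric: ratio of peak runtime memory to saved map size."""
    if map_file_mb <= 0:
        raise ValueError("map size must be positive")
    return peak_runtime_mb / map_file_mb

# NICE-SLAM figures from the survey: ~10 GB peak RAM for a 47 MB map.
# This yields an alpha on the order of the reported value of 215.
print(round(alpha(10_000, 47)))
```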
To guide practical implementation, the survey proposes a standardized evaluation protocol covering memory growth rate, query latency, and memory-completeness curves, metrics that current benchmarks lack. A Pareto frontier analysis shows no single paradigm dominates: 3DGS methods achieve the best accuracy on the Replica dataset with map sizes of 90-254MB, while scene graphs offer semantic abstraction at a predictable cost. Most importantly, the team provides the first independently measured α reference values and an α-aware budgeting algorithm. This tool lets robotics practitioners assess whether a given spatial memory system is feasible for their target hardware before committing to a full implementation, potentially saving significant development time and cost.
- Introduced the α (alpha) metric, revealing runtime memory can be 215x larger than saved map size (e.g., NICE-SLAM).
- Surveyed 52 systems across 88 references, providing the first independent memory benchmarks for robot navigation AI.
- Proposed a new evaluation protocol and an α-aware budgeting algorithm to assess deployment feasibility on resource-constrained hardware.
Why It Matters
Enables engineers to select robot navigation AI that will actually run on real-world, memory-constrained hardware before building it.