How Pokémon Go is giving delivery robots an inch-perfect view of the world
Pokémon Go's 30 billion crowdsourced images are now powering delivery robots that can't get lost.
Niantic Spatial, a company spun out from Niantic, the AR gaming giant behind Pokémon Go, is using an unprecedented dataset to solve a major robotics challenge. The firm has trained a visual positioning system on 30 billion images crowdsourced from hundreds of millions of Pokémon Go and Ingress players. The model can determine a user's location to within a few centimeters by analyzing just a handful of photos of surrounding landmarks, creating a hyper-accurate 'world model' grounded in reality.
In its first major commercial application, Niantic Spatial has partnered with Coco Robotics, which operates about 1,000 sidewalk delivery robots in cities like Los Angeles and Chicago. These robots struggle with GPS signals that can drift up to 50 meters in dense 'urban canyons' with high-rises and underpasses. By using Niantic's visual positioning, Coco's robots can navigate with the precision needed for reliable, on-time deliveries, effectively repurposing data collected for catching Pikachu to guide pizza deliveries.
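The core idea behind landmark-based positioning can be sketched in miniature. A real visual positioning system matches photo features against a 3-D map and solves for the full camera pose; the toy version below (not Niantic's method, all names and values illustrative) reduces this to 2-D trilateration: given the known map positions of a few recognized landmarks and the robot's estimated distance to each, solve for the robot's own position.

```python
# Minimal sketch of landmark-based localization, the principle a visual
# positioning system relies on. Assumes we already know each landmark's
# mapped 2-D coordinates and the robot's distance to it; a real VPS
# derives such constraints from image feature matches in 3-D.

def trilaterate(landmarks, distances):
    """Solve for (x, y) from >= 3 landmark positions and distances.

    Each landmark gives a circle (x - xi)^2 + (y - yi)^2 = di^2.
    Subtracting the first circle's equation from the others cancels the
    quadratic terms, leaving a linear system solved by least squares.
    """
    (x0, y0), d0 = landmarks[0], distances[0]
    rows, rhs = [], []
    for (xi, yi), di in zip(landmarks[1:], distances[1:]):
        rows.append((2 * (xi - x0), 2 * (yi - y0)))
        rhs.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # Normal equations for the 2-D least-squares solution.
    a11 = sum(r[0] * r[0] for r in rows)
    a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * v for r, v in zip(rows, rhs))
    b2 = sum(r[1] * v for r, v in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# A robot recognizes three mapped landmarks (say, a storefront, a
# hydrant, and a street sign) and estimates its distance to each.
landmarks = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0)]
true_pos = (3.0, 4.0)
distances = [((x - true_pos[0])**2 + (y - true_pos[1])**2) ** 0.5
             for x, y in landmarks]
print(trilaterate(landmarks, distances))  # ≈ (3.0, 4.0)
```

Unlike GPS, nothing here degrades in an urban canyon: as long as the landmarks are visible and well mapped, the fix stays centimeter-tight, which is why a photo-derived world model can anchor a sidewalk robot where satellite signals drift.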
- Niantic Spatial's model is trained on 30 billion images from Pokémon Go and Ingress players.
- The visual positioning system can locate a user within centimeters using a few landmark photos.
- Coco Robotics is deploying the tech across its 1,000 delivery bots to solve GPS failures in cities.
Why It Matters
This turns a massive entertainment dataset into critical infrastructure, enabling reliable autonomous delivery in dense urban areas.