Niantic's Ambitious AI Project: Building a Geospatial Navigation System with Player Data
Nov 20
Niantic, the company behind the global sensation Pokémon Go, has unveiled plans for an ambitious project that leverages the data collected from its games and applications to create a groundbreaking AI model. This model, termed a "Large Geospatial Model" (LGM), aims to revolutionize how computers and robots interact with the physical world by offering spatial intelligence derived from millions of user-uploaded scans.
The foundation of this AI system lies in Niantic's Visual Positioning System (VPS), a technology the company has been developing for over five years. VPS uses a single image captured by a phone to determine the device's position and orientation within a 3D map. These maps are built from user contributions in games like Pokémon Go and in Niantic's Scaniverse app. The data is particularly distinctive because it is collected from a pedestrian perspective, often capturing areas inaccessible to cars or traditional street-view cameras.
Niantic's Chief Scientist, Victor Prisacariu, explained the depth of this data in a 2022 Q&A, stating that user-generated scans from games like Pokémon Go and Ingress were instrumental in creating high-fidelity 3D maps. These maps include not only 3D geometry, defining the shapes of objects, but also semantic understanding that identifies what the elements in the map represent, such as trees, buildings, or the sky.
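Conceptually, each point in such a map pairs geometry with a semantic label. A minimal sketch of that idea in Python (the names and structure here are illustrative assumptions, not Niantic's actual data format):

```python
from dataclasses import dataclass

# Hypothetical structure for one point in a semantic 3D map:
# (x, y, z) gives the geometry; label says what the point belongs to.
@dataclass
class MapPoint:
    x: float
    y: float
    z: float
    label: str  # e.g. "tree", "building", "sky"

# A toy map with a few labeled points.
toy_map = [
    MapPoint(0.0, 0.0, 5.2, "building"),
    MapPoint(1.5, 0.2, 2.1, "tree"),
]

# With labels attached, semantic queries become simple filters over the point set.
buildings = [p for p in toy_map if p.label == "building"]
print(len(buildings))  # 1
```

The point is only that semantic understanding turns a raw point cloud into something queryable by meaning, not just by coordinates.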
The scale of this data collection is immense. Niantic has amassed scans from over 10 million locations worldwide, with approximately one million new scans uploaded weekly. These scans have trained over 50 million neural networks, each representing a specific location or viewing angle. Together, these networks compress thousands of mapping images into compact digital representations of physical spaces and comprise more than 150 trillion parameters in total. This vast dataset allows the model to recognize and interpret locations from unfamiliar angles.
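Taken at face value, those figures imply an average model size that is easy to check with a quick back-of-the-envelope calculation (actual per-network sizes surely vary; this is only a mean):

```python
# Back-of-the-envelope check of the published figures:
# 150 trillion parameters spread across 50 million location networks.
total_parameters = 150e12
network_count = 50e6

avg_params_per_network = total_parameters / network_count
print(f"{avg_params_per_network:,.0f}")  # 3,000,000 parameters per network on average
```

Roughly three million parameters per location-specific network, i.e. each one is a small model by modern standards, and the scale comes from their sheer number.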
For example, Niantic describes a scenario where a user might stand behind a church. Even if the local model has only been trained on images of the church's front entrance, the global LGM—drawing on distributed knowledge from similar locations worldwide—could identify the user's position based on shared characteristics of churches globally. This collective intelligence represents a monumental leap in AI's capability to process and understand physical spaces.
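The fallback described in the church example can be pictured as a two-tier lookup: consult a location-specific model first, and defer to the global model when it has no coverage. A rough sketch, with invented names and strings standing in for real models and poses:

```python
# Two-tier localization sketch. Entirely illustrative: real VPS models
# map query images to 3D poses, not strings to strings.

# Location-specific models only cover the views they were trained on.
local_models = {
    "church_front": "pose from local model trained on front-entrance scans",
}

def global_model(view: str) -> str:
    # Stands in for the LGM, which generalizes from similar places worldwide.
    return f"pose inferred from shared knowledge of similar scenes ({view})"

def localize(view: str) -> str:
    # Prefer the local model; fall back to the global one for unseen views.
    if view in local_models:
        return local_models[view]
    return global_model(view)

print(localize("church_front"))  # handled by the local model
print(localize("church_back"))   # falls back to the global model
```

The design intuition is that the global model fills the gaps between many narrow local models, much as a person who has seen many churches can orient themselves behind an unfamiliar one.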
The LGM builds upon Niantic's existing Lightship Visual Positioning System, which powers features like Pokémon Go's Pokémon Playgrounds. This feature allows players to leave virtual Pokémon at specific real-world locations for others to discover. Beyond gaming, Niantic envisions its technology supporting various applications, including augmented reality (AR) products, robotics, autonomous systems, spatial planning, logistics, and remote collaboration.
However, the implications of this data usage are generating mixed reactions. While the data collection is covered under Pokémon Go's terms of service, some players were unaware that their scans would eventually fuel such advanced AI models. As highlighted by 404 Media, few players in 2016 could have anticipated this level of data utilization. A Reddit thread reacting to the news reveals a mix of opinions, with some users expressing surprise and others acknowledging they were not entirely unwitting participants. One commenter noted that players generally understood Niantic's business model wasn't solely focused on enhancing gameplay.
Despite potential concerns, Niantic's vision highlights the transformative possibilities of combining cutting-edge AI with user-generated data. The company's ability to capture nuanced, pedestrian-level perspectives offers an unprecedented view of the world, positioning its LGM as a potential game-changer for AI-driven navigation, robotics, and AR technologies.
As Niantic continues to develop this innovative model, the broader societal and ethical implications of such data usage remain a developing story.