
Spatial mapping

[[File:spatial mapping2.jpg|300px|right]]
'''Spatial mapping''', also known as '''3D reconstruction''' in some contexts, is a core technology that enables a device to create a three-dimensional (3D) digital model of its physical environment in real time.<ref name="StereolabsDocsS2">{{cite web |url=https://www.stereolabs.com/docs/spatial-mapping |title=Spatial Mapping Overview |publisher=Stereolabs |access-date=2025-10-23}}</ref><ref name="ZaubarLexicon">{{cite web |url=https://about.zaubar.com/en/xr-ai-lexicon/spatial-mapping |title=Spatial Mapping |publisher=Zaubar |access-date=2025-10-23}}</ref> It is a fundamental component of [[augmented reality]] (AR), [[virtual reality]] (VR), [[mixed reality]] (MR), and [[robotics]], allowing systems to perceive, understand, and interact with the physical world.<ref name="StereolabsDocsS1">{{cite web |url=https://www.stereolabs.com/docs/spatial-mapping |title=Spatial Mapping Overview |publisher=Stereolabs |access-date=2025-10-23}}</ref><ref name="EducativeIO">{{cite web |url=https://www.educative.io/answers/spatial-mapping-and-3d-reconstruction-in-augmented-reality |title=Spatial mapping and 3D reconstruction in augmented reality |publisher=Educative |access-date=2023}}</ref> By creating a detailed digital map of surfaces, objects, and their spatial relationships, spatial mapping serves as the technological bridge between the digital and physical realms, allowing for the realistic blending of virtual and real worlds.<ref name="StereolabsDocsS1"/>

The process is dynamic and continuous; a device equipped for spatial mapping constantly scans its surroundings with a suite of sensors, building and refining its 3D map over time by incorporating new depth and positional data as it moves through an environment.<ref name="StereolabsDocsS2"/><ref name="UnityDocs">{{cite web |url=https://docs.unity3d.com/2019.1/Documentation/Manual/SpatialMapping.html |title=Spatial Mapping concepts |publisher=Unity |access-date=2025-10-23}}</ref> This capability is foundational to the field of [[extended reality]] (XR), enabling applications to place digital content accurately, facilitate realistic physical interactions like [[occlusion]] and collision, and provide environmental context for immersive experiences.<ref name="ZaubarLexicon"/><ref name="EducativeIO"/>
* '''[[3D reconstruction]]''': This term often refers more broadly to the process of creating a detailed and accurate static 3D model of an object or scene from multiple images or scans. While spatial mapping is a form of 3D reconstruction, its use in the XR context emphasizes the '''real-time''' and '''continuous''' nature of the process for immediate environmental interaction, rather than offline model creation.<ref name="EducativeIO"/>

* '''[[Spatial computing]]''': Spatial mapping is a core component of the broader concept of [[spatial computing]].<ref name="MeegleSpatialComp">{{cite web |url=https://www.meegle.com/en_us/topics/spatial-computing/spatial-computing-for-immersive-experiences |title=Spatial Computing for Immersive Experiences |publisher=Meegle |access-date=2025-10-23}}</ref><ref name="ArborXR"/> Spatial computing describes the entire technological framework that allows humans and machines to interact with digital information that is seamlessly integrated into a three-dimensional space.<ref name="Ocavu"/><ref name="WashingtonPostVisionPro">{{cite web |url=https://www.washingtonpost.com/technology/2025/10/22/galaxy-xr-vision-pro-m5-hands-on/ |title=Apple Vision Pro vs. Samsung Galaxy XR: A new mixed-reality showdown |publisher=The Washington Post |access-date=2025-10-23}}</ref> In this framework, spatial mapping provides the essential environmental "understanding" (the digital ground truth) that the system requires to function.<ref name="MeegleSpatialComp"/>

The evolution of terminology from "3D reconstruction" to "spatial mapping" and its inclusion under the umbrella of "spatial computing" reflects a significant philosophical shift in the industry. It signifies a move away from the goal of simply ''capturing'' a digital copy of the world (reconstruction) and toward the more ambitious goal of ''understanding and interacting'' with it in real time (mapping and computing). This distinction is what separates a passive 3D scan from an active, intelligent mixed reality experience where the digital and physical worlds are deeply intertwined.<ref name="ArborXR"/><ref name="Ocavu"/>
This was followed by mobile AR frameworks: Apple's [[ARKit]], released in June 2017, brought visual-inertial odometry (VIO) to iOS devices, revolutionizing mobile AR by solving monocular visual-inertial tracking without requiring depth sensors and instantly enabling roughly 380 million existing devices.<ref name="AndreasJakl">{{cite web |url=https://www.andreasjakl.com/basics-of-ar-slam-simultaneous-localization-and-mapping/ |title=Basics of AR: SLAM – Simultaneous Localization and Mapping |publisher=Andreas Jakl |date=2018-08-14 |access-date=2025-10-27}}</ref> Google's [[ARCore]], launched the same year, brought SLAM to Android, using similar depth-from-motion algorithms that compare images taken from different angles and combine them with IMU measurements to generate depth maps on standard hardware.<ref name="AndreasJakl"/> Meta's Oculus Quest (2019) incorporated inside-out tracking with SLAM for standalone VR/AR, eliminating external sensors.<ref name="MetaAnchorsDev">{{cite web |url=https://developers.meta.com/horizon/documentation/unity/unity-spatial-anchors-overview/ |title=Spatial Anchors Overview |publisher=Meta for Developers |date=2024-05-15 |access-date=2025-10-27}}</ref>


The introduction of LiDAR to consumer devices began with the iPad Pro in March 2020 and the iPhone 12 Pro in October 2020, using vertical-cavity surface-emitting laser (VCSEL) emitters with direct time-of-flight measurement. This enabled ARKit 3.5's Scene Geometry API, which provides instant AR placement and classifies the reconstructed triangle mesh into semantic categories.<ref name="AppleDeveloper">{{cite web |url=https://developer.apple.com/documentation/arkit/arkit_scene_reconstruction |title=ARKit Scene Reconstruction |publisher=Apple Developer Documentation |date=2020 |access-date=2025-10-27}}</ref> Subsequent refinements include HoloLens 2's Scene Understanding SDK (2019), which builds on spatial mapping for semantic environmental analysis.<ref name="MicrosoftDoc"/> Advances in LiDAR and AI-driven feature detection have further democratized high-fidelity mapping.<ref name="AndreasJakl"/>
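As background, direct time-of-flight sensing of the kind described above derives each distance from the round-trip travel time of an emitted light pulse:

:<math>d = \frac{c \, \Delta t}{2}</math>

where <math>c</math> is the speed of light and <math>\Delta t</math> is the measured round-trip time; a round trip of 20 nanoseconds, for example, corresponds to a distance of roughly 3 meters. (The specific numbers are illustrative rather than taken from Apple's documentation.)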


Microsoft launched HoloLens 2 in 2019 with an improved depth sensor shared with Azure Kinect, and Meta Quest 3 arrived in 2023 with full-color passthrough, depth sensing via an infrared patterned-light projector, and a more sophisticated Scene API with semantic labeling. Apple Vision Pro launched in 2024, representing the state of the art in spatial computing with advanced eye tracking and hand tracking. Today, spatial mapping is integral to spatial computing, with ongoing research into collaborative SLAM for multi-user experiences.<ref name="WikipediaSLAM"/>
| '''[[Mapping Range]]''' || Controls the maximum distance from the sensor at which depth data is incorporated into the map. || 2 m – 20 m <ref name="StereolabsDocsS2"/> || High (longer range means more data to process and higher resource usage) || Moderate (longer range can map large areas faster but may reduce accuracy at the farthest points)
|-
| '''[[Mesh Filtering]]''' || Post-processing to reduce polygon count (decimation) and clean mesh artifacts (for example, by filling holes). || Presets (for example, Low, Medium, High) <ref name="StereolabsDocsS2"/> || Low (reduces polygon count, leading to significant performance improvement in rendering) || Moderate (aggressive filtering can lead to loss of fine geometric detail)
|-
| '''[[Mesh Texturing]]''' || The process of applying camera images to the mesh surface to create a photorealistic model. || On / Off <ref name="StereolabsDocsS2"/> || High (requires storing and processing images, creating a texture map, and using more complex shaders for rendering) || High (dramatically increases visual realism)
=== Essential Sensor Technologies ===

Modern XR devices rely on [[sensor fusion]], the process of combining data from multiple sensors to achieve a result that is more accurate and robust than could be achieved by any single sensor alone.<ref name="SLAMSystems">{{cite web |url=https://www.sbg-systems.com/glossary/slam-simultaneous-localization-and-mapping/ |title=SLAM - Simultaneous localization and mapping |publisher=SBG Systems |access-date=2025-10-23}}</ref><ref name="MilvusSensors">{{cite web |url=https://milvus.io/ai-quick-reference/what-sensors-eg-accelerometer-gyroscope-are-essential-in-ar-devices |title=What sensors (e.g., accelerometer, gyroscope) are essential in AR devices? |publisher=Milvus |access-date=2025-10-23}}</ref> The essential sensor suite includes:


==== Depth Cameras ====
* '''[[Visual SLAM]] (vSLAM)''': Uses one or more cameras to track visual features.<ref name="MathWorksSLAM"/>
* '''[[LiDAR SLAM]]''': Uses a LiDAR sensor to build a precise geometric map.<ref name="MathWorksSLAM"/>
* '''[[Multi-Sensor SLAM]]''': Fuses data from various sources (such as cameras, IMUs, and LiDAR) for enhanced robustness and accuracy.<ref name="MathWorksSLAM"/>


Spatial mapping is typically accomplished via SLAM algorithms, which build a map of the environment in real time while tracking the device's position within it.<ref name="Adeia">{{cite web |url=https://adeia.com/blog/spatial-mapping-empowering-the-future-of-ar |title=Spatial Mapping: Empowering the Future of AR |publisher=Adeia |date=2022-03-02 |access-date=2025-10-27}}</ref>
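The division of labor in such a system can be illustrated with a deliberately simplified sketch: the device's pose is propagated from odometry while incoming range measurements are transformed into the world frame and accumulated into a point map. This is only a toy illustration of the map-building side of SLAM (a real system would also correct the pose against the map and close loops), and the odometry increments, scan points, and helper names below are hypothetical.

<syntaxhighlight lang="python">
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2D transform for a planar pose (x, y, heading)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# Hypothetical inputs: per-step odometry increments (dx, dy, dtheta) and a few
# range points observed in the sensor frame at each step.
odometry = [(0.5, 0.0, 0.0), (0.5, 0.0, np.pi / 2), (0.5, 0.0, 0.0)]
scans = [np.array([[1.0, 0.2], [1.0, -0.2]]),
         np.array([[0.8, 0.0], [0.9, 0.1]]),
         np.array([[1.2, 0.0], [1.1, -0.1]])]

pose = se2(0.0, 0.0, 0.0)   # device starts at the world origin
world_map = []              # accumulated map points in world coordinates

for (dx, dy, dth), scan in zip(odometry, scans):
    pose = pose @ se2(dx, dy, dth)             # localization: propagate the pose
    pts = np.c_[scan, np.ones(len(scan))]      # sensor-frame points, homogeneous
    world_map.extend((pose @ pts.T).T[:, :2])  # mapping: accumulate in world frame

print(f"final pose estimate:\n{pose}\nmap points: {len(world_map)}")
</syntaxhighlight>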
This is the primary domain where spatial mapping enables the defining features of the experience:


* '''Object Placement and Interaction''': Allows virtual objects to be placed on real surfaces, such as a virtual chessboard on a physical table or a digital painting on a wall.<ref name="EducativeIO"/><ref name="MetaHelp"/> Constraining holograms or other virtual items to lie on real surfaces makes interactions more intuitive; for example, a digital 3D model can sit on top of a physical desk without "floating" in mid-air. This helps maintain correct scale and position, and reduces the user's effort in positioning objects in 3D space.<ref name="MicrosoftDoc"/>

* '''Occlusion''': Creates a sense of depth and realism by allowing real-world objects to block the view of virtual objects. For example, a virtual character can realistically walk behind a physical couch.<ref name="EducativeIO"/><ref name="MetaHelp"/> A mapped 3D mesh of the environment lets the renderer determine when parts of a virtual object should be hidden because a real object is in front of them from the user's viewpoint. Proper occlusion cues greatly increase realism, as virtual characters or objects can appear to move behind real walls or furniture and out of sight, or emerge from behind real obstacles.<ref name="MicrosoftDoc"/>
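In rendering terms, occlusion usually comes down to a per-pixel depth comparison: a virtual fragment is drawn only where it is closer to the viewer than the reconstructed real-world surface along the same ray. The following is a minimal sketch of that test with made-up depth values; it is not taken from any particular engine.

<syntaxhighlight lang="python">
import numpy as np

def composite_with_occlusion(virtual_rgb, virtual_depth, real_depth, camera_rgb):
    """Show each virtual pixel only where the virtual surface is nearer than
    the mapped real-world surface (smaller depth = closer to the viewer)."""
    visible = virtual_depth < real_depth                  # per-pixel occlusion test
    return np.where(visible[..., None], virtual_rgb, camera_rgb)

# Hypothetical 2x2 example: a real couch at 1.5 m occludes a virtual character
# at 2.0 m in the left column; the right column (real wall at 3.0 m) does not.
real_depth    = np.array([[1.5, 3.0], [1.5, 3.0]])
virtual_depth = np.array([[2.0, 2.0], [2.0, 2.0]])
virtual_rgb   = np.full((2, 2, 3), 255)                   # white virtual object
camera_rgb    = np.zeros((2, 2, 3), dtype=int)            # black passthrough image

print(composite_with_occlusion(virtual_rgb, virtual_depth, real_depth, camera_rgb))
</syntaxhighlight>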
* '''Training and Education''': It enables the creation of highly realistic simulations for training on complex or dangerous tasks, such as medical procedures, equipment maintenance, or emergency response, without risk to personnel or equipment.<ref name="EducativeIO"/><ref name="ArborXR"/> Case Western Reserve University reported that 85% of medical students rated MR anatomy training as "equivalent" or "better" than in-person classes, while Ford Motor Company achieved a 50–70% reduction in training time, from six months to several weeks, using VR and AR technologies.<ref name="ford">{{cite web |url=https://www.ford.com/technology/virtual-reality/ |title=Ford Virtual Reality Training |publisher=Ford Motor Company |access-date=2025-10-27}}</ref>


* '''Manufacturing and Assembly''': Boeing implemented AR-guided assembly using Microsoft HoloLens 2 and Google Glass Enterprise across 15 global facilities for wire harness assembly, achieving 88% first-pass accuracy, a 33% increase in wiring speed, a 25% reduction in production time, and near-zero error rates, saving millions of dollars per jet through reduced rework.<ref name="boeing">{{cite web |url=https://www.boeing.com/features/innovation-quarterly/aug2018/feature-technical-augmented-reality.page |title=Boeing Augmented Reality |publisher=Boeing |access-date=2025-10-27}}</ref>


* '''Logistics and Supply Chain''': Companies use spatial mapping to optimize warehouse layouts, plan efficient delivery routes by analyzing traffic and terrain, and track assets in real time within a facility.<ref name="eSpatial">{{cite web |url=https://www.espatial.com/blog/spatial-mapping |title=What Is Spatial Mapping and How Is It Used? |publisher=eSpatial |access-date=2025-10-23}}</ref><ref name="Milsoft">{{cite web |url=https://www.milsoft.com/newsroom/gis-applications-different-industries/ |title=GIS Applications in Different Industries |publisher=Milsoft |access-date=2025-10-23}}</ref>
The principles of spatial mapping extend to a planetary scale through [[geospatial mapping]]. Instead of headset sensors, this field uses data from satellites, aircraft, drones, and ground-based sensors to create comprehensive 3D maps of the Earth.<ref name="Matrack">{{cite web |url=https://matrackinc.com/geospatial-mapping/ |title=What is Geospatial Mapping and How does it Work? |publisher=Matrack Inc. |access-date=2025-10-23}}</ref><ref name="Spyrosoft">{{cite web |url=https://spyro-soft.com/blog/geospatial/what-is-geospatial-mapping-and-how-does-it-work |title=What is Geospatial Mapping and How Does It Work? |publisher=Spyrosoft |access-date=2025-10-23}}</ref>


* This large-scale mapping is critical for urban planning, precision agriculture, environmental monitoring (for example, tracking deforestation or glacial retreat), and disaster management.<ref name="Matrack"/><ref name="Faro">{{cite web |url=https://www.faro.com/en/Resource-Library/Article/Past-Present-and-Future-of-Geospatial-Mapping |title=The Past, Present and Future of Geospatial Mapping |publisher=FARO |access-date=2025-10-23}}</ref><ref name="SurveyTransfer">{{cite web |url=https://surveytransfer.net/geospatial-applications/ |title=10 Key Industries Using Geospatial Applications |publisher=SurveyTransfer |access-date=2025-10-23}}</ref>
* Projects like Google DeepMind's AlphaEarth Foundations fuse vast quantities of satellite imagery, radar, and 3D laser mapping data into a unified digital representation of the planet, allowing scientists to track global changes with remarkable precision.<ref name="AlphaEarth">{{cite web |url=https://deepmind.google/discover/blog/alphaearth-foundations-helps-map-our-planet-in-unprecedented-detail/ |title=AlphaEarth Foundations helps map our planet in unprecedented detail |publisher=Google DeepMind |access-date=2025-10-23}}</ref>
* ''Pokémon Go'' achieved unprecedented scale, with more than 800 million downloads and over 600 million active users, using a Visual Positioning System (VPS) with centimeter-level accuracy. Niantic has built a Large Geospatial Model of over 50 million neural networks, comprising more than 150 trillion parameters trained on location data, for planet-scale 3D mapping from a pedestrian perspective.<ref name="niantic">{{cite web |url=https://nianticlabs.com/news/largegeospatialmodel |title=Large Geospatial Model |publisher=Niantic Labs |access-date=2025-10-27}}</ref>


The application of spatial mapping across these different orders of magnitude, from centimeters for object scanning, to meters for room-scale XR, to kilometers for urban planning, reveals a unified theoretical foundation.<ref name="AlphaEarth"/><ref name="Matrack"/>


== Implementation in Major XR Platforms ==
Academic research comparing HoloLens meshes to terrestrial laser scanner ground truth shows centimeter-level accuracy, with studies reporting a mean Hausdorff distance of a few centimeters and a root-mean-square distance of 5.42 cm between mesh vertices and the corresponding model planes. However, research also indicates a systematic 3–4% overestimation of actual distances, with larger deviations near ceilings, in transition spaces between rooms, and on weakly textured surfaces.<ref name="springer">{{cite web |url=https://link.springer.com/article/10.1007/s41064-021-00165-8 |title=Evaluation of Microsoft HoloLens 2 as a Tool for Indoor Spatial Mapping |publisher=Springer |access-date=2025-10-27}}</ref>
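For reference, the error metrics quoted in such evaluations compare each vertex of the reconstructed mesh against the ground-truth scan and report either the worst case (Hausdorff distance) or the root mean square of the residuals. A small self-contained sketch of a simplified point-to-point variant follows (the cited study uses point-to-plane distances); the point sets are made up.

<syntaxhighlight lang="python">
import numpy as np

def directed_hausdorff(a, b):
    """Worst-case distance from any point of a to its nearest point in b."""
    dists = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return dists.min(axis=1).max()

def rms_nearest(a, b):
    """Root-mean-square distance from points of a to their nearest points in b."""
    dists = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return np.sqrt((dists.min(axis=1) ** 2).mean())

# Hypothetical data: reconstructed mesh vertices vs. ground-truth scan points (meters).
mesh_vertices = np.array([[0.00, 0.0, 0.0], [1.02, 0.0, 0.0], [2.05, 0.0, 0.0]])
ground_truth  = np.array([[0.00, 0.0, 0.0], [1.00, 0.0, 0.0], [2.00, 0.0, 0.0]])

print(directed_hausdorff(mesh_vertices, ground_truth))  # 0.05 m worst-case error
print(rms_nearest(mesh_vertices, ground_truth))         # ~0.031 m RMS error
</syntaxhighlight>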


The Spatial Mapping API provides low-level access through key types including SurfaceObserver, SurfaceChange, SurfaceData, and SurfaceId. Developers specify regions of space as spheres, axis-aligned boxes, oriented boxes, or frustums. The system generates meshes stored in a voxel grid of 8 cm cubes, with a configurable triangles-per-cubic-meter setting controlling the level of detail; 2,000 triangles per cubic meter is recommended for balanced performance.<ref name="MSDocsUnity"/>
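As a rough, back-of-the-envelope illustration of what those two settings imply (the room dimensions below are hypothetical, and in practice meshes cover only observed surfaces, so real counts are lower):

<syntaxhighlight lang="python">
# Illustrative budget for a 4 m x 3 m x 2.5 m room, using the 8 cm voxel size
# and the recommended 2000 triangles per cubic meter quoted above.
room_volume_m3   = 4.0 * 3.0 * 2.5      # 30 cubic meters of scanned space
voxel_edge_m     = 0.08                 # 8 cm voxel grid
triangles_per_m3 = 2000                 # recommended detail setting

voxels    = room_volume_m3 / voxel_edge_m ** 3
triangles = room_volume_m3 * triangles_per_m3

print(f"~{voxels:,.0f} voxels, ~{triangles:,.0f} triangles requested")
# -> roughly 58,594 voxels and 60,000 triangles for the whole room volume
</syntaxhighlight>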


To provide developers with higher-level environmental context, Microsoft offers the [[Mixed Reality Toolkit]] (MRTK). The MRTK includes a '''Spatial Understanding''' module (and its successor, '''Scene Understanding''') that analyzes the raw geometric mesh to identify and label key architectural elements like floors, walls, and ceilings. It can also identify suitable locations for placing holographic content based on constraints, such as finding an empty space on a wall or a flat surface on a desk.<ref name="MSDocsUnity"/> The HoloLens mapping system is optimized for indoor environments and performs best in well-lit spaces, as it can struggle with dark, highly reflective, or transparent surfaces.<ref name="HoloLensSpaces"/>
* '''Problematic Surfaces''': Onboard sensors often struggle with certain types of materials. Transparent surfaces like glass, highly reflective surfaces like mirrors, and textureless or dark, light-absorbing surfaces can fail to return usable data to depth sensors, resulting in gaps or inaccuracies in the map.<ref name="UnityDocs"/><ref name="HoloLensSpaces"/><ref name="MagicLeapMappingDocs">{{cite web |url=https://developer-docs.magicleap.cloud/docs/guides/features/spatial-mapping/ |title=Real-time World Sensing |publisher=Magic Leap |access-date=2025-10-23}}</ref>


* '''[[Drift]]''': Tracking systems that rely on [[odometry]] (estimating motion from sensor data) are susceptible to small, accumulating errors over time. This phenomenon, known as '''drift''', can cause the digital map to become misaligned with the real world. While algorithms use techniques like [[loop closure]] to correct for drift, it can still be a significant problem in large, feature-poor environments (like a long, white hallway).<ref name="MilvusSLAM"/><ref name="SLAMSystems"/> A toy illustration of drift accumulation and loop-closure correction is sketched after this list.


* '''Scale and Boundaries''': The way spatial data is aggregated and defined can influence analytical results, a concept known in geography as the [[Modifiable Areal Unit Problem]] (MAUP). This problem highlights that statistical outcomes can change based on the shape and scale of the zones used for analysis, which has parallels in how room-scale maps are chunked and interpreted.<ref name="MAUP1">{{cite web |url=https://pmc.ncbi.nlm.nih.gov/articles/PMC7254930/ |title=The modifiable areal unit problem in ecological community data |publisher=PLOS ONE |access-date=2025-10-23}}</ref><ref name="MAUP2">{{cite web |url=https://zenn-wong.medium.com/the-challenges-of-using-maps-in-policy-making-510e3fcb8eb3 |title=The Challenges of Using Maps in Policy-Making |publisher=Medium |access-date=2025-10-23}}</ref>
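The effect of drift mentioned above, and the kind of correction a loop closure provides, can be shown with a one-dimensional toy example: odometry that slightly over-estimates each step accumulates error, and recognizing a previously visited place lets the system spread the accumulated error back over the trajectory. This is a deliberately simplified sketch with made-up numbers, not an actual SLAM back-end.

<syntaxhighlight lang="python">
# Toy 1-D drift example: the device walks 10 steps of exactly 1 m each,
# but its odometry over-estimates every step by 2%.
true_positions = [float(i) for i in range(11)]
estimated = [0.0]
for _ in range(10):
    estimated.append(estimated[-1] + 1.0 * 1.02)   # error accumulates step by step

drift = estimated[-1] - true_positions[-1]
print(f"accumulated drift after 10 m: {drift:.2f} m")

# Loop closure: the device recognizes it is back at a known landmark (the true
# 10 m mark), so the residual is distributed back over the whole trajectory.
correction_per_step = drift / 10
corrected = [p - correction_per_step * i for i, p in enumerate(estimated)]
print(f"end-point error after correction: {corrected[-1] - 10.0:.2f} m")
</syntaxhighlight>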
By its very nature, spatial mapping captures a detailed 3D blueprint of a user's private spaces, including room layouts, furniture, and potentially sensitive personal items.<ref name="MeegleSpatialComp"/><ref name="MilvusSensors"/> This raises significant privacy and security concerns about how this intimate data is stored, processed, and potentially shared with third-party applications or cloud services.<ref name="MeegleSpatialComp"/>


In response, platform providers like Meta and Magic Leap have implemented explicit permission systems, requiring users to grant individual applications access to their spatial data before it can be used.<ref name="MetaHelp"/><ref name="MagicLeapLegal"/> Device makers emphasize that spatial maps are generally processed locally on the device (often not transmitted off-device) and that they do not encode high-resolution color imagery of the scene, only abstract geometry. Spatial maps typically do not include identifiable details such as text on documents or people's faces; moving objects and people are usually omitted during scanning.<ref name="MagicLeapLegal"/>


These technical challenges highlight an important reality for XR development: the digital map will always be an imperfect approximation of the physical world. The pursuit of a flawless, error-free spatial map is asymptotic. Therefore, successful XR application design depends not only on improvements in mapping technology but also on developing software that is resilient to these imperfections.
=== Semantic Spatial Understanding ===


The next major frontier for spatial mapping is the shift from purely geometric understanding (knowing ''where'' a surface is) to '''[[semantic understanding]]''' (knowing ''what'' a surface is).<ref name="SpatialAI"/><ref name="FutureDirections1">{{cite web |url=https://arxiv.org/html/2508.20477v1 |title=What is Spatial Computing? A Survey on the Foundations and State-of-the-Art |publisher=arXiv |access-date=2025-10-23}}</ref> This involves leveraging [[AI]] and [[machine learning]] algorithms to analyze the map data and automatically identify, classify, and label objects and architectural elements in real time (for example, recognizing a surface as a "couch," an opening as a "door," or an object as a "chair").<ref name="MetaHelp"/><ref name="SpatialAI"/>

This capability, already emerging in platforms like Meta Quest's Scene API, will enable a new generation of intelligent and context-aware XR experiences. Virtual characters could realistically interact with the environment (for example, sitting on a recognized couch), applications could automatically adapt their UI to the user's specific room layout, and digital assistants could understand commands that refer to physical objects ("place the virtual screen on that wall").<ref name="FutureDirections1"/>
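A very reduced illustration of the idea: given planar surfaces extracted from the spatial map, each with a normal vector and a height, simple rules can already assign coarse labels such as floor, ceiling, wall, or table. Production systems use learned classifiers over much richer features; the thresholds and example planes below are hypothetical.

<syntaxhighlight lang="python">
import numpy as np

def label_plane(normal, height_m):
    """Assign a coarse semantic label to a planar surface from the spatial map."""
    normal = np.asarray(normal, dtype=float)
    up_alignment = abs(normal[2]) / np.linalg.norm(normal)   # 1.0 = horizontal surface
    if up_alignment > 0.9:                                   # roughly horizontal
        if height_m < 0.15:
            return "floor"
        if height_m > 2.2:
            return "ceiling"
        return "table"
    if up_alignment < 0.2:                                   # roughly vertical
        return "wall"
    return "other"

# Hypothetical planes: (surface normal, height of the plane above the floor in meters).
planes = [((0, 0, 1), 0.0), ((0, 0, 1), 0.74), ((1, 0, 0), 1.3), ((0, 0, -1), 2.5)]
for normal, height in planes:
    print(label_plane(normal, height))   # floor, table, wall, ceiling
</syntaxhighlight>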


=== Neural Rendering and AI-Powered Mapping ===


[[Neural Radiance Fields]] (NeRF), introduced by UC Berkeley researchers in March 2020, revolutionized 3D scene representation by encoding a scene as a continuous volumetric function in a neural network, from which photorealistic novel views can be rendered. Key variants address its limitations: Instant-NGP (2022) reduces training from hours to seconds through multi-resolution hash encoding, while Mip-NeRF (2021) adds anti-aliasing for better rendering at multiple scales.<ref name="nerf">{{cite web |url=https://www.matthewtancik.com/nerf |title=NeRF: Neural Radiance Fields |publisher=UC Berkeley |access-date=2025-10-27}}</ref>
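The underlying idea can be summarized by the volume rendering integral used in the original formulation: the color of a camera ray is the accumulation of emitted color along the ray, weighted by volume density and by how much of the ray has already been absorbed,

:<math>C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t), \mathbf{d})\,\mathrm{d}t, \qquad T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,\mathrm{d}s\right),</math>

where <math>\sigma</math> is the density and <math>\mathbf{c}</math> the view-dependent color predicted by the network along the ray <math>\mathbf{r}(t) = \mathbf{o} + t\mathbf{d}</math>, and <math>T(t)</math> is the accumulated transmittance. In practice the integral is approximated by sampling points along each ray.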


3D Gaussian Splatting emerged in August 2023 as a breakthrough achieving real-time performance of 30+ fps for 1080p rendering, roughly 100 to 1,000 times faster than NeRF. The technique represents scenes using millions of 3D Gaussians in an explicit representation, versus NeRF's implicit neural encoding, enabling the real-time rendering crucial for interactive AR/VR applications.<ref name="gaussian">{{cite web |url=https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/ |title=3D Gaussian Splatting for Real-Time Radiance Field Rendering |publisher=INRIA |access-date=2025-10-27}}</ref>


=== The Role of Edge Computing and the Cloud ===


To overcome the processing and power limitations of mobile XR devices, computationally intensive spatial mapping tasks will increasingly be offloaded to the network edge or the cloud.<ref name="AdeiaBlog">{{cite web |url=https://adeia.com/blog/spatial-mapping-empowering-the-future-of-ar |title=Spatial Mapping: Empowering the Future of AR |publisher=Adeia |access-date=2025-10-23}}</ref> In this '''[[split-compute]]''' model, a lightweight headset would be responsible for capturing raw sensor data and sending it to a powerful nearby edge server. The server would then do the heavy lifting (running SLAM algorithms, generating the mesh, and performing semantic analysis) and stream the resulting map data back to the device with extremely low latency.<ref name="AdeiaBlog"/>


Furthermore, the cloud will play a crucial role in creating and hosting large-scale, persistent spatial maps, often referred to as '''[[digital twin]]s''' or the '''[[AR Cloud]]'''. By aggregating and merging map data from many users, it will be possible to build and maintain a shared, persistent digital replica of real-world locations, enabling multi-user experiences at an unprecedented scale.<ref name="MagicLeapLegal"/><ref name="AdeiaBlog"/>


=== Standardization and Interoperability ===
The current spatial mapping landscape is fragmented, with each major platform (Meta, Apple, Microsoft, etc.) using its own proprietary data formats and APIs.<ref name="AdeiaBlog"/> This lack of interoperability is a significant barrier to creating a unified [[metaverse]] or a truly open AR ecosystem where experiences can be shared seamlessly across different devices.


For the field to mature, industry-wide standards for spatial mapping data will be necessary. Initiatives like [[OpenXR]] provide a crucial first step by standardizing the API for device interaction, but future standards will need to address the format and exchange of the spatial map data itself, including point clouds, meshes, and semantic labels. This will be essential to ensure that a map created by one device can be understood and used by another, fostering a more collaborative and interconnected spatial web.<ref name="AdeiaBlog"/>


Many AR SDKs provide APIs to access spatial mapping data. For instance, Apple's [[ARKit]] can generate a mesh of the environment on devices with a LiDAR Scanner (exposing it through <code>ARMeshAnchor</code> objects), and Google's [[ARCore]] provides a Depth API that yields per-pixel depth maps which can be converted into spatial meshes. These frameworks use the device's camera(s) and sensors to detect real-world surfaces so developers can place virtual content convincingly in the scene.<ref name="AppleDeveloper"/><ref name="arcore"/>
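The conversion from a per-pixel depth map to 3D geometry mentioned above follows standard pinhole-camera unprojection; below is a minimal sketch with hypothetical intrinsics and depth values, not tied to any particular SDK.

<syntaxhighlight lang="python">
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Unproject a per-pixel depth map (in meters) into camera-space 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx          # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * depth / fy
    return np.dstack([x, y, depth]).reshape(-1, 3)

# Hypothetical 2x2 depth map and camera intrinsics, purely for illustration.
depth = np.array([[1.0, 1.0],
                  [2.0, 2.0]])
points = depth_to_points(depth, fx=500.0, fy=500.0, cx=0.5, cy=0.5)
print(points)   # four 3-D points; surface reconstruction (e.g. TSDF fusion) runs on these
</syntaxhighlight>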
<ref name="HoloLensYouTube">{{cite web |url=https://www.youtube.com/watch?v=zff2aQ1RaVo |title=HoloLens - What is Spatial Mapping? |publisher=Microsoft |access-date=2025-10-23}}</ref>
<ref name="SLAMSystems">{{cite web |url=https://www.sbg-systems.com/glossary/slam-simultaneous-localization-and-mapping/ |title=SLAM - Simultaneous localization and mapping |publisher=SBG Systems |access-date=2025-10-23}}</ref>
<ref name="MilvusSensors">{{cite web |url=https://milvus.io/ai-quick-reference/what-sensors-eg-accelerometer-gyroscope-are-essential-in-ar-devices |title=What sensors (e.g., accelerometer, gyroscope) are essential in AR devices? |publisher=Milvus |access-date=2025-10-23}}</ref>
<ref name="MathWorksSLAM">{{cite web |url=https://www.mathworks.com/discovery/slam.html |title=What Is SLAM (Simultaneous Localization and Mapping)? |publisher=MathWorks |access-date=2025-10-23}}</ref>
<ref name="PressbooksSensors">{{cite web |url=https://pressbooks.pub/augmentedrealitymarketing/chapter/sensors-for-arvr/ |title=Sensors for AR/VR |publisher=Pressbooks |access-date=2025-10-23}}</ref>