}}


'''Spatial anchors''' are persistent reference points in the real world that [[augmented reality]] (AR) and [[mixed reality]] (MR) systems use to lock virtual objects to a fixed location in physical space.<ref name="MagicLeap">[https://developer-docs.magicleap.cloud/docs/guides/unity/perception/anchors/spatial-anchors-overview/ Magic Leap Developer Docs – Spatial Anchors Overview (2025)]</ref><ref name="MSLearn">[https://learn.microsoft.com/en-us/windows/mixed-reality/design/spatial-anchors Microsoft Learn – Spatial Anchors (2025)]</ref> A spatial anchor defines a point in the environment with its own world-locked coordinate frame, capturing a complete '''6 degrees of freedom (6DOF)''' representation: three translational coordinates (X, Y, Z position) and three rotational coordinates (pitch, yaw, roll orientation).<ref name="OpenXR">[https://registry.khronos.org/OpenXR/specs/1.1/man/html/XrSpatialAnchorMSFT.html Khronos OpenXR – XR_MSFT_spatial_anchor Extension Specification]</ref><ref name="BrownWiki">[https://www.vrwiki.cs.brown.edu/vr-development-software/unity/spatial-anchors VR Software Wiki – Spatial Anchors in Unity]</ref> The AR device continuously tracks this anchor over time, so that any digital content attached to it remains accurately '''world-locked''' (tied to a real-world position and orientation) rather than floating or drifting as the user moves.<ref name="ARKitAnchor">[https://www.captechconsulting.com/blogs/visualizing-surfaces-detected-by-arkit CapTech Consulting – ARAnchor ARKit Overview (2019)]</ref>


By rendering virtual objects relative to a spatial anchor's coordinate system, those objects appear fixed in the real world with minimal drift or deviation, even as the user changes their viewpoint or returns to the scene later.<ref name="MagicLeap"/><ref name="OpenXR"/> This capability is essential for creating believable, immersive experiences in which digital elements appear to be a natural part of the user's surroundings, and it addresses the fundamental AR problem of '''drift''', in which virtual objects appear to float away from their intended positions as the system's understanding of the environment updates.<ref name="RecreateFAQ">[https://recreate.nl/faq-items/what-is-a-spatial-anchor/ Recreate – What is a spatial anchor?]</ref>


Spatial anchors enable three critical features in AR/MR applications: '''stability''' (ensuring virtual content stays precisely fixed in place), '''[[persistence (computer science)|persistence]]''' (allowing virtual content to be saved and reloaded across different sessions), and '''collaboration''' (enabling multiple users and devices to share a common frame of reference for co-located, multi-user experiences).<ref name="MSLearn"/><ref name="MetaDesignAnchors">[https://developers.meta.com/horizon/design/mr-design-spatial-anchors/ Meta for Developers – Spatial Anchors Design]</ref>
Each spatial anchor establishes its own adjustable coordinate system that is defined relative to the unique features of the surrounding physical environment.<ref name="MSLearn"/> When an AR device renders a virtual object, it does so within this anchor-relative coordinate system. This technique provides the most precise and stable positioning for the object because the system is not relying on a single, global coordinate system that is prone to drift over large distances or time.<ref name="MSLearn"/>
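The mechanism can be illustrated with a minimal, translation-only sketch in Python (a toy illustration; a real anchor pose is a full 6DOF transform): content stores only its offset in the anchor's frame, so a correction to the anchor's pose automatically carries the attached content with it.

```python
# Toy, translation-only sketch: content is authored as an offset in the
# anchor's local frame, and its world position is always derived from
# the anchor's *current* pose estimate.
def world_position(anchor_pos, local_offset):
    return [a + o for a, o in zip(anchor_pos, local_offset)]

anchor_pos = [1.0, 0.0, -2.0]   # anchor's estimated world position
hologram = [0.0, 0.5, 0.0]      # hologram 0.5 m above the anchor

before = world_position(anchor_pos, hologram)   # [1.0, 0.5, -2.0]

# The tracker refines its world model and nudges the anchor by 3 cm;
# the attached content moves with it, staying locked to the same
# physical spot even though the system's world coordinates shifted.
anchor_pos = [1.03, 0.0, -2.0]
after = world_position(anchor_pos, hologram)    # [1.03, 0.5, -2.0]
```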


The AR system continuously updates its understanding of the environment through its sensors, making small, ongoing adjustments to the anchor's pose (position and orientation) to keep it aligned with the real world. These micro-corrections keep the attached virtual content world-locked and stable.<ref name="MSLearn"/> A critical characteristic is that the world coordinate system is not static: as the AR system's understanding of the environment changes, it adjusts its model of the world to keep things consistent, and the anchor's transformation is automatically updated each frame to compensate for changes in the system's world model.<ref name="ARCoreConcepts">[https://developers.google.com/ar/reference/c/group/concepts Google ARCore – Concepts Documentation]</ref>


The pose is mathematically represented as a '''4×4 homogeneous transformation matrix''' combining rotation (a 3×3 matrix) and translation (a 3×1 vector), describing the rigid transformation from the object's local coordinate space to the world coordinate space.<ref name="ARCoreConcepts"/> This world-locking capability is essential for any AR experience that is not confined to a small, stationary area, creating a distributed network of local, stable reference points.<ref name="MSLearn"/>
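As a concrete sketch, the pure-Python example below builds such a matrix for a yaw-only rotation and maps a point from anchor-local to world coordinates; this is a simplification, since production systems carry full 3-DOF rotations (typically as quaternions or SIMD matrix types).

```python
import math

def pose_matrix(x, y, z, yaw):
    """4x4 homogeneous transform from a translation and a rotation about
    the vertical (Y) axis. Yaw-only is a simplification for clarity."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [
        [c,   0.0, s,   x],
        [0.0, 1.0, 0.0, y],
        [-s,  0.0, c,   z],
        [0.0, 0.0, 0.0, 1.0],
    ]

def transform_point(m, p):
    """Map a 3D point through a 4x4 transform (homogeneous coord = 1)."""
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(3))

# An anchor 2 m in front of the session origin, rotated 90 degrees about Y:
anchor = pose_matrix(0.0, 0.0, -2.0, math.pi / 2)
# A point 1 m along the anchor's local X axis lands at approximately
# (0, 0, -3) in world space: rotated by the anchor, then translated.
world = transform_point(anchor, (1.0, 0.0, 0.0))
```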

=== Feature-based tracking ===


Spatial anchors are fundamentally based on '''trackable feature points''' detected in the environment through [[computer vision]] algorithms.<ref name="Reko3D">[https://reko3d.com/blog/spatial-anchors/ Reko3D XR Glossary – Spatial Anchors (2024)]</ref> The AR platform detects distinctive visual features in camera images, such as corners, edges, T-junctions, and texture patterns, using algorithms like ORB (Oriented FAST and Rotated BRIEF), [[SIFT]] (Scale-Invariant Feature Transform), or SURF (Speeded Up Robust Features).<ref name="JaklAnalysis">[https://www.andreasjakl.com/basics-of-ar-anchors-keypoints-feature-detection/ Andreas Jakl – Basics of AR Anchors and Feature Detection]</ref>


These algorithms extract descriptors from features, create a sparse point cloud, and track features across frames to estimate camera motion. When creating an anchor, the system captures environmental data around the anchor point including visual features and depth information if available, computes a transformation matrix representing the anchor's pose, and stores feature descriptors associated with that location.<ref name="JaklAnalysis"/> The system then continuously updates the anchor's position as its understanding improves, preventing the drift that would otherwise occur.
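How stored descriptors are re-found later can be sketched with toy 8-bit binary descriptors (real ORB descriptors are 256-bit) compared by Hamming distance. The greedy nearest-neighbour matcher below is an illustrative simplification; real relocalization also verifies the geometric consistency of the matched points.

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def match_descriptors(stored, current, max_dist=2):
    """Greedily match each stored anchor descriptor to its nearest
    descriptor in the current frame, keeping only close matches."""
    matches = []
    for i, d in enumerate(stored):
        j, best = min(enumerate(hamming(d, c) for c in current),
                      key=lambda t: t[1])
        if best <= max_dist:
            matches.append((i, j))
    return matches

stored = [0b10110010, 0b01001101, 0b11110000]   # saved with the anchor
current = [0b10110011, 0b00001111, 0b01001101]  # seen in this frame
# stored[0] matches current[0] (distance 1), stored[1] matches
# current[2] (distance 0); stored[2] has no sufficiently close match.
matches = match_descriptors(stored, current)    # [(0, 0), (1, 2)]
```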

=== Geospatial anchoring ===


In some cases, spatial anchors can be defined using geospatial data such as GPS coordinates and maps, allowing virtual content to be tied to a specific latitude and longitude in the real world.<ref name="Reko3D"/> AR platforms now support '''geospatial anchors''' that let developers place content at global positions, anchoring virtual objects by latitude, longitude, and altitude without needing to scan the immediate surroundings.<ref name="ARCoreGeo">[https://developers.googleblog.com/en/make-the-world-your-canvas-with-the-arcore-geospatial-api Google Developers Blog – ARCore Geospatial API Announcement (2022)]</ref> These anchors leverage [[Visual Positioning System]] (VPS) technology that uses pre-captured imagery databases (such as [[Google Street View]]) with machine learning to extract 3D points and match device camera feeds against VPS models, providing centimeter-level accuracy where available.<ref name="NianticVPS">[https://lightship.dev/docs/ardk/3.6/features/lightship_vps/ Niantic Lightship VPS Documentation – Persistent Location Anchors]</ref>
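A rough feel for how a geospatial anchor's latitude/longitude is turned into local metres can be given by a spherical-Earth approximation. This is only a sketch: production systems (ARCore Geospatial, ARKit location anchors) solve this on the WGS84 ellipsoid and refine position and heading with VPS.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius (spherical approximation)

def enu_offset(anchor_lat, anchor_lon, anchor_alt, lat, lon, alt):
    """Approximate East/North/Up offset in metres from the anchor to a
    second geodetic position, assuming a locally flat, spherical Earth."""
    north = math.radians(lat - anchor_lat) * EARTH_RADIUS_M
    east = (math.radians(lon - anchor_lon) * EARTH_RADIUS_M
            * math.cos(math.radians(anchor_lat)))
    up = alt - anchor_alt
    return east, north, up

# A point 0.0001 degrees of latitude away sits roughly 11 m to the north.
east, north, up = enu_offset(48.8584, 2.2945, 0.0, 48.8585, 2.2945, 0.0)
```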


== Technical implementation ==

=== Visual-Inertial Odometry ===


Spatial anchors are implemented on top of the device's environmental tracking capabilities, using techniques like [[simultaneous localization and mapping]] (SLAM) to identify visual feature points or surface geometry in the environment.<ref name="Reko3D"/> Modern AR systems achieve robust tracking through '''Visual-Inertial Odometry (VIO)''', which fuses data from camera sensors with [[Inertial Measurement Unit]] (IMU) sensors, combining accelerometers (which measure linear acceleration) and gyroscopes (which measure rotational velocity).<ref name="VSLAM_MDPI">[https://www.mdpi.com/1424-8220/24/4/1161 MDPI Sensors – Enhancing Outdoor Location-Based AR Anchors Using Visual SLAM]</ref>


Visual tracking provides high accuracy but can fail with motion blur, low texture, or rapid movement, while IMU tracking works well during rapid motion but suffers from drift over time. The fusion of these complementary strengths enables the smooth, real-time motion tracking necessary for stable spatial anchors.<ref name="VIO_Research">[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5712971/ PMC – Adaptive Monocular Visual-Inertial SLAM for Real-Time AR Applications]</ref> The VIO process involves IMU preintegration (high-frequency measurements at 100-200 Hz integrated between camera frames), visual feature tracking (extracting features from images at 20-60 Hz), and tightly coupled fusion that combines visual and inertial measurements in a unified optimization.<ref name="VIO_Research"/>
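The complementary behaviour can be shown with a deliberately simplified 1-D Python sketch (the rates, bias, and blend factor are illustrative; a real pipeline jointly optimizes full 6DOF states): high-rate IMU integration drifts because of sensor bias, and periodic visual fixes pull the estimate back.

```python
# 1-D toy of visual-inertial fusion (illustrative numbers throughout).
IMU_DT = 1.0 / 200.0   # IMU samples at 200 Hz
TRUE_ACCEL = 1.0       # true constant acceleration, m/s^2
BIAS = 0.05            # accelerometer bias: pure integration drifts

def final_position(with_visual_fixes):
    """Integrate one second of biased IMU data; optionally blend in an
    absolute visual fix every 10th sample (i.e. at 20 Hz)."""
    vel = pos = 0.0
    for step in range(1, 201):
        vel += (TRUE_ACCEL + BIAS) * IMU_DT   # biased dead reckoning
        pos += vel * IMU_DT
        if with_visual_fixes and step % 10 == 0:
            true_pos = 0.5 * TRUE_ACCEL * (step * IMU_DT) ** 2
            pos = 0.5 * (pos + true_pos)      # pull estimate toward fix
    return pos

true_final = 0.5 * TRUE_ACCEL * 1.0 ** 2      # 0.5 m after one second
drift_imu_only = abs(final_position(False) - true_final)
drift_fused = abs(final_position(True) - true_final)
# The fused estimate ends up markedly closer to the true position.
```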

=== Usage guidelines ===


Once created, a spatial anchor remains fixed at its real-world location and is not meant to be moved arbitrarily. Anchors are generally used for virtual elements intended to stay in one place (such as a holographic sign positioned on a wall) rather than for moving objects.<ref name="MSLearn2">[https://learn.microsoft.com/en-us/windows/mixed-reality/design/spatial-anchors Microsoft Learn – Spatial Anchors Usage Guidelines (2025)]</ref> Many AR frameworks treat anchors as relatively heavyweight tracking objects: the system will continually adjust and refine the anchor's position to keep it aligned with the environment, which can incur computational cost.<ref name="UnityAnchor">[https://docs.unity3d.com/Packages/[email protected]/manual/anchor-manager.html Unity AR Foundation Documentation – ARAnchorManager]</ref>


Developers are advised to use spatial anchors sparingly and only for content that truly needs persistent world-locking, and to avoid attaching fast-moving or dynamic objects to anchors.<ref name="UnityAnchor"/><ref name="MSLearn2"/> A critical limitation documented by Microsoft is that '''holograms greater than 3 meters from anchor origin experience noticeable positional errors''' due to lever-arm effects: angular errors are small close to the anchor but are magnified with distance.<ref name="MSLearn2"/> The recommended solution is to create new anchors for distant objects rather than relying on a single anchor, treating roughly 3 meters as the effective radius of anchor accuracy.<ref name="MSLearn2"/>
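The scaling behind this guidance is simple trigonometry, sketched below with an assumed 0.2 degree orientation error (an illustrative figure, not a documented spec): positional error grows linearly with distance from the anchor.

```python
import math

def lever_arm_error_m(distance_m, angular_error_deg):
    # Positional error of content rendered at `distance_m` from its
    # anchor when the anchor's orientation estimate is off by the given
    # angle: error = distance * tan(angular error).
    return distance_m * math.tan(math.radians(angular_error_deg))

# With an assumed 0.2 degree orientation error:
#   0.5 m from the anchor -> ~1.7 mm of positional error
#   3.0 m                 -> ~10.5 mm
#  10.0 m                 -> ~34.9 mm
errors = {d: lever_arm_error_m(d, 0.2) for d in (0.5, 3.0, 10.0)}
```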


== History and development ==

=== From marker-based to markerless tracking ===


Early AR relied on '''fiducial markers''': printed patterns and QR-code-like targets that had to be placed in the environment ahead of time. Software libraries like '''ARToolKit''', first released in 2000, allowed AR applications to recognize specific physical markers and use them as stable anchor points for virtual content.<ref name="ARToolKit">[https://www.assemblrworld.com/blog/history-of-augmented-reality Assemblr – The History of Augmented Reality]</ref> This marker-based approach was robust but limited the AR experience to locations where the predefined markers could be placed.


The true breakthrough for modern spatial anchors was the development and consumerization of '''markerless tracking''', powered by SLAM algorithms. This innovation shifted the burden of recognition from a simple physical marker to the AR system's ability to understand the geometry and unique visual features of the entire surrounding environment, allowing anchors to be placed anywhere in a recognized space.<ref name="RecreateFAQ"/>
To enable multi-user experiences, spatial anchors can be shared across devices by using cloud services or networked sessions. A shared anchor serves as a common reference point so that two or more users see virtual content appearing at the same real-world location from their own perspectives. There are several cloud-based anchor services in the industry designed for this purpose.


'''Azure Spatial Anchors''' (ASA) by Microsoft provided a cloud backend to which an application could upload local anchors and later retrieve them on another device (or by another user), enabling collaborative mixed reality across [[HoloLens]], iOS, and Android.<ref name="ASA"/> ASA established a common coordinate frame for shared experiences without requiring any QR codes or prior environmental setup: every device that located the Azure anchor would align its content to the exact same physical spot in the world.<ref name="ASA"/> The service worked by creating a cloud-based representation of an anchor's surrounding environment using feature descriptors (not actual images) that could be accessed by other devices via a unique ID.<ref name="ASA"/>


Google's [[ARCore]] provides a similar capability with its '''Cloud Anchors''' API (introduced in 2018). Cloud Anchors allow ARCore developers to host anchor data on a Google-managed cloud service, so that anchors (and the attached AR content) can be resolved on different devices and even across Android and iOS.<ref name="GoogleBlog2018">[https://developers.googleblog.com/2020/10/improving-shared-ar-experiences-with-cloud-anchors.html Google Developers Blog – Improving Shared AR Experiences with Cloud Anchors (2020)]</ref><ref name="ARCoreCloud">[https://developers.google.com/ar/develop/java/cloud-anchors/quickstart ARCore Developer Guide – Cloud Anchors Quickstart]</ref>
The '''hosting process''' involves a user placing an anchor in their environment, with the ARCore SDK uploading visual data describing the features around the anchor to Google's servers (discarded within 24 hours for privacy). The service processes this data and returns a unique Cloud Anchor ID.<ref name="ARCoreCloud"/> The '''resolving process''' has other users' devices use this ID to query Google's service, which compares the visual features of their current environment with the stored data to find the anchor's original position.<ref name="ARCoreCloud"/>
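The host/resolve round trip can be mimicked with a toy in-memory registry. Everything below (function names, record layout) is invented for illustration and is not the ARCore API; the real service additionally matches the resolving device's camera features against the stored environment data before returning a pose.

```python
import uuid

# In-memory stand-in for a cloud anchor service. The names
# host_anchor/resolve_anchor are hypothetical, for illustration only.
_registry = {}

def host_anchor(feature_descriptors, pose):
    """Store the anchor's environment fingerprint and pose; return an ID."""
    anchor_id = str(uuid.uuid4())
    _registry[anchor_id] = {"features": feature_descriptors, "pose": pose}
    return anchor_id

def resolve_anchor(anchor_id):
    """Look the anchor up by ID; a real service would also verify the
    caller's current camera features against the stored fingerprint."""
    record = _registry.get(anchor_id)
    return None if record is None else record["pose"]

# Device A hosts an anchor; device B later resolves it by its ID.
cloud_id = host_anchor(feature_descriptors=[0b1011, 0b0100],
                       pose=(1.0, 0.0, -2.0))
resolved = resolve_anchor(cloud_id)     # (1.0, 0.0, -2.0)
```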


Initially, Cloud Anchors had a 24-hour time limit (anchors would expire after a day), but in 2020 Google launched '''persistent Cloud Anchors''' which can last much longer (on the order of 1 to 365 days) to support content that users can return to over time.<ref name="GoogleBlog2018"/> Using ARCore's cloud service, multiple people running the same app can place and see shared AR objects in a scene; for example, a user can leave AR graffiti that others discover later at the same real-world location.


In 2022, Google expanded this concept with the ARCore '''Geospatial API''', which leverages global mapping data (Street View imagery) to let developers anchor content by latitude and longitude in many cities worldwide.<ref name="ARCoreGeo"/> This effectively creates an '''AR cloud''' of world-anchored content: end-users can point their device at a known location and instantly retrieve virtual content that is tied to that place. Three types of geospatial anchors are supported: '''WGS84 Anchors''' (absolute latitude/longitude/altitude coordinates), '''Terrain Anchors''' (latitude/longitude with altitude relative to ground determined by VPS), and '''Rooftop Anchors''' (latitude/longitude with altitude relative to building rooftops).<ref name="ARCoreGeo"/>
ARKit represents anchors with the `ARAnchor` class, which Apple defines as "a real-world position and orientation that can be used for placing objects in an AR scene."<ref name="ARKitAnchor"/> ARKit provides '''10+ specific anchor types''', all inheriting from the base ARAnchor class: '''ARAnchor''' (base class for position and orientation), '''ARPlaneAnchor''' (horizontal and vertical surfaces with semantic classification), '''ARImageAnchor''' (tracked images with scale estimation), '''ARObjectAnchor''' (real-world 3D objects), '''ARBodyAnchor''' (human body positions), '''ARFaceAnchor''' (facial tracking), '''ARGeoAnchor''' (geographic locations using GPS and visual positioning, ARKit 4.0+), '''ARMeshAnchor''' (polygonal mesh geometry using LiDAR, ARKit 3.5+), '''ARParticipantAnchor''' (multi-user AR experiences), '''AREnvironmentProbeAnchor''' (environmental lighting), and '''ARAppClipCodeAnchor''' (App Clip Code tracking, ARKit 4.0+).<ref name="ARKitDocs">[https://developer.apple.com/documentation/arkit/aranchor Apple Developer – ARAnchor Documentation]</ref>


ARKit automatically generates some anchors (for example, when it detects planar surfaces or images, it creates plane anchors or image anchors), and developers can add their own anchors at arbitrary positions. ARKit does not have a built-in cloud anchor service, but it allows multi-user sharing by merging AR sessions: an app can share a world map (which contains anchors) with another device to synchronize their coordinate spaces. In ARKit 4, Apple introduced '''Location Anchors (ARGeoAnchor)''', which use high-resolution Apple Maps data in certain cities to anchor content to real-world coordinates (latitude, longitude, altitude), enabling experiences where AR content is tied to specific landmarks or locations.<ref name="AppleLocation">[https://www.apple.com/newsroom/2020/06/ios-14-offers-new-features/ Apple Newsroom – ARKit 4 Location Anchors (2020)]</ref>
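
The mapping a geo anchor implies, from a (latitude, longitude, altitude) coordinate to an offset in the device's local frame, can be illustrated with a flat-Earth approximation. This is not Apple's implementation (which relies on Apple Maps visual localization data); it is a minimal sketch, assuming a spherical Earth and the short ranges at which geo-anchored content is rendered:

```python
import math

# Mean Earth radius in meters (spherical approximation).
EARTH_RADIUS_M = 6_371_000.0

def geo_to_local_enu(anchor_lat, anchor_lon, anchor_alt,
                     device_lat, device_lon, device_alt):
    """Approximate east/north/up offset (meters) from the device to a
    geo-anchored point, using a flat-Earth (equirectangular) model."""
    d_lat = math.radians(anchor_lat - device_lat)
    d_lon = math.radians(anchor_lon - device_lon)
    mean_lat = math.radians((anchor_lat + device_lat) / 2.0)
    east = EARTH_RADIUS_M * d_lon * math.cos(mean_lat)
    north = EARTH_RADIUS_M * d_lat
    up = anchor_alt - device_alt
    return east, north, up

# 0.001 degrees of latitude is roughly 111 m of northward offset.
east, north, up = geo_to_local_enu(37.7750, -122.4194, 10.0,
                                   37.7740, -122.4194, 5.0)
```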


Core anchor properties include `transform` (simd_float4x4 matrix encoding position, orientation, and scale relative to world coordinate space), `name` (string identifier), and `identifier` (unique UUID). ARKit uses visual-inertial odometry for tracking with automatic pose updates as tracking improves, and supports relocalization (iOS 11.3+) for session resumption after interruptions.<ref name="ARKitDocs"/>
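
The `transform` property above can be pictured as a 4x4 rigid-body matrix. ARKit's `simd_float4x4` is column-major; the stdlib-only sketch below uses plain nested rows, where the position occupies the last column. The function names (`anchor_position`, `is_rigid`) are illustrative, not part of the ARKit API:

```python
def anchor_position(transform):
    """Translation component of a 4x4 rigid transform given as nested
    rows; the position sits in the last column (rows 0-2)."""
    return (transform[0][3], transform[1][3], transform[2][3])

def is_rigid(transform, eps=1e-6):
    """The upper-left 3x3 block of a valid anchor transform is a pure
    rotation: its rows are unit length and mutually orthogonal."""
    r = [row[:3] for row in transform[:3]]
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    for i in range(3):
        if abs(dot(r[i], r[i]) - 1.0) > eps:
            return False
        for j in range(i + 1, 3):
            if abs(dot(r[i], r[j])) > eps:
                return False
    return True

# Identity rotation, anchor 1.5 m in front of the session origin (-Z forward).
t = [[1, 0, 0, 0.0],
     [0, 1, 0, 0.0],
     [0, 0, 1, -1.5],
     [0, 0, 0, 1.0]]
```
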
On HoloLens and other WMR devices, spatial anchors (sometimes called ''world anchors'') are used to keep holograms fixed in place. The device continuously refines the anchors using its spatial mapping and tracking system. Microsoft provided the '''Azure Spatial Anchors''' cloud service for cross-platform persistence: developers could create an anchor on a HoloLens (as a `CloudSpatialAnchor` in the SDK), upload it to Azure, then locate that same anchor on an iPhone or Android device, allowing a shared experience.<ref name="ASA"/>

Locally, HoloLens also allowed saving anchors to disk and exporting/importing them (via a `WorldAnchorTransferBatch` in Unity) to persist holograms between app runs.<ref name="MSLearn3"/> The HoloLens system limits how far content can sit from an anchor before stability decreases: Microsoft recommends keeping holograms within a few meters of their anchor for best accuracy, noting that holograms more than 3 meters from the anchor origin show noticeable positional errors.<ref name="MSLearn2"/>
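
The 3-meter guidance translates into a simple runtime check an app can make before content strays too far from its anchor. A minimal sketch; the constant and function name are illustrative, not part of any SDK:

```python
import math

MAX_ANCHOR_DISTANCE_M = 3.0  # Microsoft's recommended stability radius

def needs_new_anchor(hologram_pos, anchor_pos,
                     limit=MAX_ANCHOR_DISTANCE_M):
    """True when a hologram has strayed far enough from its anchor's
    origin that positional error becomes noticeable; at that point an
    app would create a closer anchor and re-parent the content."""
    return math.dist(hologram_pos, anchor_pos) > limit
```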


Azure Spatial Anchors provided unique features including '''Azure Active Directory integration''' for enterprise-grade access control, '''Azure Security''' with confidential computing, backup, and monitoring, and '''IoT Integration''' combined with Azure Digital Twins for spatial intelligence. Privacy protections included data segregation (each customer controls own spatial anchor data store with no cross-subscription sharing), image-less anchor creation (images processed locally with only feature descriptors transmitted to cloud), and pose-only anchor queries (returning only 6DoF pose by default without image data).<ref name="ASA_Privacy">[https://azure.microsoft.com/en-us/blog/azure-spatial-anchors-privacy-and-security/ Microsoft Azure Blog – Azure Spatial Anchors Privacy and Security]</ref>
The `ARAnchorManager` manages the lifecycle of all anchors in the scene, providing an `anchorsChanged` event reporting added, updated, and removed anchors. The `AttachAnchor(ARPlane plane, Pose pose)` method attaches anchors to detected planes. The system translates between Unity world space and AR session space, compensating for tracking loss and session reinitialization.<ref name="UnityAnchor"/>
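
The added/updated/removed classification that an `anchorsChanged`-style event reports amounts to a set difference between consecutive frames. The sketch below mimics that pattern only; Unity's actual API is C# and event-driven:

```python
def diff_anchors(previous, current):
    """Classify anchors the way an anchorsChanged-style event does:
    `previous` and `current` map anchor IDs to poses; an anchor whose
    pose changed between frames is reported as updated."""
    added = [i for i in current if i not in previous]
    removed = [i for i in previous if i not in current]
    updated = [i for i in current
               if i in previous and current[i] != previous[i]]
    return added, updated, removed

prev = {"a": (0, 0, 0), "b": (1, 0, 0)}
curr = {"b": (1, 0, 0.02), "c": (2, 0, 0)}  # "a" lost, "b" refined, "c" new
added, updated, removed = diff_anchors(prev, curr)
```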


Best practices emphasize that anchors are resource-intensive on most platforms: avoid placing multiple anchors within 2 meters of each other, remove anchors when they are no longer needed, and consider disabling plane detection after the initial scan. Platform-specific behaviors also vary: Meta Quest planes are static (not dynamically updated) while ARKit/ARCore planes update at runtime, and different platforms may impose different anchor limits.<ref name="UnityAnchor"/>
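
The 2-meter spacing guideline suggests reusing a nearby anchor rather than creating a new one for each placed object. A minimal sketch of that lookup; the names and strategy are illustrative, not any engine's API:

```python
import math

MIN_ANCHOR_SPACING_M = 2.0  # guideline: avoid anchors closer than this

def anchor_for(position, existing_anchors):
    """Return the nearest existing anchor when one lies within the
    spacing guideline; None means the caller should create (and
    register) a fresh anchor at this position."""
    best, best_d = None, MIN_ANCHOR_SPACING_M
    for anchor_id, anchor_pos in existing_anchors.items():
        d = math.dist(position, anchor_pos)
        if d < best_d:
            best, best_d = anchor_id, d
    return best

anchors = {"table": (0.0, 0.0, 0.0), "door": (5.0, 0.0, 0.0)}
```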


=== OpenXR standard ===
'''Pokemon Go''' by Niantic uses AR+ Mode with ARCore/ARKit to anchor Pokemon to physical locations with centimeter accuracy through their Visual Positioning System. The '''Pokemon Playgrounds''' feature enables shared persistent AR experiences at PokéStops and Gyms where trainers can place Pokemon for others to discover. With 91+ million active players and 176+ million copies sold, Pokemon Go demonstrated the viability of world-scale AR gaming.<ref name="PokemonPlaygrounds">[https://nianticlabs.com/news/pokemon-playgrounds Niantic Labs – Pokemon Playgrounds Announcement]</ref>

'''Minecraft Earth''' used Azure Spatial Anchors and PlayFab integration to create life-size AR experiences, allowing players to build and share persistent structures in the real world. The game featured "Adventures": small slices of Minecraft worlds rendered life-size on sidewalks and in parks. Though later retired, it demonstrated the technical feasibility of large-scale collaborative AR gaming.<ref name="MinecraftEarth">[https://news.microsoft.com/features/minecraft-earth-azure-spatial-anchors/ Microsoft News – Minecraft Earth Technology]</ref>


=== Collaborative design and productivity ===
=== Environmental constraints ===


'''Feature-scarce environments''' present the primary challenge: empty white walls, uniform floors, and large glass areas lack the visual features needed for anchor creation, so anchors fail to create or match reliably. The practical impact is severe in modern office buildings with minimalist design.<ref name="Qualium_Challenges"/><ref name="XREAL_Limitations">[https://xreal.gitbook.io/nrsdk/development/spatial-anchor XREAL Developer Docs – Spatial Anchor Limitations]</ref>


'''Lighting conditions''' cause disruptions when abrupt changes occur (lights turning off/on, moving between dark and bright areas). Anchors may "jump" or temporarily disappear during adjustment. Documentation recommends even lighting and avoiding dramatic lighting changes for optimal anchor stability.<ref name="Qualium_Challenges"/><ref name="XREAL_Limitations"/>

'''Dynamic environments''' with moving objects (people, equipment) occlude reference features, causing tracking issues and anchor instability. This is particularly problematic in crowded spaces or busy warehouses where the environment constantly changes. '''Surface requirements''' exclude transparent, semi-transparent, and reflective surfaces: mirrors, glass, and glossy finishes prevent the system from detecting and tracking features effectively.<ref name="Qualium_Challenges"/>


=== Drift and accuracy issues ===
'''Scale drift''' occurs as small tracking errors accumulate over time, causing "drift" where virtual objects slowly diverge from intended positions. Drift becomes noticeable at greater than 0.2 meters deviation. Mitigation strategies include regular anchor updates, recalibration, and creating fresh anchors when drift exceeds acceptable thresholds.<ref name="Qualium_Challenges"/><ref name="MagicLeapDrift">[https://developer-docs.magicleap.cloud/docs/guides/features/spaces/spatial-anchors/ Magic Leap Docs – Anchor Drift Mitigation]</ref>
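
Under a worst-case systematic error, drift accumulates roughly linearly, which makes the 0.2-meter threshold easy to reason about. An illustrative back-of-the-envelope sketch, not drawn from any SDK:

```python
DRIFT_THRESHOLD_M = 0.2  # deviation at which drift becomes noticeable

def frames_until_refresh(per_frame_error_m, threshold=DRIFT_THRESHOLD_M):
    """Assuming a constant per-frame tracking error in one direction
    (the worst case), count how many frames pass before accumulated
    drift crosses the threshold and the anchor should be recreated."""
    drift, frames = 0.0, 0
    while drift <= threshold:
        drift += per_frame_error_m
        frames += 1
    return frames
```

In practice drift is not this regular; the point is only that millimeter-scale per-frame errors can cross the noticeable threshold within seconds of tracking at typical frame rates.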


Sharing anchors between devices can introduce additional alignment errors, where each user sees the virtual content in a slightly different position; the displacement can be subtle (4–5 cm) but is often significant enough (up to 20 cm in some cases) to break the illusion of stability, especially in applications requiring high precision.<ref name="Meta_Drift">[https://communityforums.atmeta.com/discussions/dev-unity/spatial-anchors-issues Meta Community Forums – Spatial Anchors Issues]</ref>


'''Latency issues''' in anchor stabilization affect user experience: matching saved data to real-time visuals should ideally complete in under 5 seconds, and poor performance leads to user frustration and abandonment. Platform variance exists here too: ARKit is typically faster than cloud anchor services (ARCore Cloud Anchors, Azure Spatial Anchors) at initial anchor establishment.<ref name="Qualium_Challenges"/>


=== Scalability and performance ===
A major systemic challenge is the lack of a universal standard for spatial anchors. Anchors created using Apple's ARKit cannot be natively understood by a device running Google's ARCore, and vice versa.<ref name="Qualium_Challenges"/> This fragmentation creates walled ecosystems, forcing developers to either choose a single platform or invest significant resources in building complex, custom backend solutions to bridge the gap.

The discontinuation of Azure Spatial Anchors, a major cross-platform solution, further underscores the risks for developers who rely on a single proprietary, centralized service for this critical functionality.<ref name="ASA_Sunset"/><ref name="MS_Sunset_Impact">[https://www.mdpi.com/2076-3417/15/13/6959 MDPI – Cross-Platform Framework for Synchronizing Spatial Anchors]</ref>


=== Storage and quota limitations ===
'''Storage limitations''' constrain large-scale deployments. ARCore Cloud Anchors offer a free tier with 24-hour persistence and a business tier with 365-day persistence. Azure Spatial Anchors provided a default quota of 1,000 anchors per account (scalable at additional cost). Meta Quest anchors consume approximately 2.5 MB each and expire after 24 hours in the cloud (local anchors persist until deleted). Management challenges include the risk of losing UUID references, creating "orphaned" anchors that consume storage but cannot be accessed.<ref name="Qualium_Challenges"/>
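
The quota mechanics above, TTL expiry plus orphaned UUIDs, can be sketched with a toy registry. Everything here is illustrative; the TTL is loosely modeled on the 24-hour cloud expiry, and no platform SDK is being imitated:

```python
import time
import uuid

CLOUD_TTL_SECONDS = 24 * 3600  # e.g. Quest cloud anchors expire after 24 h

class AnchorStore:
    """Toy anchor registry illustrating TTL expiry and the 'orphaned
    anchor' hazard: entries whose UUIDs the app no longer holds keep
    consuming quota until they are garbage-collected."""

    def __init__(self):
        self._anchors = {}  # uuid -> creation timestamp

    def save(self, now=None):
        anchor_id = str(uuid.uuid4())
        self._anchors[anchor_id] = now if now is not None else time.time()
        return anchor_id  # the app must persist this UUID to find it later

    def purge_expired(self, now=None, ttl=CLOUD_TTL_SECONDS):
        now = now if now is not None else time.time()
        expired = [a for a, t in self._anchors.items() if now - t > ttl]
        for a in expired:
            del self._anchors[a]
        return len(expired)

    def orphans(self, known_ids):
        """Stored anchors whose UUIDs the app has lost track of."""
        return [a for a in self._anchors if a not in known_ids]

store = AnchorStore()
kept = store.save(now=0.0)
lost = store.save(now=0.0)  # imagine the app then loses this UUID
```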


'''Network dependencies''' mean cloud anchors require stable internet connectivity: hosting and resolving anchors fails without a connection. This limits applicability in industrial environments with restricted network access or outdoor locations with poor coverage. Hybrid approaches that combine local anchors for offline scenarios with cloud anchors for cross-device sharing provide partial mitigation.<ref name="Qualium_Challenges"/>
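
The hybrid mitigation can be sketched as resolve-with-fallback: try the cloud service first (so cross-device sharing works), then fall back to a locally persisted copy when offline. All names here are illustrative, not any vendor's API:

```python
class NetworkUnavailable(Exception):
    """Stand-in for a cloud anchor service being unreachable."""

def resolve_anchor(anchor_id, cloud_resolve, local_store):
    """Return a (pose, source) pair: cloud when reachable, otherwise
    the locally persisted pose. Raises KeyError if neither store
    knows the anchor ID."""
    try:
        return cloud_resolve(anchor_id), "cloud"
    except NetworkUnavailable:
        return local_store[anchor_id], "local"

def offline_cloud(anchor_id):
    raise NetworkUnavailable  # simulate restricted network access

local = {"desk": (0.0, 0.0, -1.0)}
pose, source = resolve_anchor("desk", offline_cloud, local)
```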


=== Relocalization challenges ===


For persistent anchors to function, the user's device must successfully "re-localize": that is, recognize the current environment by matching it to a previously saved map. This process can be a significant point of friction and failure. If the user starts the application from a different position than where the map was created, or if the environment has changed too much, re-localization may fail and the persistent content will not appear.<ref name="SABIAT_Paper">[https://benswift.me/assets/documents/preprints/he_et_al_2021_spatial_anchor_based_indoor_asset_tracking.pdf CSIRO Research – Spatial-Anchor-Based Indoor Asset Tracking]</ref>
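
At its core, re-localization is a matching problem: enough features of the saved map must be re-observed in the live view. A deliberately crude sketch; real systems match high-dimensional descriptors and verify geometric consistency, and the threshold here is invented for illustration:

```python
MATCH_RATIO_THRESHOLD = 0.3  # illustrative; real systems tune this

def relocalized(saved_descriptors, live_descriptors,
                threshold=MATCH_RATIO_THRESHOLD):
    """Succeed when a large enough fraction of the saved map's feature
    descriptors are found in the live view. Real systems also verify
    the geometric consistency of the matches, not just their count."""
    if not saved_descriptors:
        return False
    saved = set(saved_descriptors)
    matched = len(saved & set(live_descriptors))
    return matched / len(saved) >= threshold
```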


Designing an intuitive user experience that effectively guides a user to perform an adequate scan of an area to create a high-quality anchor, or to move their device to a location with enough visual information to re-localize, remains a key challenge for AR application developers.<ref name="ARCore_Unity_Persistent">[https://docs.unity3d.com/Packages/[email protected]/manual/features/anchors/persistent-anchors.html Unity Documentation – Persistent Anchors for ARCore]</ref>
=== Privacy protection measures ===


Major platforms have implemented privacy protections. Azure Spatial Anchors followed three principles: '''data segregation''' (each customer controls its own spatial anchor data store, with no data sharing between subscriptions), '''image-less anchor creation''' (images are processed locally on the device and only derived feature descriptors are transmitted to the cloud; original images are never stored), and '''pose-only anchor queries''' (returning only the 6DoF pose by default, with no image data returned to querying devices).<ref name="ASA_Privacy"/>
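
The image-less flow amounts to computing one-way descriptors on-device and uploading only those plus the pose. In the sketch below a hash stands in for a real visual feature descriptor; everything is illustrative, not the Azure SDK:

```python
import hashlib

def descriptor(patch_bytes):
    """Stand-in for a visual feature descriptor: a compact, one-way
    digest computed on-device. The raw pixels never leave the device."""
    return hashlib.sha256(patch_bytes).hexdigest()[:16]

def build_anchor_payload(image_patches, pose):
    """What an image-less anchor upload would contain: derived feature
    descriptors plus the anchor pose, but no imagery."""
    return {
        "pose": pose,  # 6DoF pose
        "descriptors": [descriptor(p) for p in image_patches],
    }

payload = build_anchor_payload([b"corner-of-desk", b"edge-of-door"],
                               pose=(0.0, 1.2, -0.5))
```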


Best practices include '''minimizing data collection''' (collecting only necessary spatial data, using pose information instead of full imagery), '''providing user control''' (explicit consent for spatial data capture, user access to view/delete stored anchors), and '''implementing technical safeguards''' (obfuscation techniques including pseudonyms and spatial cloaking, differential privacy for location data, secure cloud environments with FISMA and FedRAMP compliance).<ref name="Privacy_ITIF"/>
The AR Cloud can be envisioned as a real-time 3D map of the world, continuously updated by millions of devices and accessible to any AR application.<ref name="Future_ABI">[https://www.abiresearch.com/press/ar-cloud-promises-high-value-future-proof-ar-use-cases-needs-help-enabling-technologies/ ABI Research – AR Cloud Promises High Value, Future Proof AR Use Cases]</ref> This shared digital layer would serve as a universal foundation for anchoring digital content, transforming today's isolated spatial anchors into the equivalent of hyperlinks on a "Spatial Web" or "Spatial Internet."<ref name="Niantic_Enterprise"/> In this paradigm, AR experiences would no longer be confined to individual apps but would exist as a persistent, collaborative, and globally-scaled information layer over reality.

Niantic characterizes the AR Cloud as "one of the most important infrastructures in the history of computing" and the current moment as a "land grab" similar to the early internet: presence, visibility, and brand relevance in the physical world will define market positions.<ref name="Niantic_Enterprise"/>


=== Enabling technologies ===
=== AI and spatial computing convergence ===


'''On-device machine learning''' in spatial computing enables Apple's Create ML to train object tracking models directly from the command line, converting 3D model files through ML training for visionOS tracking. '''Multimodal AI''' bridges spatial and business data, processing text, image, audio, and spatial data types simultaneously.<ref name="Deloitte_Trends">[https://www2.deloitte.com/us/en/insights/focus/tech-trends/2025/tech-trends-future-of-spatial-computing.html Deloitte Tech Trends 2025 – Future of Spatial Computing]</ref>


'''Spatial AI''' at the intersection of spatial computing and AI/ML includes edge AI processors for local inference, spatial cameras and sensors for AR, hardware-in-the-loop simulation platforms, and geospatial data processing units. Deloitte Tech Trends 2025 predicts spatial computing will converge with '''agentic AI systems''' that are context-aware and capable of executing functions proactively, able to serve the right content at the right time without explicit commands, and multimodal while processing spatial, visual, and business data simultaneously.<ref name="Deloitte_Trends"/>