Spatial anchors


'''Spatial anchors''' are persistent reference points in the real world that [[augmented reality]] (AR) and [[mixed reality]] (MR) systems use to lock virtual objects to a fixed location in physical space.<ref name="MagicLeap">[https://developer-docs.magicleap.cloud/docs/guides/unity/perception/anchors/spatial-anchors-overview/ Magic Leap Developer Docs – Spatial Anchors Overview (2025)]</ref><ref name="MSLearn">[https://learn.microsoft.com/en-us/windows/mixed-reality/design/spatial-anchors Microsoft Learn – Spatial Anchors (2025)]</ref> A spatial anchor establishes a world-locked frame of reference that defines a point in the environment with its own coordinate frame, capturing a complete '''[[six degrees of freedom|6 degrees of freedom]] (6DOF)''' pose: three translational coordinates (X, Y, Z position) and three rotational coordinates (pitch, yaw, roll orientation).<ref name="OpenXR">[https://registry.khronos.org/OpenXR/specs/1.1/man/html/XrSpatialAnchorMSFT.html Khronos OpenXR – XR_MSFT_spatial_anchor Extension Specification]</ref><ref name="BrownWiki">[https://www.vrwiki.cs.brown.edu/vr-development-software/unity/spatial-anchors VR Software Wiki – Spatial Anchors in Unity]</ref> The AR device continuously tracks this anchor over time, so that any digital content attached to it remains accurately '''world-locked''' (tied to a real-world position and orientation) rather than floating or drifting as the user moves.<ref name="ARKitAnchor">[https://www.captechconsulting.com/blogs/visualizing-surfaces-detected-by-arkit CapTech Consulting – ARAnchor ARKit Overview (2019)]</ref>
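
A 6DOF anchor pose is commonly represented as a single 4×4 rigid transform combining rotation and translation. The following is a minimal illustrative sketch using NumPy and SciPy; the function and variable names are not taken from any AR SDK:

<syntaxhighlight lang="python">
import numpy as np
from scipy.spatial.transform import Rotation

def anchor_pose(position_xyz, pitch_yaw_roll_deg):
    """Build the 4x4 rigid transform encoding a 6DOF anchor pose.

    position_xyz: (x, y, z) translation in metres, in world coordinates.
    pitch_yaw_roll_deg: rotations about the X, Y and Z axes, in degrees.
    """
    pose = np.eye(4)
    # Rotation part: 3x3 matrix from pitch (X), yaw (Y), roll (Z).
    pose[:3, :3] = Rotation.from_euler("xyz", pitch_yaw_roll_deg, degrees=True).as_matrix()
    # Translation part: where the anchor sits in the world.
    pose[:3, 3] = position_xyz
    return pose

# Example: an anchor 1.5 m in front of the world origin, rotated 90 degrees about the vertical axis.
T_world_anchor = anchor_pose((0.0, 0.0, -1.5), (0.0, 90.0, 0.0))
</syntaxhighlight>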


By rendering virtual objects relative to a spatial anchor's coordinate system, those objects appear fixed in the real world with minimal drift or deviation, even as the user changes their viewpoint or returns to the scene later.<ref name="MagicLeap"/><ref name="OpenXR"/> This capability is essential for creating believable and immersive experiences in which digital elements appear to be a natural part of the user's surroundings, and it addresses the fundamental AR problem of '''drift''', where virtual objects can appear to float away from their intended positions as the system's understanding of the environment updates.<ref name="RecreateFAQ">[https://recreate.nl/faq-items/what-is-a-spatial-anchor/ Recreate – What is a spatial anchor?]</ref>
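
Concretely, world-locking follows from simple matrix composition: content is stored relative to the anchor, and its rendered pose is recomputed every frame from the anchor's latest tracked pose. A minimal sketch, with illustrative names rather than a specific engine's API:

<syntaxhighlight lang="python">
import numpy as np

def world_pose_of_content(T_world_anchor, T_anchor_content):
    """Content attached to an anchor is expressed relative to the anchor,
    so its world pose is recomputed from the anchor's latest tracked pose."""
    return T_world_anchor @ T_anchor_content

def view_pose_of_content(T_world_camera, T_world_anchor, T_anchor_content):
    """Pose used for rendering: the content relative to the current camera.
    When the tracker refines T_world_anchor, the content follows it and
    stays pinned to the same physical spot instead of drifting."""
    T_camera_world = np.linalg.inv(T_world_camera)
    return T_camera_world @ world_pose_of_content(T_world_anchor, T_anchor_content)
</syntaxhighlight>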


Spatial anchors enable three critical features in AR/MR applications: '''stability''' (ensuring virtual content stays precisely fixed in place), '''[[persistence (computer science)|persistence]]''' (allowing virtual content to be saved and reloaded across different sessions), and '''collaboration''' (enabling multiple users and devices to share a common frame of reference for co-located, multi-user experiences).<ref name="MSLearn"/><ref name="MetaDesignAnchors">[https://developers.meta.com/horizon/design/mr-design-spatial-anchors/ Meta for Developers – Spatial Anchors Design]</ref>


== Definition and core principles ==

=== Feature-based tracking ===


Spatial anchors are fundamentally based on '''trackable feature points''' detected in the environment through [[computer vision]] algorithms.<ref name="Reko3D">[https://reko3d.com/blog/spatial-anchors/ Reko3D XR Glossary – Spatial Anchors (2024)]</ref> The AR platform detects distinctive visual features in camera images, such as corners, edges, T-junctions, and texture patterns, using algorithms like ORB (Oriented FAST and Rotated BRIEF), [[SIFT]] (Scale-Invariant Feature Transform), or SURF (Speeded Up Robust Features).<ref name="JaklAnalysis">[https://www.andreasjakl.com/basics-of-ar-anchors-keypoints-feature-detection/ Andreas Jakl – Basics of AR Anchors and Feature Detection]</ref>
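
As an illustration of this step, the OpenCV implementation of ORB can be run directly on a camera frame. The file name below is a placeholder, and real AR pipelines run this on a live camera stream rather than a stored image:

<syntaxhighlight lang="python">
import cv2

# Detect ORB keypoints and binary descriptors in one camera frame.
# These sparse features (corners, edges, texture patterns) are the raw
# material from which an anchor's surroundings are described.
orb = cv2.ORB_create(nfeatures=1000)
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)
keypoints, descriptors = orb.detectAndCompute(frame, None)

# Each keypoint has an image position; each descriptor is a 32-byte
# binary vector that can be matched against later frames.
print(f"{len(keypoints)} keypoints detected")
</syntaxhighlight>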


These algorithms extract descriptors from features, create a sparse point cloud, and track features across frames to estimate camera motion. When creating an anchor, the system captures environmental data around the anchor point, including visual features and depth information if available, computes a transformation matrix representing the anchor's pose, and stores feature descriptors associated with that location.<ref name="JaklAnalysis"/> The system then continuously updates the anchor's position as its understanding of the environment improves, preventing the drift that would otherwise occur.
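
A simplified sketch of the recognition side of this process follows, assuming ORB-style binary descriptors were stored when the anchor was created. The threshold values are illustrative, and a real system would additionally re-estimate the anchor's pose from the matched points:

<syntaxhighlight lang="python">
import cv2

def relocalize(stored_descriptors, current_descriptors, min_matches=30):
    """Match the descriptors stored with an anchor against those of the
    current frame; enough good matches means the anchor's neighbourhood
    has been recognised and its pose can be re-estimated."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(stored_descriptors, current_descriptors)
    good = [m for m in matches if m.distance < 50]  # Hamming-distance threshold (heuristic)
    return len(good) >= min_matches, good
</syntaxhighlight>
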
=== Geospatial anchoring ===


In some cases, spatial anchors can be defined using geospatial data such as GPS coordinates and maps, allowing virtual content to be tied to a specific latitude and longitude in the real world.<ref name="Reko3D"/> AR platforms now support '''geospatial anchors''' that let developers place content at global positions, anchoring virtual objects by latitude, longitude, and altitude without needing to scan the immediate surroundings.<ref name="ARCoreGeo">[https://developers.googleblog.com/en/make-the-world-your-canvas-with-the-arcore-geospatial-api Google Developers Blog – ARCore Geospatial API Announcement (2022)]</ref> These anchors leverage [[Visual Positioning System]] (VPS) technology that uses pre-captured imagery databases (such as [[Google Street View]]) with machine learning to extract 3D points and match device camera feeds against VPS models, providing centimeter-level accuracy where available.<ref name="NianticVPS">[https://lightship.dev/docs/ardk/3.6/features/lightship_vps/ Niantic Lightship VPS Documentation – Persistent Location Anchors]</ref>
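
The underlying geometry can be sketched as follows: given the device's own geodetic estimate, the anchor's latitude, longitude, and altitude are converted into a local metric offset. This is a simplified flat-earth approximation for illustration only; production systems refine the device pose with VPS localisation rather than relying on raw GPS:

<syntaxhighlight lang="python">
import math

EARTH_RADIUS_M = 6_378_137.0  # WGS84 equatorial radius

def geo_anchor_to_local_enu(anchor_lat, anchor_lon, anchor_alt,
                            device_lat, device_lon, device_alt):
    """Approximate East-North-Up offset (in metres) of a geospatial anchor
    relative to the device, valid only over short distances."""
    d_lat = math.radians(anchor_lat - device_lat)
    d_lon = math.radians(anchor_lon - device_lon)
    north = d_lat * EARTH_RADIUS_M
    east = d_lon * EARTH_RADIUS_M * math.cos(math.radians(device_lat))
    up = anchor_alt - device_alt
    return east, north, up
</syntaxhighlight>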


== Technical implementation ==

=== Visual-Inertial Odometry ===


Spatial anchors are implemented on top of the device's environmental tracking capabilities, using techniques like [[simultaneous localization and mapping]] (SLAM) to identify visual feature points or surface geometry in the environment.<ref name="Reko3D"/> Modern AR systems achieve robust tracking through '''Visual-Inertial Odometry (VIO)''', which fuses data from camera sensors with [[Inertial Measurement Unit]] (IMU) sensors, combining accelerometers (which measure linear acceleration) and gyroscopes (which measure rotational velocity).<ref name="VSLAM_MDPI">[https://www.mdpi.com/1424-8220/24/4/1161 MDPI Sensors – Enhancing Outdoor Location-Based AR Anchors Using Visual SLAM]</ref>


Visual tracking provides high accuracy but can fail with motion blur, low texture, or rapid movement, while IMU tracking works well during rapid motion but suffers from drift over time. Fusing these complementary strengths enables the smooth, real-time motion tracking necessary for stable spatial anchors.<ref name="VIO_Research">[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5712971/ PMC – Adaptive Monocular Visual-Inertial SLAM for Real-Time AR Applications]</ref> The VIO process involves IMU preintegration (high-frequency measurements at 100-200 Hz integrated between camera frames), visual feature tracking (extracting features from images at 20-60 Hz), and tightly coupled fusion that combines the visual and inertial measurements in a unified optimization.<ref name="VIO_Research"/>
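
The inertial half of this loop can be sketched in heavily simplified form, with plain Euler integration standing in for proper on-manifold preintegration; the axis conventions, names, and values are illustrative only:

<syntaxhighlight lang="python">
import numpy as np

def propagate_imu(position, velocity, orientation, gyro, accel, dt,
                  gravity=np.array([0.0, -9.81, 0.0])):
    """One ~5-10 ms IMU step between camera frames. Inputs are NumPy arrays:
    position/velocity (3,), orientation (3,3), gyro (rad/s), accel (m/s^2).
    The visual update at the next camera frame corrects the drift that
    accumulates from integrating noisy IMU data."""
    # Orientation update from the gyroscope's angular rate (small-angle approximation).
    wx, wy, wz = gyro * dt
    dR = np.array([[1.0, -wz,  wy],
                   [wz,  1.0, -wx],
                   [-wy, wx,  1.0]])
    orientation = orientation @ dR
    # Rotate body-frame specific force into the world frame, add gravity, integrate twice.
    accel_world = orientation @ accel + gravity
    velocity = velocity + accel_world * dt
    position = position + velocity * dt
    return position, velocity, orientation
</syntaxhighlight>
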
=== Depth sensing technologies ===


Depth sensing technologies enhance spatial anchor accuracy through multiple approaches. '''Time-of-flight (ToF) sensors''' emit infrared light and measure its return time to obtain a direct depth measurement per pixel, and are used in devices like [[Microsoft HoloLens]] and some smartphones. '''Structured light''' projects known infrared patterns and analyzes their deformation to compute depth, while '''stereo vision''' uses two cameras with a known baseline to triangulate depth from disparity.<ref name="DepthSensing">[https://www.slamcore.com/technology/ Slamcore – Next-Level Spatial Intelligence Technology]</ref> These depth sensing methods provide benefits including improved scale estimation (monocular SLAM has scale ambiguity), enhanced plane detection accuracy, more precise anchor placement, and improved tracking robustness in textureless environments.<ref name="DepthSensing"/>
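
For the stereo case, the relationship is depth = focal length × baseline / disparity. A short OpenCV sketch, where the file names and calibration values are placeholders:

<syntaxhighlight lang="python">
import cv2

focal_px = 700.0    # focal length in pixels (from camera calibration)
baseline_m = 0.12   # distance between the two cameras in metres

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching disparity; StereoBM returns fixed-point values scaled by 16.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype("float32") / 16.0

# Depth (metres) for every pixel with a valid disparity.
valid = disparity > 0
depth_m = focal_px * baseline_m / disparity[valid]
</syntaxhighlight>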


=== Usage guidelines ===

=== From marker-based to markerless tracking ===


Early AR relied on '''[[fiducial marker]]s''': printed patterns and QR codes that had to be placed in the environment ahead of time. Software libraries like '''[[ARToolKit]]''', first released in 2000, allowed AR applications to recognize specific physical markers and use them as stable anchor points for virtual content.<ref name="ARToolKit">[https://www.assemblrworld.com/blog/history-of-augmented-reality Assemblr – The History of Augmented Reality]</ref> This marker-based approach was robust but limited the AR experience to locations where these predefined markers could be placed.


The true breakthrough for modern spatial anchors was the development and consumerization of '''markerless tracking''', powered by SLAM algorithms. This innovation shifted the burden of recognition from a simple physical marker to the AR system's ability to understand the geometry and unique visual features of the entire surrounding environment, allowing anchors to be placed anywhere in a recognized space.<ref name="RecreateFAQ"/>


=== Google's Project Tango and ARCore (2014-2018) ===

=== Apple's ARKit revolution (2017-present) ===


Apple announced '''[[ARKit]]''' on June 5, 2017, at WWDC in San Jose, releasing it with the iOS 11 beta and Xcode 9 beta. Described by analysts as the "single most important announcement" from WWDC 2017, ARKit instantly created an AR platform for over 100 million iOS devices.<ref name="ARKitHistory">[https://developer.apple.com/augmented-reality/arkit/ Apple Developer – ARKit Overview]</ref> The technology uses visual-inertial odometry, combining camera sensor data with CoreMotion data, and provides motion tracking, horizontal plane detection, and light estimation on devices running iOS 11 with an A9 processor or later.<ref name="ARKitHistory"/>


'''ARKit 1.0''' was publicly released on September 19, 2017, with iOS 11, enabling developers to publish ARKit apps to the App Store. '''ARKit 1.5''', released on March 29, 2018, with iOS 11.3, added 2D image recognition, vertical plane detection, and auto-focus improvements. '''ARKit 2.0''', announced in June 2018 at WWDC, introduced persistent AR experiences (save and resume), shared AR experiences (multiplayer, collaborative sessions), 2D image tracking of moving objects, and 3D object detection and recognition.<ref name="ARKitHistory"/>
== Persistence and sharing ==


A major benefit of spatial anchors is the ability to '''persist''' virtual content across app sessions and to '''share''' content between multiple users in the same location. AR applications can save the state of local anchors (for example, writing them to device storage) and load them in a future session so that previously placed objects reappear in the same physical spot.<ref name="MSLearn3">[https://learn.microsoft.com/en-us/windows/mixed-reality/design/spatial-anchors Microsoft Learn – Persisting and Sharing Spatial Anchors (2025)]</ref><ref name="MetaAnchors">[https://developers.meta.com/horizon/documentation/unity/unity-spatial-anchors-basic-tutorial/ Meta Developers – Spatial Anchors Tutorial]</ref> For instance, a user could place virtual furniture in their room, close the app, and later reopen it to find the furniture anchored exactly where it was left.
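
The save/load cycle can be illustrated with a generic sketch. This is not the API of any particular platform (real platforms store opaque anchor handles together with environment data); it only shows the shape of the idea:

<syntaxhighlight lang="python">
import json
import numpy as np

ANCHOR_STORE = "anchors.json"  # illustrative on-device file, not a platform API

def save_anchors(anchors):
    """anchors: dict mapping anchor id -> 4x4 world pose (NumPy array)."""
    with open(ANCHOR_STORE, "w") as f:
        json.dump({name: pose.tolist() for name, pose in anchors.items()}, f)

def load_anchors():
    """Reload previously placed anchors; a real platform must still relocalise
    against the stored environment data before these poses are valid again."""
    with open(ANCHOR_STORE) as f:
        return {name: np.array(pose) for name, pose in json.load(f).items()}
</syntaxhighlight>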


=== Local persistence ===


On [[Microsoft HoloLens]], local anchors can be persisted to disk (via a WorldAnchorStore) so that holograms remain in place between uses of the app on that device.<ref name="MSLearn3"/> Likewise, [[ARKit]] allows saving an '''ARWorldMap''', which contains anchor data that can restore a prior session's anchors on the same device. An ARWorldMap is a serialized object that contains a snapshot of the AR session's spatial mapping data, including the positions of all ARAnchor objects that have been created.<ref name="ARKit_WorldMap">[https://developer.apple.com/documentation/arkit/arworldmap Apple Developer – ARWorldMap Documentation]</ref>


This ARWorldMap object can be saved to a file on the device and then reloaded in a future session to restore the anchors and their associated content. Limitations of local persistence include accessibility only on the same device, requiring similar lighting and environment conditions, and needing feature-rich environments for relocalization.<ref name="Qualium_Challenges">[https://www.qualium-systems.com/blog/what-are-spatial-anchors-and-why-they-matter/ Qualium Systems – Spatial Anchors Challenges]</ref>
To enable multi-user experiences, spatial anchors can be shared across devices by using cloud services or networked sessions. A shared anchor serves as a common reference point so that two or more users see virtual content appearing at the same real-world location from their own perspectives. There are several cloud-based anchor services in the industry designed for this purpose.


'''Azure Spatial Anchors''' (ASA) by Microsoft provided a cloud backend to which an application could upload local anchors and later retrieve them on another device (or by another user), enabling collaborative mixed reality across [[HoloLens]], iOS, and Android.<ref name="ASA"/> ASA established a common coordinate frame for shared experiences without requiring any QR codes or prior environmental setup-every device that located the Azure anchor would align its content to the exact same physical spot in the world.<ref name="ASA"/> The service worked by creating a cloud-based representation of an anchor's surrounding environment using feature descriptors (not actual images) that could be accessed by other devices via a unique ID.<ref name="ASA"/>


Google's [[ARCore]] provides a similar capability with its '''Cloud Anchors''' API (introduced in 2018). Cloud Anchors allow ARCore developers to host anchor data on a Google-managed cloud service, so that anchors (and the attached AR content) can be resolved on different devices and even across Android and iOS.<ref name="GoogleBlog2018">[https://developers.googleblog.com/2020/10/improving-shared-ar-experiences-with-cloud-anchors.html Google Developers Blog – Improving Shared AR Experiences with Cloud Anchors (2020)]</ref><ref name="ARCoreCloud">[https://developers.google.com/ar/develop/java/cloud-anchors/quickstart ARCore Developer Guide – Cloud Anchors Quickstart]</ref>


The '''hosting process''' involves a user placing an anchor in their environment; the ARCore SDK uploads visual data describing the features around the anchor to Google's servers (where it is discarded within 24 hours for privacy), and the service processes this data and returns a unique Cloud Anchor ID.<ref name="ARCoreCloud"/> In the '''resolving process''', other users' devices use this ID to query Google's service, which compares the visual features of their current environment with the stored data to locate the anchor's original position.<ref name="ARCoreCloud"/>


Initially, Cloud Anchors had a 24-hour time limit (anchors would expire after a day), but in 2020 Google launched '''persistent Cloud Anchors''', which can last much longer (on the order of 1 to 365 days) to support content that users can return to over time.<ref name="GoogleBlog2018"/> Using ARCore's cloud service, multiple people running the same app can place and see shared AR objects in a scene; for example, users can leave AR graffiti that others discover later at the same real-world location.


In 2022, Google expanded this concept with the ARCore '''Geospatial API''', which leverages global mapping data (Street View imagery) to let developers anchor content by latitude and longitude in many cities worldwide.<ref name="ARCoreGeo"/> This effectively creates an '''AR cloud''' of world-anchored content: end-users can point their device at a known location and instantly retrieve virtual content that is tied to that place. Three types of geospatial anchors are supported: '''WGS84 Anchors''' (absolute latitude/longitude/altitude coordinates), '''Terrain Anchors''' (latitude/longitude with altitude relative to ground determined by VPS), and '''Rooftop Anchors''' (latitude/longitude with altitude relative to building rooftops).<ref name="ARCoreGeo"/>


=== Platform-specific sharing capabilities ===


Other companies have their own spatial anchor solutions. [[Meta]]'s VR/AR platform (for devices like the Meta Quest) supports spatial anchors that can be saved locally on the headset to persist virtual objects in the user's physical space, and it offers '''Shared Spatial Anchors''' for local multi-user experiences (allowing people in the same room to see each other's anchored content).<ref name="MetaAnchors"/> Sharing anchors on the Quest platform requires a third-party networking solution, such as Photon, to handle the transmission of anchor data between users.<ref name="Meta_SharedAnchors">[https://developers.meta.com/horizon/documentation/unity/unity-shared-spatial-anchors/ Meta for Developers – Shared Spatial Anchors]</ref>


[[Magic Leap]] 2 supports spatial anchors through its '''Spaces''' feature: users map an environment (creating a "Space") and can place anchors within that space which remain persistent between sessions on that device.<ref name="MagicLeap"/> The Magic Leap 2 can store several localized spaces (each containing anchors), though cloud sharing of those anchors was part of the now-deprecated Magic Leap "AR Cloud" platform.


Niantic's [[Lightship]] platform uses a Visual Positioning System (VPS) to allow persistent '''location-based anchors''': developers can place anchors at specific real-world locations (such as a public landmark), and any user who comes to that location with a VPS-enabled app can discover and display the anchored content there.<ref name="NianticVPS"/> Niantic's Lightship VPS provides centimeter-level accuracy for AR device localization with over 1 million VPS-enabled locations worldwide.<ref name="Niantic_Enterprise">[https://www.nianticspatial.com/blog/spatial-anchors-enterprise-readiness Niantic Spatial – Spatial Anchors Enterprise Readiness (2025)]</ref>


== Support in major AR frameworks ==