Spatial anchors

}}


'''Spatial anchors''' are persistent reference points in the real world that [[augmented reality]] (AR) and [[mixed reality]] (MR) systems use to lock virtual objects to a fixed location in physical space.<ref name="MagicLeap">[https://developer-docs.magicleap.cloud/docs/guides/unity/perception/anchors/spatial-anchors-overview/ Magic Leap Developer Docs – Spatial Anchors Overview (2025)]</ref><ref name="MSLearn">[https://learn.microsoft.com/en-us/windows/mixed-reality/design/spatial-anchors Microsoft Learn – Spatial Anchors (2025)]</ref> A spatial anchor establishes a world-locked frame of reference that defines a point in the environment with a unique coordinate frame, capturing a complete '''[[6 degrees of freedom]] (6DOF)''' representation: three translational coordinates (X, Y, Z position) and three rotational coordinates (pitch, yaw, and roll).<ref name="OpenXR">[https://registry.khronos.org/OpenXR/specs/1.1/man/html/XrSpatialAnchorMSFT.html Khronos OpenXR – XR_MSFT_spatial_anchor Extension Specification]</ref><ref name="BrownWiki">[https://www.vrwiki.cs.brown.edu/vr-development-software/unity/spatial-anchors VR Software Wiki – Spatial Anchors in Unity]</ref> The AR device continuously tracks this anchor over time, so that any digital content attached to it remains accurately '''[[world-locked]]''' (tied to a real-world position and orientation) rather than floating or drifting as the user moves.<ref name="ARKitAnchor">[https://www.captechconsulting.com/blogs/visualizing-surfaces-detected-by-arkit CapTech Consulting – ARAnchor ARKit Overview (2019)]</ref>
[[File:Spatial anchors1.png|right|300px]]
[[File:Spatial anchors2.png|right|300px]]
[[File:Spatial anchors4.png|right|300px]]


By rendering virtual objects relative to a spatial anchor's coordinate system, those objects appear fixed in the real world with minimal drift or deviation, even as the user changes their viewpoint or returns to the scene later.<ref name="MagicLeap"/><ref name="OpenXR"/> This capability is essential for creating believable, immersive experiences in which digital elements appear to be a natural part of the user's surroundings, solving the fundamental AR problem of '''drift''', where virtual objects appear to float away from their intended positions as the system's understanding of the environment updates.<ref name="RecreateFAQ">[https://recreate.nl/faq-items/what-is-a-spatial-anchor/ Recreate – What is a spatial anchor?]</ref>


Spatial anchors enable three critical features in AR/MR applications: '''stability''' (virtual content stays precisely fixed in place), '''[[persistence (computer science)|persistence]]''' (virtual content can be saved and reloaded across sessions), and '''collaboration''' (multiple users and devices share a common frame of reference for co-located, multi-user experiences).<ref name="MSLearn"/><ref name="MetaDesignAnchors">[https://developers.meta.com/horizon/design/mr-design-spatial-anchors/ Meta for Developers – Spatial Anchors Design]</ref>


== Definition and core principles ==

=== Feature-based tracking ===


Spatial anchors are fundamentally based on '''trackable feature points''' detected in the environment through [[computer vision]] algorithms.<ref name="Reko3D">[https://reko3d.com/blog/spatial-anchors/ Reko3D XR Glossary – Spatial Anchors (2024)]</ref> The AR platform detects distinctive visual features in camera images, such as corners, edges, T-junctions, and texture patterns, using algorithms like ORB (Oriented FAST and Rotated BRIEF), [[SIFT]] (Scale-Invariant Feature Transform), or SURF (Speeded Up Robust Features).<ref name="JaklAnalysis">[https://www.andreasjakl.com/basics-of-ar-anchors-keypoints-feature-detection/ Andreas Jakl – Basics of AR Anchors and Feature Detection]</ref>


These algorithms extract descriptors from the detected features, build a sparse point cloud, and track features across frames to estimate camera motion. When creating an anchor, the system captures environmental data around the anchor point (visual features and, if available, depth information), computes a transformation matrix representing the anchor's pose, and stores feature descriptors associated with that location.<ref name="JaklAnalysis"/> The system then continuously updates the anchor's position as its understanding of the environment improves, preventing the drift that would otherwise occur.
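
To make the recognition step concrete, the following self-contained sketch (illustrative only, not any platform's actual implementation) matches ORB-style 256-bit binary descriptors stored with an anchor against descriptors extracted from the current camera frame, using Hamming distance and a ratio test:

```csharp
using System.Collections.Generic;
using System.Numerics; // BitOperations.PopCount

// Illustrative sketch: re-identifying an anchor's stored features in a new
// camera frame. Descriptors are ORB-style 256-bit binary strings, packed
// here as four 64-bit words; matching uses Hamming distance.
static class AnchorFeatureMatcher
{
    static int Hamming(ulong[] a, ulong[] b)
    {
        int d = 0;
        for (int i = 0; i < 4; i++)
            d += BitOperations.PopCount(a[i] ^ b[i]);
        return d;
    }

    // For each anchor descriptor, find the nearest frame descriptor and
    // accept it only if it beats a distance threshold and a Lowe-style
    // ratio test against the second-best candidate.
    public static List<(int anchorIdx, int frameIdx)> Match(
        IReadOnlyList<ulong[]> anchorDescriptors,
        IReadOnlyList<ulong[]> frameDescriptors,
        int maxDistance = 64, double ratio = 0.8)
    {
        var matches = new List<(int, int)>();
        for (int i = 0; i < anchorDescriptors.Count; i++)
        {
            int best = int.MaxValue, second = int.MaxValue, bestJ = -1;
            for (int j = 0; j < frameDescriptors.Count; j++)
            {
                int d = Hamming(anchorDescriptors[i], frameDescriptors[j]);
                if (d < best) { second = best; best = d; bestJ = j; }
                else if (d < second) second = d;
            }
            if (bestJ >= 0 && best <= maxDistance && best < ratio * second)
                matches.Add((i, bestJ));
        }
        return matches;
    }
}
```

Given enough surviving matches, the system can re-estimate the anchor's pose from the matched points (typically with a PnP solver inside RANSAC) and correct any accumulated error.
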
=== Geospatial anchoring ===


In some cases, spatial anchors can be defined using geospatial data such as GPS coordinates and maps, allowing virtual content to be tied to a specific latitude and longitude in the real world.<ref name="Reko3D"/> AR platforms now support '''geospatial anchors''' that let developers place content at global positions, anchoring virtual objects by latitude, longitude, and altitude without needing to scan the immediate surroundings.<ref name="ARCoreGeo">[https://developers.googleblog.com/en/make-the-world-your-canvas-with-the-arcore-geospatial-api Google Developers Blog – ARCore Geospatial API Announcement (2022)]</ref> These anchors leverage [[Visual Positioning System]] (VPS) technology that uses pre-captured imagery databases (such as [[Google Street View]]) with machine learning to extract 3D points and match device camera feeds against VPS models, providing centimeter-level accuracy where available.<ref name="NianticVPS">[https://lightship.dev/docs/ardk/3.6/features/lightship_vps/ Niantic Lightship VPS Documentation – Persistent Location Anchors]</ref>
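
A minimal sketch of this workflow with Google's ARCore Extensions for AR Foundation in Unity is shown below. The coordinates are placeholders, and the member names (`AREarthManager`, the `AddAnchor` extension) follow the 1.x Extensions API, so current documentation should be checked:

```csharp
using Google.XR.ARCoreExtensions;     // ARCore Extensions package
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Hedged sketch: placing a WGS84 geospatial anchor once VPS localization
// is tracking. Latitude/longitude/altitude values are placeholders.
public class GeospatialAnchorExample : MonoBehaviour
{
    public ARAnchorManager anchorManager;
    public AREarthManager earthManager;
    public GameObject contentPrefab;

    void Update()
    {
        // The Geospatial API must be localized before anchors can be made.
        if (earthManager.EarthTrackingState != TrackingState.Tracking)
            return;

        if (Input.GetMouseButtonDown(0))
        {
            // Anchor by latitude, longitude, altitude (meters) with an
            // east-up-north (EUN) orientation.
            ARGeospatialAnchor anchor = anchorManager.AddAnchor(
                37.4220, -122.0841, 10.0, Quaternion.identity);
            if (anchor != null)
                Instantiate(contentPrefab, anchor.transform);
        }
    }
}
```
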


== Technical implementation ==

=== Visual-Inertial Odometry ===


Spatial anchors are implemented on top of the device's environmental tracking capabilities, using techniques like [[simultaneous localization and mapping]] (SLAM) to identify visual feature points or surface geometry in the environment.<ref name="Reko3D"/> Modern AR systems achieve robust tracking through '''[[Visual-Inertial Odometry]] (VIO)''', which fuses data from camera sensors with [[Inertial Measurement Unit]] (IMU) sensors, combining accelerometers (which measure linear acceleration) and gyroscopes (which measure rotational velocity).<ref name="VSLAM_MDPI">[https://www.mdpi.com/1424-8220/24/4/1161 MDPI Sensors – Enhancing Outdoor Location-Based AR Anchors Using Visual SLAM]</ref>


Visual tracking provides high accuracy but can fail with motion blur, low texture, or rapid movement, while IMU tracking works well during rapid motion but suffers from drift over time. Fusing these complementary strengths enables the smooth, real-time motion tracking necessary for stable spatial anchors.<ref name="VIO_Research">[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5712971/ PMC – Adaptive Monocular Visual-Inertial SLAM for Real-Time AR Applications]</ref> The VIO process involves IMU preintegration (high-frequency measurements at 100-200 Hz integrated between camera frames), visual feature tracking (extracting features from images at 20-60 Hz), and tightly coupled fusion that combines visual and inertial measurements in a unified optimization.<ref name="VIO_Research"/>
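
The complementary nature of the two sensor streams can be illustrated with a deliberately simplified orientation filter. This is not a full VIO pipeline and not any vendor's code: gyro samples are integrated at high rate, and each lower-rate visual estimate pulls the state back to bound drift.

```csharp
using System.Numerics;

// Simplified illustration of visual-inertial fusion for orientation only.
// Real VIO systems jointly optimize full 6DOF pose, velocity, and sensor
// biases; this sketch shows only the drift-correction idea.
class OrientationFuser
{
    Quaternion q = Quaternion.Identity; // current orientation estimate

    // Called at IMU rate (e.g. 100-200 Hz) with angular velocity in rad/s.
    public void OnGyro(Vector3 omega, float dt)
    {
        float angle = omega.Length() * dt;
        if (angle < 1e-9f) return;                       // no rotation
        Vector3 axis = Vector3.Normalize(omega);
        q = Quaternion.Normalize(q * Quaternion.CreateFromAxisAngle(axis, angle));
    }

    // Called at camera rate (e.g. 20-60 Hz) with the orientation recovered
    // by visual tracking; blending toward it cancels accumulated gyro drift.
    public void OnVisualEstimate(Quaternion visual, float blend = 0.05f)
    {
        q = Quaternion.Normalize(Quaternion.Slerp(q, visual, blend));
    }

    public Quaternion Orientation => q;
}
```
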
=== Depth sensing technologies ===


Depth sensing technologies enhance spatial anchor accuracy through multiple approaches. '''[[Time-of-Flight]] (ToF) sensors''' emit infrared light and measure its return time for a direct depth measurement per pixel, and are used in devices like [[Microsoft HoloLens]] and some smartphones. '''Structured light''' projects known infrared patterns and analyzes pattern deformation to compute depth, while '''stereo vision''' uses two cameras with a known baseline to triangulate depth from disparity.<ref name="DepthSensing">[https://www.slamcore.com/technology/ Slamcore – Next-Level Spatial Intelligence Technology]</ref> These depth sensing methods provide benefits including improved scale estimation (monocular SLAM has scale ambiguity), enhanced plane detection accuracy, more precise anchor placement, and improved tracking robustness in textureless environments.<ref name="DepthSensing"/>
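
For the stereo case, depth follows directly from the pixel disparity between the two views; the helper below states the standard pinhole-stereo relation (SDK-agnostic):

```csharp
static class StereoDepth
{
    // Z = f * B / d: focal length f in pixels, baseline B in meters, and
    // disparity d in pixels between matched features in the two images.
    public static float FromDisparity(float focalLengthPx, float baselineM, float disparityPx)
    {
        if (disparityPx <= 0f)
            return float.PositiveInfinity; // unmatched feature / at infinity
        return focalLengthPx * baselineM / disparityPx;
    }
}
```
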


=== Usage guidelines ===

=== From marker-based to markerless tracking ===


Early AR relied on '''[[fiducial markers]]''': QR codes and printed patterns that had to be placed in the environment in advance. Software libraries like '''[[ARToolKit]]''', first released in 2000, allowed AR applications to recognize these physical markers and use them as stable anchor points for virtual content.<ref name="ARToolKit">[https://www.assemblrworld.com/blog/history-of-augmented-reality Assemblr – The History of Augmented Reality]</ref> This marker-based approach was robust but limited the AR experience to locations where the predefined markers could be placed.


The true breakthrough for modern spatial anchors was the development and consumerization of '''[[markerless tracking]]''', powered by SLAM algorithms. This innovation shifted the burden of recognition from a simple physical marker to the AR system's ability to understand the geometry and unique visual features of the entire surrounding environment, allowing anchors to be placed anywhere in a recognized space.<ref name="RecreateFAQ"/>


=== Google's Project Tango and ARCore (2014-2018) ===

=== Apple's ARKit revolution (2017-present) ===


Apple announced '''[[ARKit]]''' on June 5, 2017 at WWDC in San Jose, releasing it with the iOS 11 beta and Xcode 9 beta. Described by analysts as the "single most important announcement" of WWDC 2017, ARKit instantly created an AR platform for over 100 million iOS devices.<ref name="ARKitHistory">[https://developer.apple.com/augmented-reality/arkit/ Apple Developer – ARKit Overview]</ref> The technology uses Visual-Inertial Odometry, combining camera sensor data with CoreMotion data, and features motion tracking, horizontal plane detection, and light estimation on devices running iOS 11 with an A9 processor or later.<ref name="ARKitHistory"/>


'''ARKit 1.0''' was publicly released on September 19, 2017 with iOS 11, enabling developers to publish ARKit apps to the App Store. '''ARKit 1.5''', released March 29, 2018 with iOS 11.3, added 2D image recognition, vertical plane detection, and auto-focus improvements. '''ARKit 2.0''', announced at WWDC in June 2018, introduced persistent AR experiences (save and resume), shared AR experiences (multiplayer, collaborative sessions), 2D image tracking of moving objects, and 3D object detection and recognition.<ref name="ARKitHistory"/>
== Persistence and sharing ==


A major benefit of spatial anchors is the ability to '''persist''' virtual content across app sessions and to '''share''' content between multiple users in the same location. AR applications can save the state of local anchors (for example, writing them to device storage) and load them in a future session so that previously placed objects reappear in the same physical spot.<ref name="MSLearn3">[https://learn.microsoft.com/en-us/windows/mixed-reality/design/spatial-anchors Microsoft Learn – Persisting and Sharing Spatial Anchors (2025)]</ref><ref name="MetaAnchors">[https://developers.meta.com/horizon/documentation/unity/unity-spatial-anchors-basic-tutorial/ Meta Developers – Spatial Anchors Tutorial]</ref> For instance, a user could place virtual furniture in their room, close the app, and later reopen it to find the furniture anchored exactly where it was left.
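
At the application level this usually means storing a small record per anchor (its ID and the content attached to it) and letting the platform relocalize the anchors themselves on the next launch. A minimal Unity-style sketch, with illustrative type and file names that are not part of any SDK:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using UnityEngine;

// App-level bookkeeping sketch: which prefab belongs to which anchor.
// The anchor poses themselves are persisted and relocalized by the
// platform (ARWorldMap, WorldAnchorStore, OVRSpatialAnchor, etc.).
[Serializable] public class SavedAnchor { public string anchorId; public string prefabName; }
[Serializable] public class SavedAnchorList { public List<SavedAnchor> items = new List<SavedAnchor>(); }

public static class AnchorContentStore
{
    static string FilePath => Path.Combine(Application.persistentDataPath, "anchors.json");

    public static void Save(SavedAnchorList list) =>
        File.WriteAllText(FilePath, JsonUtility.ToJson(list));

    public static SavedAnchorList Load() =>
        File.Exists(FilePath)
            ? JsonUtility.FromJson<SavedAnchorList>(File.ReadAllText(FilePath))
            : new SavedAnchorList();
}
```
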


=== Local persistence ===


On [[Microsoft HoloLens]], local anchors can be persisted to disk (via a WorldAnchorStore) so that holograms remain in place between uses of the app on that device.<ref name="MSLearn3"/> Likewise, [[ARKit]] allows saving an '''[[ARWorldMap]]''', which contains anchor data, to restore a prior session's anchors on the same device. An ARWorldMap is a serialized object that contains a snapshot of the AR session's spatial mapping data, including the positions of all ARAnchor objects that have been created.<ref name="ARKit_WorldMap">[https://developer.apple.com/documentation/arkit/arworldmap Apple Developer – ARWorldMap Documentation]</ref>


This ARWorldMap object can be saved to a file on the device and then reloaded in a future session to restore the anchors and their associated content. Limitations of local persistence include access only on the same device, the need for similar lighting and environmental conditions, and the need for feature-rich environments for relocalization.<ref name="Qualium_Challenges">[https://www.qualium-systems.com/blog/what-are-spatial-anchors-and-why-they-matter/ Qualium Systems – Spatial Anchors Challenges]</ref>
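
In Unity, the ARKit-specific session subsystem exposes this world-map workflow. The sketch below follows the pattern used in the AR Foundation samples; member names may vary across package versions, so it should be read as illustrative:

```csharp
using System.Collections;
using System.IO;
using Unity.Collections;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
#if UNITY_IOS
using UnityEngine.XR.ARKit;
#endif

// Hedged sketch of saving and restoring an ARWorldMap (iOS only).
public class WorldMapPersistence : MonoBehaviour
{
    public ARSession session;

#if UNITY_IOS
    string MapPath => Path.Combine(Application.persistentDataPath, "session.worldmap");

    public IEnumerator SaveWorldMap()
    {
        var subsystem = (ARKitSessionSubsystem)session.subsystem;
        var request = subsystem.GetARWorldMapAsync();      // async capture
        while (!request.status.IsDone())
            yield return null;
        if (request.status.IsError())
            yield break;

        using (var map = request.GetWorldMap())
        using (var bytes = map.Serialize(Allocator.Temp))  // raw map bytes
            File.WriteAllBytes(MapPath, bytes.ToArray());
    }

    public void LoadWorldMap()
    {
        var subsystem = (ARKitSessionSubsystem)session.subsystem;
        using (var bytes = new NativeArray<byte>(File.ReadAllBytes(MapPath), Allocator.Temp))
        {
            if (ARWorldMap.TryDeserialize(bytes, out ARWorldMap map))
                subsystem.ApplyWorldMap(map);              // restores anchors
        }
    }
#endif
}
```
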
To enable multi-user experiences, spatial anchors can be shared across devices by using cloud services or networked sessions. A shared anchor serves as a common reference point so that two or more users see virtual content appearing at the same real-world location from their own perspectives. There are several cloud-based anchor services in the industry designed for this purpose.


'''[[Azure Spatial Anchors]]''' (ASA) by Microsoft provided a cloud backend to which an application could upload local anchors and later retrieve them on another device (or by another user), enabling collaborative mixed reality across [[HoloLens]], iOS, and Android.<ref name="ASA"/> ASA established a common coordinate frame for shared experiences without requiring QR codes or prior environmental setup: every device that located the Azure anchor would align its content to the exact same physical spot in the world.<ref name="ASA"/> The service worked by creating a cloud-based representation of an anchor's surrounding environment using feature descriptors (not actual images) that could be accessed by other devices via a unique ID.<ref name="ASA"/>


Google's [[ARCore]] provides a similar capability with its '''[[Cloud Anchors]]''' API (introduced in 2018). Cloud Anchors allow ARCore developers to host anchor data on a Google-managed cloud service, so that anchors (and the attached AR content) can be resolved on different devices, even across Android and iOS.<ref name="GoogleBlog2018">[https://developers.googleblog.com/2020/10/improving-shared-ar-experiences-with-cloud-anchors.html Google Developers Blog – Improving Shared AR Experiences with Cloud Anchors (2020)]</ref><ref name="ARCoreCloud">[https://developers.google.com/ar/develop/java/cloud-anchors/quickstart ARCore Developer Guide – Cloud Anchors Quickstart]</ref>


In the '''hosting process''', a user places an anchor in their environment, and the ARCore SDK uploads visual data describing the features around the anchor to Google's servers (discarded within 24 hours for privacy). The service processes this data and returns a unique Cloud Anchor ID.<ref name="ARCoreCloud"/> In the '''resolving process''', other users' devices use this ID to query Google's service, which compares the visual features of their current environment with the stored data to find the anchor's original position.<ref name="ARCoreCloud"/>
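
In Unity terms, the flow maps to a host call, a resolve call, and polling the returned anchor's state. The sketch below uses the older synchronous entry points of ARCore Extensions (circa 1.2x); newer releases expose async variants, so treat it as illustrative:

```csharp
using Google.XR.ARCoreExtensions;
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Hedged sketch of the Cloud Anchors host/resolve flow.
public class CloudAnchorFlow : MonoBehaviour
{
    public ARAnchorManager anchorManager;
    ARCloudAnchor pending;

    // Host: upload feature data around a local anchor; TTL up to 365 days.
    public void Host(ARAnchor localAnchor) =>
        pending = anchorManager.HostCloudAnchor(localAnchor, 365);

    // Resolve: recreate the anchor on another device from its cloud ID.
    public void Resolve(string cloudAnchorId) =>
        pending = anchorManager.ResolveCloudAnchorId(cloudAnchorId);

    void Update()
    {
        if (pending == null) return;

        if (pending.cloudAnchorState == CloudAnchorState.Success)
        {
            Debug.Log($"Cloud anchor ready: {pending.cloudAnchorId}");
            // Attach shared content to pending.transform here.
            pending = null;
        }
        else if (pending.cloudAnchorState != CloudAnchorState.TaskInProgress)
        {
            Debug.LogWarning($"Cloud anchor failed: {pending.cloudAnchorState}");
            pending = null;
        }
    }
}
```
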


Initially, Cloud Anchors had a 24-hour time limit (anchors would expire after a day), but in 2020 Google launched '''persistent Cloud Anchors''', which can last much longer (on the order of 1 to 365 days) to support content that users can return to over time.<ref name="GoogleBlog2018"/> Using ARCore's cloud service, multiple people running the same app can place and see shared AR objects in a scene, for example leaving AR graffiti that others can discover later at the same real-world location.


In 2022, Google expanded this concept with the ARCore '''[[Geospatial API]]''', which leverages global mapping data (Street View imagery) to let developers anchor content by latitude and longitude in many cities worldwide.<ref name="ARCoreGeo"/> This effectively creates an '''[[AR cloud]]''' of world-anchored content: end users can point their device at a known location and instantly retrieve virtual content tied to that place. Three types of geospatial anchors are supported: '''WGS84 anchors''' (absolute latitude/longitude/altitude coordinates), '''terrain anchors''' (latitude/longitude with altitude relative to the ground as determined by VPS), and '''rooftop anchors''' (latitude/longitude with altitude relative to building rooftops).<ref name="ARCoreGeo"/>


=== Platform-specific sharing capabilities ===


Other companies have their own spatial anchor solutions. [[Meta]]'s VR/AR platform (for devices like the Meta Quest) supports spatial anchors that can be saved locally on the headset to persist virtual objects in the user's physical space, and it offers '''[[Shared Spatial Anchors]]''' for local multi-user experiences (allowing people in the same room to see each other's anchored content).<ref name="MetaAnchors"/> Sharing anchors on the Quest platform requires a third-party networking solution, such as Photon, to handle the transmission of anchor data between users.<ref name="Meta_SharedAnchors">[https://developers.meta.com/horizon/documentation/unity/unity-shared-spatial-anchors/ Meta for Developers – Shared Spatial Anchors]</ref>


[[Magic Leap]] 2 supports spatial anchors through its '''Spaces''' feature: users map an environment (creating a "Space") and can place anchors within that space which remain persistent between sessions on that device.<ref name="MagicLeap"/> The Magic Leap 2 can store several localized spaces (each containing anchors), though cloud sharing of those anchors was part of the now-deprecated Magic Leap "AR Cloud" platform.


Niantic's [[Lightship]] platform uses a Visual Positioning System (VPS) to allow persistent '''location-based anchors''': developers can place anchors at specific real-world locations (such as a public landmark), and any user who comes to that location with a VPS-enabled app can discover and display the anchored content there.<ref name="NianticVPS"/> Niantic's Lightship VPS provides centimeter-level accuracy for AR device localization, with over 1 million VPS-enabled locations worldwide.<ref name="Niantic_Enterprise">[https://www.nianticspatial.com/blog/spatial-anchors-enterprise-readiness Niantic Spatial – Spatial Anchors Enterprise Readiness (2025)]</ref>


== Support in major AR frameworks ==

=== Apple ARKit ===


[[ARKit]] represents anchors with the `ARAnchor` class, which Apple defines as "a real-world position and orientation that can be used for placing objects in an AR scene."<ref name="ARKitAnchor"/> ARKit provides more than ten anchor types, all inheriting from the base ARAnchor class: '''ARAnchor''' (base class for position and orientation), '''ARPlaneAnchor''' (horizontal and vertical surfaces with semantic classification), '''ARImageAnchor''' (tracked images with scale estimation), '''ARObjectAnchor''' (real-world 3D objects), '''ARBodyAnchor''' (human body positions), '''ARFaceAnchor''' (facial tracking), '''ARGeoAnchor''' (geographic locations using GPS and visual positioning, ARKit 4.0+), '''ARMeshAnchor''' (polygonal mesh geometry using LiDAR, ARKit 3.5+), '''ARParticipantAnchor''' (multi-user AR experiences), '''AREnvironmentProbeAnchor''' (environmental lighting), and '''ARAppClipCodeAnchor''' (App Clip Code tracking, ARKit 4.0+).<ref name="ARKitDocs">[https://developer.apple.com/documentation/arkit/aranchor Apple Developer – ARAnchor Documentation]</ref>


ARKit automatically generates some anchors (for example, when planar surfaces or images are detected, it creates plane anchors or image anchors), and developers can add their own anchors at arbitrary positions. ARKit does not have a built-in cloud anchor service, but it allows multi-user sharing by merging AR sessions: an app can share a map (which contains anchors) with another device to synchronize their coordinate spaces. In ARKit 4, Apple introduced '''Location Anchors (ARGeoAnchor)''', which use high-resolution Apple Maps data in certain cities to anchor content to real-world coordinates (latitude, longitude, altitude), enabling experiences where AR content is tied to specific landmarks or locations.<ref name="AppleLocation">[https://www.apple.com/newsroom/2020/06/ios-14-offers-new-features/ Apple Newsroom – ARKit 4 Location Anchors (2020)]</ref>
=== Google ARCore ===


[[ARCore]] provides an `Anchor` class in its API for locking a virtual object's position. ARCore anchors are often created after a '''hit test''' (raycast) against the environment or attached to detected plane surfaces. ARCore provides four anchor types: '''Local Anchors''' (stored locally, valid for a single app instance), '''Cloud Anchors''' (hosted in Google Cloud, shareable between devices and users), '''Persistent Cloud Anchors''' (ARCore 1.20+, configurable lifetime from 1 to 365 days), and '''Geospatial Anchors''' (based on GPS coordinates plus the Visual Positioning System).<ref name="ARCoreAnchors">[https://developers.google.com/ar/develop/anchors Google ARCore – Working with Anchors]</ref>


In addition to local anchors, ARCore's Cloud Anchors (and persistent Cloud Anchors) enable saving and sharing anchors via Google's cloud. ARCore also offers geospatial anchors through the Geospatial API, using global VPS; developers can create a WGS84 anchor by specifying a latitude, longitude, and altitude, and ARCore will align it using localization from satellite imagery and Street View data.<ref name="ARCoreGeo"/>
=== Microsoft Windows Mixed Reality ===


On [[HoloLens]] and other WMR devices, spatial anchors (sometimes called ''world anchors'') are used to keep holograms fixed in place. The device continuously refines the anchors using its spatial mapping and tracking system. Microsoft provided the '''Azure Spatial Anchors''' cloud service for cross-platform persistence: developers could create an anchor on a HoloLens (as a `CloudSpatialAnchor` in the SDK), upload it to Azure, then locate that same anchor on an iPhone or Android device, allowing a shared experience.<ref name="ASA"/>


Locally, HoloLens also allowed saving anchors to disk and exporting/importing them (via a WorldAnchorTransferBatch in Unity) to persist holograms between app runs.<ref name="MSLearn3"/> The HoloLens system has a limit on how far content can be from an anchor before stability decreases: Microsoft recommends keeping holograms within a few meters of their anchor for best accuracy, specifically noting that holograms more than 3 meters from the anchor origin experience noticeable positional errors.<ref name="MSLearn2"/>
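
A condensed sketch of that save/load pattern with Unity's legacy `WorldAnchor`/`WorldAnchorStore` API (since superseded by OpenXR-based workflows, but it matches the behavior described above):

```csharp
using UnityEngine;
using UnityEngine.XR.WSA;              // legacy Unity WSA APIs
using UnityEngine.XR.WSA.Persistence;

// Hedged sketch: persisting a hologram's anchor on HoloLens with the
// legacy WorldAnchorStore. IDs must be unique within the store.
public class HologramPersistence : MonoBehaviour
{
    WorldAnchorStore store;

    void Start() => WorldAnchorStore.GetAsync(s => store = s);

    public void SaveHologram(GameObject hologram, string id)
    {
        var anchor = hologram.AddComponent<WorldAnchor>(); // world-lock it
        if (store != null && !store.Save(id, anchor))
            Debug.LogWarning("Anchor save failed (id may already exist).");
    }

    public void LoadHologram(GameObject hologram, string id)
    {
        // Re-attaches the stored anchor; the hologram snaps back to its
        // saved real-world pose once the device relocalizes.
        store?.Load(id, hologram);
    }
}
```
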
=== Meta (Oculus) Quest ===


[[Meta]]'s mixed reality SDK supports spatial anchors for its VR/AR headsets. Developers using the Meta Oculus SDK or OpenXR on Quest devices can create anchors (for example, with the `OVRSpatialAnchor` component in Unity) to persist virtual objects in a room. These anchors can be saved to the device's storage so that content remains in place across sessions (Quest headsets can store a certain number of anchors in local memory).<ref name="MetaAnchors"/>


Meta provides a '''[[Shared Spatial Anchors]]''' feature that lets devices in proximity share anchor data with each other for co-located multiplayer experiences (two users in the same room see the same virtual object anchored on a table). In the Meta framework, an anchor is described as a "world-locked frame of reference" for content, underscoring that it maintains its position in the real world rather than moving with the user.<ref name="MetaAnchors"/> Meta supports two sharing models: an older user-based model that requires Oculus User IDs, and a newer, recommended group-based model that uses a shared UUID to simplify the sharing process.<ref name="Meta_SharedAnchors"/>


The `OVRSpatialAnchor` component in Unity enables creating anchors at specific positions, saving anchors locally or to Meta servers, loading previously saved anchors, erasing anchors when no longer needed, and querying for nearby anchors. Configuration requires Anchor Support enabled, Shared Spatial Anchor Support set to Supported, and Enhanced Spatial Services enabled in device settings.<ref name="MetaAnchors"/>
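
A minimal sketch of that lifecycle, using the callback-style `Save` from older Meta XR SDK releases (recent SDKs moved to async/await equivalents, so current documentation should be checked):

```csharp
using System.Collections;
using UnityEngine;

// Hedged sketch: create an OVRSpatialAnchor at a pose, wait for on-device
// creation to finish, then persist it to local headset storage.
public class QuestAnchorExample : MonoBehaviour
{
    public GameObject contentPrefab;

    public void PlaceAnchor(Vector3 position, Quaternion rotation)
    {
        GameObject go = Instantiate(contentPrefab, position, rotation);
        StartCoroutine(CreateAndSave(go.AddComponent<OVRSpatialAnchor>()));
    }

    IEnumerator CreateAndSave(OVRSpatialAnchor anchor)
    {
        // Anchor creation completes asynchronously on the device.
        yield return new WaitUntil(() => anchor.Created);

        // Persist locally so the content survives app restarts.
        anchor.Save((savedAnchor, success) =>
            Debug.Log(success
                ? $"Saved anchor {savedAnchor.Uuid}"
                : "Anchor save failed"));
    }
}
```
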
=== Magic Leap ===


[[Magic Leap]] 1 and 2 devices use spatial anchors as part of their Space mapping. A Magic Leap user scans an environment to establish a Space (a map of the area); within that Space, anchors can be placed to mark where virtual content should appear. Magic Leap 2 supports up to 5 local Spaces stored on the device, allowing a user to save multiple environments with their anchors.<ref name="MagicLeap"/>


When a user returns to a mapped Space, the device can localize itself and restore all anchors (and content) in that space. The platform uses a concept called "Spaces," which are persistent, scanned 3D maps of an environment. Within these Spaces, developers can create spatial anchors that can be stored either locally on the device or in the AR Cloud.<ref name="MagicLeapSpaces">[https://developer-docs.magicleap.cloud/docs/guides/features/spaces/spatial-anchors/ Magic Leap Developer Docs – Spaces and Spatial Anchors]</ref> While Magic Leap's earlier cloud service for sharing anchors (Cloud Anchor / AR Cloud) was phased out, developers can integrate third-party services (or use Magic Leap's local networking) for multi-user scenarios if needed.
=== Unity AR Foundation ===


[[Unity AR Foundation]] provides a '''cross-platform abstraction''' for spatial anchors across ARKit (iOS), ARCore (Android), Magic Leap, Meta Quest, and HoloLens (via OpenXR). The `ARAnchor` component can be added via `AddComponent<ARAnchor>()` on any GameObject, with transforms automatically updated by AR Foundation.<ref name="UnityAnchor"/> Manually changing the transform is not recommended; anchors also enter a pending state before full tracking is established, queryable via the `ARAnchor.pending` property.


The `ARAnchorManager` manages the lifecycle of all anchors in the scene, providing an `anchorsChanged` event reporting added, updated, and removed anchors. The `AttachAnchor(ARPlane plane, Pose pose)` method attaches anchors to detected planes. The system translates between Unity world space and AR session space, compensating for tracking loss and session reinitialization.<ref name="UnityAnchor"/>
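
The canonical AR Foundation workflow is therefore: raycast from a screen tap, attach an anchor to the plane that was hit, and listen for lifecycle events. A sketch against the AR Foundation 4.x/5.x API (the `anchorsChanged` event is deprecated in later major versions):

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Sketch of the cross-platform anchor workflow described above.
public class TapToAnchor : MonoBehaviour
{
    public ARRaycastManager raycastManager;
    public ARAnchorManager anchorManager;
    public ARPlaneManager planeManager;

    static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();

    void OnEnable()  => anchorManager.anchorsChanged += OnAnchorsChanged;
    void OnDisable() => anchorManager.anchorsChanged -= OnAnchorsChanged;

    void Update()
    {
        if (Input.touchCount == 0 || Input.GetTouch(0).phase != TouchPhase.Began)
            return;

        if (raycastManager.Raycast(Input.GetTouch(0).position, hits,
                                   TrackableType.PlaneWithinPolygon))
        {
            ARRaycastHit hit = hits[0];
            ARPlane plane = planeManager.GetPlane(hit.trackableId);
            // Attach to the plane so the anchor tracks refinements to it.
            ARAnchor anchor = anchorManager.AttachAnchor(plane, hit.pose);
            if (anchor != null && anchor.pending)
                Debug.Log("Anchor created; pose pending until tracked.");
        }
    }

    void OnAnchorsChanged(ARAnchorsChangedEventArgs args)
    {
        foreach (ARAnchor a in args.added)   Debug.Log($"Anchor added: {a.trackableId}");
        foreach (ARAnchor a in args.removed) Debug.Log($"Anchor removed: {a.trackableId}");
    }
}
```
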
=== Gaming and entertainment ===


'''Multiplayer gaming''' uses spatial anchors for co-located multiplayer games, where players in the same physical room can interact with a shared virtual world. In titles like '''[[Demeo]]''', a virtual game board is anchored to a physical table, allowing multiple players to see and interact with the same game state from their unique perspectives.<ref name="MetaDesignAnchors"/> Other notable examples include the robot battle game '''[[BAM]]''' and the AR dueling game '''[[Saber City]]'''.<ref name="Reddit_Apps">[https://www.reddit.com/r/OculusQuest/comments/17fr7nc/apps_and_games_with_shared_spatial_anchors/ Reddit – Apps and Games with Shared Spatial Anchors]</ref>


'''[[Pokemon Go]]''' by Niantic uses its AR+ mode, built on ARCore and ARKit, to anchor Pokemon to physical locations with centimeter accuracy through Niantic's Visual Positioning System. The '''Pokemon Playgrounds''' feature enables shared, persistent AR experiences at PokéStops and Gyms, where trainers can place Pokemon for others to discover. With 91+ million active players and 176+ million copies sold, Pokemon Go demonstrated the viability of world-scale AR gaming.<ref name="PokemonPlaygrounds">[https://nianticlabs.com/news/pokemon-playgrounds Niantic Labs – Pokemon Playgrounds Announcement]</ref>


'''[[Minecraft Earth]]''' utilized Azure Spatial Anchors and PlayFab integration to create life-size AR experiences, allowing players to build and share persistent structures in the real world. The game featured "Adventures": small slices of Minecraft worlds rendered in life-size AR on sidewalks and in parks. Though later retired, it demonstrated the technical feasibility of large-scale collaborative AR gaming.<ref name="MinecraftEarth">[https://news.microsoft.com/features/minecraft-earth-azure-spatial-anchors/ Microsoft News – Minecraft Earth Technology]</ref>


=== Collaborative design and productivity ===


Applications such as '''[[Spatial]]''' and '''[[Arkio]]''' leverage shared anchors to create a common virtual space where multiple users can co-create, manipulate, and review 3D models and designs in real time. This effectively turns any room into a collaborative digital studio, enhancing creative and professional workflows.<ref name="Reddit_Apps"/> '''[[ShapesXR]]''' implements real-time co-building, with shadows and copy-paste functionality between spaces, using shared spatial anchors.<ref name="ShapesXR">[https://www.shapesxr.com/post/update-shadows-shared-spatial-anchors-copy-and-paste-between-spaces ShapesXR – Shared Spatial Anchors Update]</ref>

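
The mechanics behind such shared experiences vary by platform, but the general pattern is the same: one device's map and anchor data is serialized and relayed to peers, who merge it into their own tracking session. Below is a minimal sketch using ARKit's collaborative-session API; the network relay (the hypothetical <code>send</code>) is left to the application, and commercial tools such as ShapesXR use their own platform SDKs rather than this exact code:

<syntaxhighlight lang="swift">
import ARKit

// A sketch of a shared-anchor session built on ARKit's collaboration API.
class SharedAnchorSession: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let config = ARWorldTrackingConfiguration()
        config.isCollaborationEnabled = true  // stream map and anchor data to peers
        session.delegate = self
        session.run(config)
    }

    // ARKit periodically emits opaque collaboration blobs; the app must
    // relay them to every peer (e.g. over MultipeerConnectivity).
    func session(_ session: ARSession,
                 didOutputCollaborationData data: ARSession.CollaborationData) {
        guard let payload = try? NSKeyedArchiver.archivedData(
            withRootObject: data, requiringSecureCoding: true) else { return }
        _ = payload  // send(payload) via the app's own networking layer (hypothetical)
    }

    // Called by the app's networking layer when a peer's blob arrives.
    func receive(_ payload: Data) {
        if let data = try? NSKeyedUnarchiver.unarchivedObject(
            ofClass: ARSession.CollaborationData.self, from: payload) {
            session.update(with: data)  // merge the peer's anchors and map
        }
    }
}
</syntaxhighlight>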

=== Retail and commerce ===

'''[[IKEA Place]]''' launched in September 2017 as one of the first major ARKit implementations, featuring 2,000+ true-to-scale 3D furniture models with 98% scale accuracy. The app uses spatial anchors to place virtual furniture persistently in homes, with features including multi-placement (placing multiple items simultaneously), room sets (experiencing entire rooms of handpicked furniture), visual search (pointing the camera at furniture to find similar IKEA products), and wishlist and sharing capabilities.<ref name="IKEAPlace">[https://www.ikea.com/global/en/newsroom/innovation/ikea-launches-ikea-place IKEA – IKEA Place Launch Announcement]</ref>

Built in just seven weeks, IKEA Place uses 3D renders from existing catalogs. Michael Valdsgaard (Digital Transformation Leader) commented: "Augmented reality and virtual reality will be a total game changer for retail in the same way as the internet. Only this time, much faster." The application demonstrates how spatial anchors enable consumers to visualize products in their actual spaces before purchase, reducing return rates and improving purchase confidence.<ref name="IKEAPlace"/>

=== Industrial and enterprise ===

'''[[Remote assistance and maintenance]]:''' In industrial settings, an on-site technician wearing an AR headset can share their view with a remote expert. The expert can then place spatially anchored instructions, diagrams, or annotations directly onto the real-world machinery. '''ThyssenKrupp Elevator Service''' uses HoloLens with Azure Spatial Anchors, enabling remote experts to mark up machinery with virtual annotations visible to on-site technicians. This implementation reduced maintenance time by approximately 30%. Technicians see instructions anchored to specific machine parts, reducing errors and improving first-time fix rates.<ref name="Qualium_UseCases"/>

'''[[Warehouse logistics]]:''' '''[[Honeywell Connected Plant]]''' projects virtual arrows onto warehouse floors for optimized picking paths, improving order-picking speed by approximately 25%. The persistence of spatial anchors ensures the arrows remain accurate across shifts, and the solution has been deployed across multiple warehouse locations, demonstrating scalability.<ref name="Qualium_UseCases"/>

'''[[Worker training]]:''' Complex procedures can be taught more effectively by anchoring step-by-step holographic instructions to specific parts of a machine or workspace. This allows trainees to learn in a hands-on, contextually relevant manner without risk to live equipment. Factory-floor visualization lets workers see machine status, navigate facilities, and access real-time IoT data overlaid on equipment.<ref name="Qualium_UseCases"/>

=== Healthcare and medical education ===

'''[[Pearson Education]]''' has nursing students and professors practice diagnosing and treating virtual patients in 3D real-world settings, using HoloLens and mobile devices with Azure Spatial Anchors' cross-platform support. Jeff Mlakar from Case Western Reserve University stated: "We can reach more students, educators and families by uniting our experiences across mobile and HoloLens devices...With Spatial Anchors' cross-platform support, we can bring our curriculum to life in 3D and share it with everyone."<ref name="CaseWestern">[https://news.microsoft.com/transform/case-western-reserve-pearson-hololens-spatial-anchors/ Microsoft News – Case Western Reserve and Pearson Education]</ref>

VR platforms convert MRI/CT DICOM stacks into interactive 3D reconstructions for surgical planning, enabling pre-surgical rehearsals and multi-disciplinary team reviews. Surgical AR navigation provides intraoperative decision support with metric-accurate volumetric models and AI-driven segmentation for precise tool guidance. Medical training platforms use virtual reality training for emergency medical procedures, with spatial anchors enabling consistent placement of training scenarios.<ref name="Qualium_UseCases"/>

=== Architecture and construction ===

Architecture and construction firms use spatial anchors for '''[[design review]]''' where architects and site workers review building plans overlaid on construction sites. Spatial planning enables visualization of proposed structures in real-world context, while progress tracking compares planned versus actual construction with persistent anchors marking key reference points.<ref name="RecreateFAQ"/>

Theatre set design uses Azure Object Anchors to identify objects (couches, props) and Azure Spatial Anchors to map stage locations for multi-scene prop placement. Museums and exhibits implement interactive exhibits with persistent holographic content, and smart city infrastructure deploys persistent AR overlays for navigation, information displays, and public services.<ref name="Qualium_UseCases"/>

In education, spatial anchors enable persistent educational content across classrooms and campuses with 3D curriculum visualization. Students explore complex subjects through 3D visualizations anchored to physical spaces, and multiple students can work on shared holographic content simultaneously in collaborative projects.<ref name="RecreateFAQ"/>

'''[[Indoor navigation]]:''' In large, complex venues such as airports, museums, or train stations, where GPS is unreliable, spatial anchors can be used to create persistent, turn-by-turn AR navigation paths. These paths can guide visitors directly to their gate, exhibit, or platform, enhancing the visitor experience.<ref name="RecreateFAQ"/>
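
As an illustration of how such a path might be represented, here is a hedged sketch using ARKit: each turn point becomes a named anchor, and persisting the session's world map preserves the route for later visits. The waypoint transforms are assumed to come from the app's own route planner, which is not shown:

<syntaxhighlight lang="swift">
import ARKit
import simd

// Place one named anchor per turn point along a computed route.
func placeWaypoints(along route: [simd_float4x4], in session: ARSession) {
    for (index, transform) in route.enumerated() {
        session.add(anchor: ARAnchor(name: "waypoint-\(index)", transform: transform))
    }
}

// Persisting the world map (and the anchors embedded in it) lets the
// same turn-by-turn path be relocalized on a return visit.
func persistRoute(from session: ARSession) {
    session.getCurrentWorldMap { worldMap, _ in
        guard let map = worldMap else { return }
        _ = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                              requiringSecureCoding: true)
    }
}
</syntaxhighlight>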

'''[[Interactive museum exhibits]]:''' Museums can use spatial anchors to overlay historical information, 3D reconstructions of artifacts, or interactive animations directly onto their physical displays. This provides visitors with a richer, more engaging, and contextually layered educational experience.<ref name="RecreateFAQ"/>

== Technical challenges and limitations ==

=== Environmental constraints ===

'''[[Feature-scarce environments]]''' present the primary challenge: empty white walls, uniform floors, and large glass areas lack the visual features needed for anchor creation, so anchors fail to create or match reliably in these conditions. The practical impact is severe in modern office buildings with minimalist design.<ref name="Qualium_Challenges"/><ref name="XREAL_Limitations">[https://xreal.gitbook.io/nrsdk/development/spatial-anchor XREAL Developer Docs – Spatial Anchor Limitations]</ref>
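
Whether a scene has enough texture to anchor against can be estimated at runtime. A crude pre-flight check on ARKit is sketched below; the 100-point threshold is an illustrative guess, not a documented value:

<syntaxhighlight lang="swift">
import ARKit

// Count the raw feature points ARKit is currently tracking; too few
// suggests anchor creation in this scene will be unreliable.
func sceneLooksFeatureRich(_ frame: ARFrame, minimumPoints: Int = 100) -> Bool {
    return (frame.rawFeaturePoints?.points.count ?? 0) >= minimumPoints
}
</syntaxhighlight>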

'''[[Lighting conditions]]''' cause disruptions when abrupt changes occur (lights turning off or on, or moving between dark and bright areas); anchors may "jump" or temporarily disappear while the system adjusts. Documentation recommends even lighting and avoiding dramatic lighting changes for optimal anchor stability.<ref name="Qualium_Challenges"/><ref name="XREAL_Limitations"/>

'''[[Dynamic environments]]''' with moving objects (people, equipment) occlude reference features, causing tracking issues and anchor instability; this is particularly problematic in crowded spaces or busy warehouses where the environment constantly changes. '''Surface requirements''' exclude transparent, semi-transparent, or reflective surfaces: mirrors, glass, and glossy surfaces prevent the system from detecting and tracking features effectively.<ref name="Qualium_Challenges"/>

=== Drift and accuracy issues ===

'''[[Scale drift]]''' occurs as small tracking errors accumulate over time, causing virtual objects to diverge slowly from their intended positions; drift becomes noticeable beyond roughly 0.2 meters of deviation. Mitigation strategies include regular anchor updates, recalibration, and creating fresh anchors when drift exceeds acceptable thresholds.<ref name="Qualium_Challenges"/><ref name="MagicLeapDrift">[https://developer-docs.magicleap.cloud/docs/guides/features/spaces/spatial-anchors/ Magic Leap Docs – Anchor Drift Mitigation]</ref>
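
One way to implement the fresh-anchor strategy is to compare where anchored content sits in world space against the anchor's latest tracked pose and re-anchor past the threshold. A minimal ARKit/SceneKit sketch, assuming content is kept as an <code>SCNNode</code>; the 0.2 m threshold follows the guidance above:

<syntaxhighlight lang="swift">
import ARKit
import SceneKit
import simd

let driftThreshold: Float = 0.2  // metres; the point where drift becomes noticeable

// If content has diverged from its anchor's tracked position beyond the
// threshold, snap it back and replace the anchor with a fresh one.
func refreshIfDrifted(content: SCNNode, anchor: ARAnchor, session: ARSession) {
    let anchorPosition = simd_make_float3(anchor.transform.columns.3.x,
                                          anchor.transform.columns.3.y,
                                          anchor.transform.columns.3.z)
    if simd_distance(content.simdWorldPosition, anchorPosition) > driftThreshold {
        session.remove(anchor: anchor)
        session.add(anchor: ARAnchor(transform: anchor.transform))
        content.simdWorldPosition = anchorPosition  // re-align content to the anchor
    }
}
</syntaxhighlight>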

Sharing anchors between devices can introduce additional alignment errors, where each user sees the virtual content in a slightly different position. Displacement can be subtle (4-5 cm) but is often significant enough (up to 20 cm in some cases) to break the illusion of stability, especially for applications requiring high precision.<ref name="Meta_Drift">[https://communityforums.atmeta.com/discussions/dev-unity/spatial-anchors-issues Meta Community Forums – Spatial Anchors Issues]</ref>

'''[[Latency issues]]''' in anchor stabilization affect user experience: matching saved anchor data to real-time visuals should ideally take under 5 seconds, and poor performance leads to user frustration and abandonment. Platform variance shows ARKit is typically faster than cloud anchors (ARCore/Azure Spatial Anchors) for initial anchor establishment.<ref name="Qualium_Challenges"/>

=== Scalability and performance ===

=== Best practices for implementation ===

'''[[Environment scanning]]''' requires moving the device slowly and sweeping the camera across all surfaces, with a scan duration of 5-15 seconds recommended. Developers should capture textures, furniture, and paintings for better feature detection while avoiding rapid viewpoint changes or head movements.<ref name="ARCoreCloud"/>
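
On ARKit, scan quality can be checked programmatically before persisting anchors. The sketch below gates saving on the session reporting a fully mapped environment, which is roughly what the recommended 5-15 second sweep achieves; how the archived map is stored is left to the app:

<syntaxhighlight lang="swift">
import ARKit

// Only save the world map (which embeds all current anchors) once ARKit
// reports the surroundings as fully mapped.
func saveWorldMapIfReady(session: ARSession, frame: ARFrame) {
    guard frame.worldMappingStatus == .mapped else {
        // keep prompting the user to sweep slowly across textured surfaces
        return
    }
    session.getCurrentWorldMap { worldMap, _ in
        guard let map = worldMap else { return }
        _ = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                              requiringSecureCoding: true)
    }
}
</syntaxhighlight>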

'''[[Anchor density management]]''' includes creating a mesh hierarchy with a main anchor plus secondary anchors for detail, avoiding overloading small areas with too many anchors, deleting unused anchors to stay within quotas and reduce locate times, and considering the full anchor lifecycle: creation, persistence, sharing, and deletion.<ref name="MSLearn2"/>
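
The main-plus-secondary idea can be approximated by attaching detail content as child nodes of a single tracked anchor rather than creating one anchor per object. A SceneKit sketch, with illustrative offsets and placeholder geometry:

<syntaxhighlight lang="swift">
import ARKit
import SceneKit

// Inside ARSCNViewDelegate's renderer(_:didAdd:for:), the node passed in is
// the one ARKit created for the main anchor; detail hangs off it at fixed
// local offsets, so everything tracks with a single anchor.
func attachDetail(to mainAnchorNode: SCNNode) {
    let label = SCNNode(geometry: SCNSphere(radius: 0.02))
    label.position = SCNVector3(0.3, 0.0, 0.0)   // 30 cm to the anchor's right
    let gauge = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1,
                                         length: 0.01, chamferRadius: 0))
    gauge.position = SCNVector3(-0.3, 0.2, 0.0)  // offset from the same anchor
    mainAnchorNode.addChildNode(label)
    mainAnchorNode.addChildNode(gauge)
}
</syntaxhighlight>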

'''[[Error handling and user feedback]]''' should display clear messages when anchor tracking is limited ("Re-scanning environment to find anchor..."), guide users through the rescanning process, and provide visual indicators of anchor status. Recovery strategies include re-scanning the area regularly if objects drift more than 0.2 meters, creating fresh anchors if drift persists, and falling back to a stationary frame of reference for highly dynamic holograms.<ref name="Qualium_Challenges"/>
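
On ARKit, for example, the reason tracking is limited is exposed on the camera state and maps naturally onto such messages. A sketch, with illustrative strings:

<syntaxhighlight lang="swift">
import ARKit

// Map ARKit's tracking state to a user-facing status message; call this from
// session(_:cameraDidChangeTrackingState:) and show the result in the UI.
func statusMessage(for camera: ARCamera) -> String? {
    switch camera.trackingState {
    case .normal:
        return nil  // anchors tracking normally; no banner needed
    case .notAvailable:
        return "Tracking unavailable"
    case .limited(.insufficientFeatures):
        return "Re-scanning environment to find anchor..."
    case .limited(.excessiveMotion):
        return "Move the device more slowly"
    default:
        return "Hold still while the session relocalizes"
    }
}
</syntaxhighlight>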

== Privacy and ethical considerations ==

=== AI and spatial computing convergence ===

'''[[On-device machine learning]]''' in spatial computing lets Apple's Create ML train object-tracking models directly from the command line, converting 3D model files into trained models for visionOS tracking. '''[[Multimodal AI]]''' bridges spatial and business data, processing text, image, audio, and spatial data types simultaneously.<ref name="Deloitte_Trends">[https://www2.deloitte.com/us/en/insights/focus/tech-trends/2025/tech-trends-future-of-spatial-computing.html Deloitte Tech Trends 2025 – Future of Spatial Computing]</ref>

'''[[Spatial AI]]''', at the intersection of spatial computing and AI/ML, encompasses edge AI processors for local inference, spatial cameras and sensors for AR, hardware-in-the-loop simulation platforms, and geospatial data processing units. Deloitte Tech Trends 2025 predicts spatial computing will converge with '''[[agentic AI]] systems''' that are context-aware and capable of executing functions proactively, able to serve the right content at the right time without explicit commands, and multimodal, processing spatial, visual, and business data simultaneously.<ref name="Deloitte_Trends"/>

=== Future applications ===