{{Infobox technology
| name = Spatial anchor
| domain = [[Augmented reality]] / [[Mixed reality]]
| purpose = Fixing virtual content to physical locations
| used_by = [[ARKit]], [[ARCore]], [[Microsoft HoloLens]], [[Magic Leap]], [[Meta Quest]], [[OpenXR]]
}}

'''Spatial anchors''' are persistent reference points in the real world that [[augmented reality]] (AR) and [[mixed reality]] (MR) systems use to lock virtual objects to a fixed location in physical space.<ref name="MagicLeap">[https://developer-docs.magicleap.cloud/docs/guides/unity/perception/anchors/spatial-anchors-overview/ Magic Leap Developer Docs – Spatial Anchors Overview (2025)]</ref><ref name="MSLearn">[https://learn.microsoft.com/en-us/windows/mixed-reality/design/spatial-anchors Microsoft Learn – Spatial Anchors (2025)]</ref> A spatial anchor establishes a world-locked frame of reference: a point in the environment with its own unique coordinate frame, capturing a full '''six degrees of freedom (6DOF)''' pose—three translational coordinates (X, Y, Z position) and three rotational coordinates (pitch, yaw, roll orientation).<ref name="OpenXR">[https://registry.khronos.org/OpenXR/specs/1.1/man/html/XrSpatialAnchorMSFT.html Khronos OpenXR – XR_MSFT_spatial_anchor Extension Specification]</ref><ref name="BrownWiki">[https://www.vrwiki.cs.brown.edu/vr-development-software/unity/spatial-anchors VR Software Wiki – Spatial Anchors in Unity]</ref> The AR device tracks the anchor continuously over time, so that any digital content attached to it remains accurately '''world-locked''' (tied to a real-world position and orientation) rather than floating or drifting as the user moves.<ref name="ARKitAnchor">[https://www.captechconsulting.com/blogs/visualizing-surfaces-detected-by-arkit CapTech Consulting – ARAnchor ARKit Overview (2019)]</ref>

By rendering virtual objects relative to a spatial anchor's coordinate system, those objects appear fixed in the real world with minimal drift, even as the user changes viewpoint or returns to the scene later.<ref name="MagicLeap"/><ref name="OpenXR"/> This capability is essential for creating believable, immersive experiences in which digital elements appear to be a natural part of the user's surroundings, and it addresses the fundamental AR problem of '''drift'''—virtual objects appearing to float away from their intended positions as the system's understanding of the environment updates.<ref name="RecreateFAQ">[https://recreate.nl/faq-items/what-is-a-spatial-anchor/ Recreate – What is a spatial anchor?]</ref>

Spatial anchors enable three critical features in AR/MR applications: '''stability''' (virtual content stays precisely fixed in place), '''[[persistence (computer science)|persistence]]''' (virtual content can be saved and reloaded across sessions), and '''collaboration''' (multiple users and devices share a common frame of reference for co-located, multi-user experiences).<ref name="MSLearn"/><ref name="MetaDesignAnchors">[https://developers.meta.com/horizon/design/mr-design-spatial-anchors/ Meta for Developers – Spatial Anchors Design]</ref>

== Definition and core principles ==

=== World-locking and coordinate systems ===

Each spatial anchor establishes its own adjustable coordinate system, defined relative to distinctive features of the surrounding physical environment.<ref name="MSLearn"/> When an AR device renders a virtual object, it does so within this anchor-relative coordinate system. This provides the most precise and stable positioning because the system does not rely on a single global coordinate system, which is prone to drift over large distances and long durations.<ref name="MSLearn"/>

The AR system continuously updates its understanding of the environment through its sensors, making small, ongoing adjustments to each anchor's pose (position and orientation) to keep it aligned with the real world. These micro-corrections are what keep the attached virtual content world-locked and stable.<ref name="MSLearn"/> Critically, the world coordinate system is not static: as the AR system's understanding of the environment changes, it adjusts its model of the world to keep things consistent, and each anchor's transformation is automatically updated every frame to compensate.<ref name="ARCoreConcepts">[https://developers.google.com/ar/reference/c/group/concepts Google ARCore – Concepts Documentation]</ref>

The pose is mathematically represented as a '''4×4 homogeneous transformation matrix''' combining a rotation (3×3 matrix) and a translation (3×1 vector), describing the rigid transformation from the object's local coordinate space to the world coordinate space.<ref name="ARCoreConcepts"/> This world-locking capability is essential for any AR experience not confined to a small, stationary area, creating a distributed network of local, stable reference points.<ref name="MSLearn"/>
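
Writing <math>T</math> for this anchor-to-world transform, the relationship can be made explicit; this is standard rigid-body notation rather than any platform-specific API:

:<math>T = \begin{bmatrix} R & t \\ \mathbf{0}^\top & 1 \end{bmatrix}, \qquad p_{\text{world}} = T \, p_{\text{anchor}},</math>

where <math>R</math> is the 3×3 rotation, <math>t</math> the 3×1 translation, and <math>p_{\text{anchor}}</math> a point in the anchor's local space expressed in homogeneous coordinates. When the system refines an anchor, only <math>T</math> changes; all content authored relative to the anchor follows automatically.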

=== Feature-based tracking ===

Spatial anchors are fundamentally based on '''trackable feature points''' detected in the environment through [[computer vision]] algorithms.<ref name="Reko3D">[https://reko3d.com/blog/spatial-anchors/ Reko3D XR Glossary – Spatial Anchors (2024)]</ref> The AR platform detects distinctive visual features in camera images—such as corners, edges, T-junctions, and texture patterns—using algorithms like ORB (Oriented FAST and Rotated BRIEF), [[SIFT]] (Scale-Invariant Feature Transform), or SURF (Speeded Up Robust Features).<ref name="JaklAnalysis">[https://www.andreasjakl.com/basics-of-ar-anchors-keypoints-feature-detection/ Andreas Jakl – Basics of AR Anchors and Feature Detection]</ref>

These algorithms extract descriptors from the detected features, build a sparse point cloud, and track features across frames to estimate camera motion. When creating an anchor, the system captures environmental data around the anchor point (visual features plus depth information, if available), computes a transformation matrix representing the anchor's pose, and stores the feature descriptors associated with that location.<ref name="JaklAnalysis"/> The system then continuously refines the anchor's position as its understanding of the environment improves, preventing the drift that would otherwise occur.
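
Conceptually, the data captured at anchor-creation time can be pictured as a small record pairing a pose with the features needed for later relocalization. The C# sketch below is purely illustrative; the type and field names are invented, and real platforms store richer, proprietary map data:

<syntaxhighlight lang="csharp">
using System;
using System.Collections.Generic;
using System.Numerics; // Matrix4x4, Vector3

// Hypothetical sketch of what an anchor conceptually stores: a pose in
// world space plus feature data that lets the tracker re-find the spot.
public sealed class AnchorRecord
{
    public Guid Id { get; } = Guid.NewGuid();

    // 4x4 homogeneous transform from anchor-local space to world space.
    public Matrix4x4 LocalToWorld { get; set; }

    // Compact descriptors (e.g., 32-byte ORB) for the surrounding features.
    public List<byte[]> FeatureDescriptors { get; } = new();

    // Sparse 3D positions of those features near the anchor point.
    public List<Vector3> FeaturePositions { get; } = new();
}
</syntaxhighlight>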

=== Geospatial anchoring ===

In some cases, spatial anchors can be defined using geospatial data such as GPS coordinates and maps, tying virtual content to a specific latitude and longitude in the real world.<ref name="Reko3D"/> AR platforms now support '''geospatial anchors''' that let developers place content at global positions—anchoring virtual objects by latitude, longitude, and altitude without scanning the immediate surroundings first.<ref name="ARCoreGeo">[https://developers.googleblog.com/en/make-the-world-your-canvas-with-the-arcore-geospatial-api Google Developers Blog – ARCore Geospatial API Announcement (2022)]</ref> These anchors leverage [[Visual Positioning System]] (VPS) technology, which uses pre-captured imagery databases (such as [[Google Street View]]) and machine learning to extract 3D points and match device camera feeds against VPS models, providing centimeter-level accuracy where available.<ref name="NianticVPS">[https://lightship.dev/docs/ardk/3.6/features/lightship_vps/ Niantic Lightship VPS Documentation – Persistent Location Anchors]</ref>

== Technical implementation ==

=== Visual-inertial odometry ===

Spatial anchors are implemented on top of the device's environmental tracking capabilities, using techniques such as [[simultaneous localization and mapping]] (SLAM) to identify visual feature points and surface geometry in the environment.<ref name="Reko3D"/> Modern AR systems achieve robust tracking through '''visual-inertial odometry (VIO)''', which fuses data from camera sensors with [[Inertial Measurement Unit]] (IMU) sensors—combining accelerometers (measuring linear acceleration) and gyroscopes (measuring angular velocity).<ref name="VSLAM_MDPI">[https://www.mdpi.com/1424-8220/24/4/1161 MDPI Sensors – Enhancing Outdoor Location-Based AR Anchors Using Visual SLAM]</ref>

Visual tracking provides high accuracy but can fail under motion blur, low texture, or rapid movement, while IMU tracking works well during rapid motion but drifts over time. Fusing these complementary strengths enables the smooth, real-time motion tracking required for stable spatial anchors.<ref name="VIO_Research">[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5712971/ PMC – Adaptive Monocular Visual-Inertial SLAM for Real-Time AR Applications]</ref> The VIO pipeline involves IMU preintegration (high-frequency measurements at 100–200 Hz integrated between camera frames), visual feature tracking (extracting features from images at 20–60 Hz), and tightly-coupled fusion that combines visual and inertial measurements in a unified optimization.<ref name="VIO_Research"/>
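
The complementary roles of the two sensors can be illustrated with a deliberately simplified orientation filter: gyroscope samples are integrated at a high rate, and each (slower) camera-derived estimate pulls the result back toward a drift-free value. Real systems use tightly-coupled optimization instead, and all names below are invented for illustration:

<syntaxhighlight lang="csharp">
using System.Numerics;

// Toy gyro-plus-camera orientation filter; NOT a real tightly-coupled VIO.
public sealed class ToyOrientationFilter
{
    public Quaternion Orientation { get; private set; } = Quaternion.Identity;

    // Called at IMU rate (e.g., 200 Hz): integrate angular velocity (rad/s).
    public void OnGyro(Vector3 angularVelocity, float dt)
    {
        float angle = angularVelocity.Length() * dt;
        if (angle < 1e-9f) return;
        Vector3 axis = Vector3.Normalize(angularVelocity);
        Orientation = Quaternion.Normalize(
            Orientation * Quaternion.CreateFromAxisAngle(axis, angle));
    }

    // Called at camera rate (e.g., 30 Hz): blend toward the vision estimate,
    // which is lower-frequency but does not accumulate drift.
    public void OnVisionOrientation(Quaternion visionEstimate, float blend = 0.1f)
    {
        Orientation = Quaternion.Normalize(
            Quaternion.Slerp(Orientation, visionEstimate, blend));
    }
}
</syntaxhighlight>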

=== Depth sensing technologies ===

Depth sensing enhances spatial anchor accuracy through several approaches. '''Time-of-flight (ToF) sensors''' emit infrared light and measure its return time for a direct depth measurement per pixel, and are used in devices such as [[Microsoft HoloLens]] and some smartphones. '''Structured light''' projects known infrared patterns and analyzes their deformation to compute depth, while '''stereo vision''' uses two cameras with a known baseline to triangulate depth from disparity.<ref name="DepthSensing">[https://www.slamcore.com/technology/ Slamcore – Next-Level Spatial Intelligence Technology]</ref> These methods improve scale estimation (monocular SLAM suffers from scale ambiguity), plane detection accuracy, anchor placement precision, and tracking robustness in textureless environments.<ref name="DepthSensing"/>
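
For the stereo case, the triangulation step reduces to a single relation: with focal length <math>f</math> (in pixels), baseline <math>B</math>, and disparity <math>d</math> (in pixels), depth is <math>Z = fB/d</math>. For example, with <math>f = 700</math> px, <math>B = 0.1</math> m, and <math>d = 20</math> px, the point lies at <math>Z = 3.5</math> m. A minimal helper (illustrative names, not any platform's API):

<syntaxhighlight lang="csharp">
// Depth from stereo disparity: Z = f * B / d.
public static class StereoDepth
{
    // focalPixels: focal length in pixels; baselineMeters: camera separation;
    // disparityPixels: horizontal pixel offset of the same feature in both views.
    public static double FromDisparity(double focalPixels, double baselineMeters,
                                       double disparityPixels)
    {
        if (disparityPixels <= 0)
            throw new System.ArgumentOutOfRangeException(nameof(disparityPixels));
        return focalPixels * baselineMeters / disparityPixels;
    }
}
</syntaxhighlight>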

=== Usage guidelines ===

Once created, a spatial anchor remains fixed at its real-world location and is not meant to be moved arbitrarily. Anchors are generally used for virtual elements intended to stay in one place (such as a holographic sign positioned on a wall) rather than for moving objects.<ref name="MSLearn2">[https://learn.microsoft.com/en-us/windows/mixed-reality/design/spatial-anchors Microsoft Learn – Spatial Anchors Usage Guidelines (2025)]</ref> Many AR frameworks treat anchors as relatively heavyweight tracking objects—the system will continually adjust and refine each anchor's position to keep it aligned with the environment, which incurs computational cost.<ref name="UnityAnchor">[https://docs.unity3d.com/Packages/com.unity.xr.arfoundation@4.2/manual/anchor-manager.html Unity AR Foundation Documentation – ARAnchorManager]</ref>

Developers are advised to use spatial anchors sparingly, only for content that truly needs persistent world-locking, and to avoid attaching fast-moving or dynamic objects to anchors.<ref name="UnityAnchor"/><ref name="MSLearn2"/> A documented limitation from Microsoft is that '''holograms more than 3 meters from an anchor's origin exhibit noticeable positional errors''' due to lever-arm effects: a small angular error in the anchor's pose is magnified in proportion to distance.<ref name="MSLearn2"/> The recommended remedy is to create new anchors for distant objects rather than stretching a single anchor, treating roughly 3 meters as the effective radius of anchor accuracy.<ref name="MSLearn2"/>
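
The lever-arm effect is straightforward to quantify: content at distance <math>r</math> from the anchor origin is displaced by approximately <math>r\theta</math> for a small orientation error of <math>\theta</math> radians. As an illustrative calculation, an orientation error of just 0.2° (about 0.0035 rad) shifts content 3 m away by roughly <math>3 \times 0.0035 \approx 1\,\text{cm}</math>, while content 30 cm away shifts only about 1 mm, which is why distant content warrants its own anchor.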

== History and development ==

=== Academic foundations (1960s–1990s) ===

The foundational technologies underlying spatial anchors emerged over decades. Visual odometry research began in the '''1960s''', when Stanford University built a lunar rover that established early foundations for estimating position from visual information.<ref name="HistoryVO">[https://rpg.ifi.uzh.ch/research_vo.html University of Zurich – Visual Odometry Research History]</ref> '''Hans Moravec''' demonstrated correspondence-based stereo navigation in '''1980''', using feature matching between stereo images.<ref name="HistoryVO"/>

The genesis of probabilistic SLAM occurred in '''1986''' at the IEEE Robotics and Automation Conference in San Francisco, considered the origin point of SLAM research. The term "SLAM" was first used in '''1995''' at the Seventh International Symposium of Robotics Research in Munich, Germany, by Durrant-Whyte and colleagues.<ref name="SLAMHistory">[https://onlinelibrary.wiley.com/doi/10.1155/2021/2054828 Wiley – Visual and Visual-Inertial SLAM: State of the Art]</ref>

'''Ronald Azuma''' published his seminal survey "A Survey of Augmented Reality" in '''1997''' in ''Presence: Teleoperators and Virtual Environments''; it became the most-referenced publication in the AR field. The work defined AR requirements, described registration problems and sensing errors, and summarized developments that would eventually enable spatial anchors.<ref name="AzumaSurvey">[https://en.wikipedia.org/wiki/Augmented_reality Wikipedia – Augmented Reality History]</ref> The term "visual odometry" was coined by '''Nistér et al. in 2004'''.<ref name="HistoryVO"/>

=== From marker-based to markerless tracking ===

Early AR relied on '''fiducial markers'''—QR-code-like printed patterns that had to be placed in the environment ahead of time. Software libraries such as '''ARToolKit''', first released in 2000, allowed AR applications to recognize specific physical markers and use them as stable anchor points for virtual content.<ref name="ARToolKit">[https://www.assemblrworld.com/blog/history-of-augmented-reality Assemblr – The History of Augmented Reality]</ref> This marker-based approach was robust but limited AR experiences to locations where predefined markers could be placed.

The true breakthrough for modern spatial anchors was the development and consumerization of '''markerless tracking''', powered by SLAM algorithms. This innovation shifted the burden of recognition from a simple physical marker to the AR system's ability to understand the geometry and unique visual features of the entire surrounding environment, allowing anchors to be placed anywhere in a recognized space.<ref name="RecreateFAQ"/>

=== Google's Project Tango and ARCore (2014–2018) ===

Google ATAP (Advanced Technology and Projects) launched '''Project Tango''' in 2014; it was the first project to graduate from Google X. Tango used specialized hardware including RGB cameras, motion-tracking cameras, IR depth sensors, accelerometers, gyroscopes, and the Movidius Myriad 1 vision processor.<ref name="TangoHistory">[https://en.wikipedia.org/wiki/Tango_(platform) Wikipedia – Google Tango Platform]</ref> The '''Lenovo Phab 2 Pro''', launched in August 2016 at $499, was the first commercial Tango smartphone, followed by the '''ASUS ZenFone AR''', announced at CES 2017 as the second Tango device.<ref name="TangoHistory"/>

On December 15, 2017, Google announced that Tango support would end effective March 1, 2018. The specialized-hardware approach had proved too limiting for mass adoption, paving the way for software-only solutions.<ref name="TangoHistory"/> Google released an '''ARCore preview on August 29, 2017''' as its response to Apple's ARKit, built on Tango technology but without specialized hardware requirements. '''ARCore 1.0 launched on February 23, 2018''' at Mobile World Congress in Barcelona, supporting 13 device models and reaching its target of 100 million devices by the end of the preview.<ref name="ARCoreHistory">[https://developers.googleblog.com/2018/02/announcing-arcore-10-and-new-updates-to.html Google Developers Blog – ARCore 1.0 Launch Announcement]</ref>

=== Apple's ARKit revolution (2017–present) ===

Apple announced '''ARKit''' on June 5, 2017, at WWDC in San Jose, releasing it with the iOS 11 beta and Xcode 9 beta. Described by analysts as the "single most important announcement" of WWDC 2017, ARKit instantly created an AR platform spanning over 100 million iOS devices.<ref name="ARKitHistory">[https://developer.apple.com/augmented-reality/arkit/ Apple Developer – ARKit Overview]</ref> The technology uses visual-inertial odometry, combining camera sensor data with CoreMotion data, and provides motion tracking, horizontal plane detection, and light estimation on devices running iOS 11 with an A9 processor or later.<ref name="ARKitHistory"/>

'''ARKit 1.0''' was publicly released on September 19, 2017, with iOS 11, enabling developers to publish ARKit apps to the App Store. '''ARKit 1.5''', released March 29, 2018, with iOS 11.3, added 2D image recognition, vertical plane detection, and autofocus improvements. '''ARKit 2.0''', announced in June 2018 at WWDC, introduced persistent AR experiences (save and resume), shared AR experiences (multiplayer, collaborative sessions), 2D image tracking of moving objects, and 3D object detection and recognition.<ref name="ARKitHistory"/>

'''ARKit 3.0''', announced in June 2019, added People Occlusion (AR content rendered naturally in front of and behind people), Motion Capture, multiple face tracking (up to three faces), and simultaneous front- and back-camera use. '''ARKit 4.0''', announced in June 2020, introduced [[LiDAR]] Scanner support for the iPhone 12 Pro and 2020 iPad Pro (enabling instant AR object placement without scanning and improved scene geometry), the Location Anchors feature, and the Depth API. '''ARKit 6.0''', announced in June 2022, added 4K video capture for high-resolution AR and expanded Location Anchors to additional cities worldwide.<ref name="ARKitHistory"/>

=== Microsoft Azure Spatial Anchors (2019–2024) ===

Microsoft announced '''Azure Spatial Anchors''' on February 24, 2019, at Mobile World Congress in Barcelona, alongside HoloLens 2. This cross-platform service supported HoloLens, iOS (ARKit), and Android (ARCore), enabling collaborative, spatially aware mixed-reality applications through cloud-based anchor sharing and persistence.<ref name="ASA">[https://azure.microsoft.com/en-us/blog/announcing-azure-spatial-anchors-for-collaborative-cross-platform-mixed-reality-apps/ Microsoft Azure Blog – Announcing Azure Spatial Anchors (2019)]</ref> The service provided a common coordinate frame across different devices and platforms, addressing a critical gap in cross-platform AR experiences.

Microsoft announced the '''retirement of Azure Spatial Anchors''' effective November 20, 2024, requiring enterprises to migrate to alternative platforms.<ref name="ASA_Sunset">[https://www.multiset.ai/post/azure-spatial-anchors-alternative MultiSet AI – Azure Spatial Anchors Alternative (2024)]</ref> Despite its retirement, Azure Spatial Anchors demonstrated the viability of cloud-based, cross-platform spatial computing at global scale, hosting millions of persistent 3D objects.

== Persistence and sharing ==

A major benefit of spatial anchors is the ability to '''persist''' virtual content across app sessions and to '''share''' content between multiple users in the same location. AR applications can save the state of local anchors (for example, writing them to device storage) and load them in a future session so that previously placed objects reappear in the same physical spot.<ref name="MSLearn3">[https://learn.microsoft.com/en-us/windows/mixed-reality/design/spatial-anchors Microsoft Learn – Persisting and Sharing Spatial Anchors (2025)]</ref><ref name="MetaAnchors">[https://developers.meta.com/horizon/documentation/unity/unity-spatial-anchors-basic-tutorial/ Meta Developers – Spatial Anchors Tutorial]</ref> For instance, a user could place virtual furniture in their room, close the app, and later reopen it to find the furniture anchored exactly where it was left.

=== Local persistence ===

On [[Microsoft HoloLens]], local anchors can be persisted to disk (via a WorldAnchorStore) so that holograms remain in place between uses of the app on that device.<ref name="MSLearn3"/> Likewise, [[ARKit]] allows saving an '''ARWorldMap''', which contains anchor data, to restore a prior session's anchors on the same device. An ARWorldMap is a serialized object containing a snapshot of the AR session's spatial-mapping data, including the positions of all ARAnchor objects that have been created.<ref name="ARKit_WorldMap">[https://developer.apple.com/documentation/arkit/arworldmap Apple Developer – ARWorldMap Documentation]</ref>

This ARWorldMap object can be saved to a file on the device and reloaded in a future session to restore the anchors and their associated content. Limitations of local persistence include accessibility only on the same device, the need for similar lighting and environmental conditions, and the need for feature-rich environments for relocalization.<ref name="Qualium_Challenges">[https://www.qualium-systems.com/blog/what-are-spatial-anchors-and-why-they-matter/ Qualium Systems – Spatial Anchors Challenges]</ref>
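
In Unity, the ARKit XR Plugin exposes this world-map persistence. The following sketch shows the general save flow; it assumes an iOS build with the UnityEngine.XR.ARKit namespace available, and exact member names can vary between plugin versions:

<syntaxhighlight lang="csharp">
#if UNITY_IOS
using System.Collections;
using System.IO;
using Unity.Collections;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARKit;

public class WorldMapSaver : MonoBehaviour
{
    [SerializeField] ARSession session;

    // Kick off an asynchronous capture of the session's ARWorldMap.
    public void Save()
    {
        if (session.subsystem is ARKitSessionSubsystem arKit && arKit.worldMapSupported)
            StartCoroutine(SaveRoutine(arKit));
    }

    IEnumerator SaveRoutine(ARKitSessionSubsystem arKit)
    {
        ARWorldMapRequest request = arKit.GetARWorldMapAsync();
        while (!request.status.IsDone())
            yield return null;

        if (request.status.IsError())
            yield break;

        // Serialize the map (anchors + mapping data) and write it to disk.
        using (ARWorldMap worldMap = request.GetWorldMap())
        using (NativeArray<byte> bytes = worldMap.Serialize(Allocator.Temp))
        {
            File.WriteAllBytes(
                Path.Combine(Application.persistentDataPath, "session.worldmap"),
                bytes.ToArray());
        }
        request.Dispose();
    }
}
#endif
</syntaxhighlight>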

=== Cloud-based sharing ===

To enable multi-user experiences, spatial anchors can be shared across devices using cloud services or networked sessions. A shared anchor serves as a common reference point, so that two or more users see virtual content appear at the same real-world location from their own perspectives. Several cloud-based anchor services have been built for this purpose.

'''Azure Spatial Anchors''' (ASA) by Microsoft provided a cloud backend to which an application could upload local anchors and later retrieve them on another device (or for another user), enabling collaborative mixed reality across [[HoloLens]], iOS, and Android.<ref name="ASA"/> ASA established a common coordinate frame for shared experiences without requiring QR codes or prior environmental setup—every device that located an Azure anchor would align its content to the exact same physical spot in the world.<ref name="ASA"/> The service worked by creating a cloud-based representation of the anchor's surrounding environment using feature descriptors (not actual images) that other devices could access via a unique ID.<ref name="ASA"/>

Google's [[ARCore]] provides a similar capability with its '''Cloud Anchors''' API, introduced in 2018. Cloud Anchors allow ARCore developers to host anchor data on a Google-managed cloud service, so that anchors (and the attached AR content) can be resolved on different devices, even across Android and iOS.<ref name="GoogleBlog2018">[https://developers.googleblog.com/2020/10/improving-shared-ar-experiences-with-cloud-anchors.html Google Developers Blog – Improving Shared AR Experiences with Cloud Anchors (2020)]</ref><ref name="ARCoreCloud">[https://developers.google.com/ar/develop/java/cloud-anchors/quickstart ARCore Developer Guide – Cloud Anchors Quickstart]</ref>

The '''hosting process''' begins when a user places an anchor in their environment: the ARCore SDK uploads visual data describing the features around the anchor to Google's servers (discarded within 24 hours for privacy), and the service processes this data and returns a unique Cloud Anchor ID.<ref name="ARCoreCloud"/> In the '''resolving process''', other users' devices use this ID to query Google's service, which compares the visual features of their current environment with the stored data to find the anchor's original position.<ref name="ARCoreCloud"/>
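
In Unity, Google's ARCore Extensions package exposes this flow as extension methods on `ARAnchorManager`. The sketch below follows the promise-polling style of earlier releases; method names such as `HostCloudAnchor`/`ResolveCloudAnchorId` have shifted toward async variants across versions, so treat the exact calls as assumptions to check against the installed SDK:

<syntaxhighlight lang="csharp">
using Google.XR.ARCoreExtensions;
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class CloudAnchorExample : MonoBehaviour
{
    [SerializeField] ARAnchorManager anchorManager;

    ARCloudAnchor pendingAnchor;

    // Host: upload feature data around a local anchor, then poll for the ID.
    public void Host(ARAnchor localAnchor)
    {
        pendingAnchor = anchorManager.HostCloudAnchor(localAnchor, 365); // TTL in days
    }

    // Resolve: recreate the anchor on another device from its Cloud Anchor ID.
    public void Resolve(string cloudAnchorId)
    {
        pendingAnchor = anchorManager.ResolveCloudAnchorId(cloudAnchorId);
    }

    void Update()
    {
        if (pendingAnchor == null) return;
        if (pendingAnchor.cloudAnchorState == CloudAnchorState.Success)
        {
            Debug.Log($"Cloud anchor ready: {pendingAnchor.cloudAnchorId}");
            pendingAnchor = null; // attach content to the anchor's transform here
        }
    }
}
</syntaxhighlight>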

Initially, Cloud Anchors expired after 24 hours, but in 2020 Google launched '''persistent Cloud Anchors''', which can last much longer (from 1 to 365 days) to support content that users return to over time.<ref name="GoogleBlog2018"/> Using ARCore's cloud service, multiple people running the same app can place and see shared AR objects in a scene—for example, leaving AR graffiti that others can discover later at the same real-world location.

In 2022, Google expanded this concept with the ARCore '''Geospatial API''', which leverages global mapping data (Street View imagery) to let developers anchor content by latitude and longitude in many cities worldwide.<ref name="ARCoreGeo"/> This effectively creates an '''AR cloud''' of world-anchored content: users can point their device at a known location and instantly retrieve virtual content tied to that place. Three types of geospatial anchors are supported: '''WGS84 anchors''' (absolute latitude/longitude/altitude coordinates), '''Terrain anchors''' (latitude/longitude with altitude relative to the ground, determined by VPS), and '''Rooftop anchors''' (latitude/longitude with altitude relative to building rooftops).<ref name="ARCoreGeo"/>

=== Platform-specific sharing capabilities ===

Other companies have their own spatial anchor solutions. [[Meta]]'s VR/MR platform (for devices such as the Meta Quest) supports spatial anchors that can be saved locally on the headset to persist virtual objects in the user's physical space, and it offers '''Shared Spatial Anchors''' for local multi-user experiences (allowing people in the same room to see each other's anchored content).<ref name="MetaAnchors"/> Sharing anchors on the Quest platform requires a third-party networking solution, such as Photon, to transmit anchor data between users.<ref name="Meta_SharedAnchors">[https://developers.meta.com/horizon/documentation/unity/unity-shared-spatial-anchors/ Meta for Developers – Shared Spatial Anchors]</ref>

[[Magic Leap]] 2 supports spatial anchors through its '''Spaces''' feature: users map an environment (creating a "Space") and can place anchors within it that persist between sessions on the device.<ref name="MagicLeap"/> The Magic Leap 2 can store several localized Spaces (each containing anchors), though cloud sharing of those anchors was part of the now-deprecated Magic Leap "AR Cloud" platform.

Niantic's [[Lightship]] platform uses a Visual Positioning System (VPS) to enable persistent '''location-based anchors''': developers can place anchors at specific real-world locations (such as a public landmark), and any user who visits that location with a VPS-enabled app can discover and display the anchored content.<ref name="NianticVPS"/> Lightship VPS provides centimeter-level localization accuracy, with over 1 million VPS-enabled locations worldwide.<ref name="Niantic_Enterprise">[https://www.nianticspatial.com/blog/spatial-anchors-enterprise-readiness Niantic Spatial – Spatial Anchors Enterprise Readiness (2025)]</ref>

== Support in major AR frameworks ==

Because spatial anchoring is fundamental to AR experiences, most AR frameworks and devices provide support for anchors.

=== Apple ARKit ===

ARKit represents anchors with the `ARAnchor` class, which Apple defines as "a real-world position and orientation that can be used for placing objects in an AR scene."<ref name="ARKitAnchor"/> ARKit provides more than ten specific anchor types, all inheriting from the base class: '''ARAnchor''' (base class for position and orientation), '''ARPlaneAnchor''' (horizontal and vertical surfaces with semantic classification), '''ARImageAnchor''' (tracked images with scale estimation), '''ARObjectAnchor''' (real-world 3D objects), '''ARBodyAnchor''' (human body positions), '''ARFaceAnchor''' (facial tracking), '''ARGeoAnchor''' (geographic locations using GPS and visual positioning, ARKit 4.0+), '''ARMeshAnchor''' (polygonal mesh geometry using LiDAR, ARKit 3.5+), '''ARParticipantAnchor''' (multi-user AR experiences), '''AREnvironmentProbeAnchor''' (environmental lighting), and '''ARAppClipCodeAnchor''' (App Clip Code tracking, ARKit 4.0+).<ref name="ARKitDocs">[https://developer.apple.com/documentation/arkit/aranchor Apple Developer – ARAnchor Documentation]</ref>

ARKit automatically generates some anchors (for example, creating plane anchors or image anchors when planar surfaces or images are detected), and developers can add their own anchors at arbitrary positions. ARKit has no built-in cloud anchor service, but it supports multi-user sharing by merging AR sessions: an app can share a world map (which contains anchors) with another device to synchronize their coordinate spaces. In ARKit 4, Apple introduced '''Location Anchors (ARGeoAnchor)''', which use high-resolution Apple Maps data in certain cities to anchor content to real-world coordinates (latitude, longitude, altitude)—enabling experiences where AR content is tied to specific landmarks or locations.<ref name="AppleLocation">[https://www.apple.com/newsroom/2020/06/ios-14-offers-new-features/ Apple Newsroom – ARKit 4 Location Anchors (2020)]</ref>

Core anchor properties include `transform` (a simd_float4x4 matrix encoding position, orientation, and scale relative to the world coordinate space), `name` (a string identifier), and `identifier` (a unique UUID). ARKit uses visual-inertial odometry for tracking, automatically updating poses as tracking improves, and supports relocalization (iOS 11.3+) for resuming sessions after interruptions.<ref name="ARKitDocs"/>

=== Google ARCore ===

ARCore provides an `Anchor` class in its API for locking a virtual object's position. ARCore anchors are often created after a '''hit test''' (raycast) against the environment or attached to detected plane surfaces. ARCore offers four anchor types: '''local anchors''' (stored locally, valid for a single app instance), '''Cloud Anchors''' (hosted in Google Cloud, shareable between devices and users), '''persistent Cloud Anchors''' (ARCore 1.20+, with a configurable lifetime of 1 to 365 days), and '''geospatial anchors''' (based on GPS coordinates plus the Visual Positioning System).<ref name="ARCoreAnchors">[https://developers.google.com/ar/develop/anchors Google ARCore – Working with Anchors]</ref>

In addition to local anchors, ARCore's Cloud Anchors (and persistent Cloud Anchors) enable saving and sharing anchors via Google's cloud. ARCore also offers geospatial anchors through the Geospatial API, using the global VPS: developers create a WGS84 anchor by specifying a latitude, longitude, and altitude, and ARCore aligns it using localization against satellite imagery and Street View data.<ref name="ARCoreGeo"/>
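
In Unity with the ARCore Extensions package, creating such an anchor is a single call once geospatial tracking is established. The sketch below assumes the Geospatial API is enabled for the project and uses arbitrary example coordinates; newer SDK releases favor async variants of this call:

<syntaxhighlight lang="csharp">
using Google.XR.ARCoreExtensions;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class GeospatialPlacement : MonoBehaviour
{
    [SerializeField] AREarthManager earthManager;
    [SerializeField] ARAnchorManager anchorManager;
    [SerializeField] GameObject contentPrefab;

    public void PlaceExampleAnchor()
    {
        // Anchors can only be created while Earth tracking is available.
        if (earthManager.EarthTrackingState != TrackingState.Tracking)
            return;

        // Hypothetical WGS84 coordinates: latitude, longitude, altitude (m),
        // plus an east-up-north rotation for the content's orientation.
        ARGeospatialAnchor anchor =
            anchorManager.AddAnchor(37.4220, -122.0841, 10.0, Quaternion.identity);

        if (anchor != null)
            Instantiate(contentPrefab, anchor.transform);
    }
}
</syntaxhighlight>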

Visual data uploaded for Cloud Anchors is discarded within 24 hours. Feature-map quality can be estimated with `estimateFeatureMapQualityForHosting()` before hosting, and up to 40 concurrent Cloud Anchor operations are supported. ARCore requires stable internet connectivity for cloud operations, and it performs best in room-scale AR experiences in areas with distinguishable visual features.<ref name="ARCoreCloud"/>

=== Microsoft Windows Mixed Reality ===

On HoloLens and other Windows Mixed Reality devices, spatial anchors (sometimes called ''world anchors'') keep holograms fixed in place, with the device continuously refining the anchors using its spatial mapping and tracking system. Microsoft provided the '''Azure Spatial Anchors''' cloud service for cross-platform persistence: developers could create an anchor on a HoloLens (as a `CloudSpatialAnchor` in the SDK), upload it to Azure, and then locate that same anchor on an iPhone or Android device, enabling a shared experience.<ref name="ASA"/>

Locally, HoloLens also allowed saving anchors to disk and exporting/importing them (via a WorldAnchorTransferBatch in Unity) to persist holograms between app runs.<ref name="MSLearn3"/> The HoloLens system limits how far content can be from an anchor before stability decreases: Microsoft recommends keeping holograms within a few meters of their anchor, specifically noting that holograms more than 3 meters from an anchor's origin exhibit noticeable positional errors.<ref name="MSLearn2"/>

Azure Spatial Anchors offered enterprise features including '''Azure Active Directory integration''' for access control, '''Azure security''' with confidential computing, backup, and monitoring, and '''IoT integration''' with Azure Digital Twins for spatial intelligence. Privacy protections included data segregation (each customer controlled its own spatial anchor data store, with no cross-subscription sharing), image-less anchor creation (images processed locally, with only feature descriptors transmitted to the cloud), and pose-only anchor queries (returning only a 6DOF pose by default, without image data).<ref name="ASA_Privacy">[https://azure.microsoft.com/en-us/blog/azure-spatial-anchors-privacy-and-security/ Microsoft Azure Blog – Azure Spatial Anchors Privacy and Security]</ref>

=== Meta (Oculus) Quest ===

Meta's mixed reality SDK supports spatial anchors on its VR/MR headsets. Developers using the Meta SDK or OpenXR on Quest devices can create anchors (for example, with the `OVRSpatialAnchor` component in Unity) to persist virtual objects in a room. These anchors can be saved to the device's storage so that content remains in place across sessions (Quest headsets can store a limited number of anchors in local memory).<ref name="MetaAnchors"/>

Meta provides a '''Shared Spatial Anchors''' feature that lets devices in proximity share anchor data for co-located multiplayer experiences (two users in the same room see the same virtual object anchored to a table). In the Meta framework, an anchor is described as a "world-locked frame of reference" for content, underscoring that it maintains its position in the real world rather than moving with the user.<ref name="MetaAnchors"/> Meta supports two sharing models: an older user-based model that requires Oculus User IDs, and a newer, recommended group-based model that uses a shared UUID to simplify sharing.<ref name="Meta_SharedAnchors"/>

The `OVRSpatialAnchor` component in Unity enables creating anchors at specific positions, saving anchors locally or to Meta servers, loading previously saved anchors, erasing anchors when no longer needed, and querying for nearby anchors. Configuration requires Anchor Support to be enabled, Shared Spatial Anchor Support set to Supported, and Enhanced Spatial Services enabled in device settings.<ref name="MetaAnchors"/>
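
A minimal create-and-save flow looks roughly like the sketch below. It assumes a recent Meta XR Core SDK; the persistence method has been renamed across releases (Save, SaveAsync, SaveAnchorAsync), so the exact call is an assumption to verify against the installed package:

<syntaxhighlight lang="csharp">
using System.Collections;
using UnityEngine;

public class AnchorPlacer : MonoBehaviour
{
    // Create a spatial anchor at a pose, then persist it to device storage.
    public IEnumerator CreateAndSave(Vector3 position, Quaternion rotation)
    {
        var anchorObject = new GameObject("SpatialAnchor");
        anchorObject.transform.SetPositionAndRotation(position, rotation);

        var anchor = anchorObject.AddComponent<OVRSpatialAnchor>();

        // The anchor becomes usable once the runtime finishes creating it.
        while (!anchor.Created)
            yield return null;

        Debug.Log($"Anchor created with UUID {anchor.Uuid}");

        // Persist so the anchor can be re-loaded in a future session
        // (SaveAnchorAsync in recent SDKs; earlier versions used Save/SaveAsync).
        var saveTask = anchor.SaveAnchorAsync();
        while (!saveTask.IsCompleted)
            yield return null;
    }
}
</syntaxhighlight>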

=== Magic Leap ===

Magic Leap 1 and 2 devices use spatial anchors as part of their Space mapping. A Magic Leap user scans an environment to establish a Space (a map of the area); within that Space, anchors can be placed to mark where virtual content should appear. Magic Leap 2 supports up to five local Spaces stored on the device, allowing a user to save multiple environments with their anchors.<ref name="MagicLeap"/>

When a user returns to a mapped Space, the device localizes itself and restores all anchors (and content) in that Space. Within these persistent, scanned 3D maps, developers can create spatial anchors that are stored either locally on the device or in the AR Cloud.<ref name="MagicLeapSpaces">[https://developer-docs.magicleap.cloud/docs/guides/features/spaces/spatial-anchors/ Magic Leap Developer Docs – Spaces and Spatial Anchors]</ref> While Magic Leap's earlier cloud service for sharing anchors (Cloud Anchor / AR Cloud) was phased out, developers can integrate third-party services (or use Magic Leap's local networking) for multi-user scenarios.

=== Unity AR Foundation ===

Unity AR Foundation provides a '''cross-platform abstraction''' for spatial anchors across ARKit (iOS), ARCore (Android), Magic Leap, Meta Quest, and HoloLens (via OpenXR). The `ARAnchor` component can be added to any GameObject via `AddComponent<ARAnchor>()`, with transforms automatically updated by AR Foundation.<ref name="UnityAnchor"/> Manually changing the transform is not recommended; newly created anchors enter a pending state before tracking is fully established, queryable via the `ARAnchor.pending` property.

The `ARAnchorManager` manages the lifecycle of all anchors in the scene, providing an `anchorsChanged` event that reports added, updated, and removed anchors. Its `AttachAnchor(ARPlane plane, Pose pose)` method attaches anchors to detected planes. The system translates between Unity world space and AR session space, compensating for tracking loss and session reinitialization.<ref name="UnityAnchor"/>
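
Using only the members named above, a typical placement flow raycasts against a detected plane and then attaches an anchor at the hit pose. The raycast half of this sketch assumes the usual `ARRaycastManager` companion component:

<syntaxhighlight lang="csharp">
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class TapToAnchor : MonoBehaviour
{
    [SerializeField] ARRaycastManager raycastManager;
    [SerializeField] ARAnchorManager anchorManager;
    [SerializeField] GameObject contentPrefab;

    static readonly List<ARRaycastHit> s_Hits = new();

    public void OnScreenTap(Vector2 screenPoint)
    {
        if (!raycastManager.Raycast(screenPoint, s_Hits, TrackableType.PlaneWithinPolygon))
            return;

        // Attach an anchor to the plane that was hit, at the hit pose;
        // AR Foundation keeps the anchor's transform updated afterwards.
        var plane = (ARPlane)s_Hits[0].trackable;
        ARAnchor anchor = anchorManager.AttachAnchor(plane, s_Hits[0].pose);
        if (anchor == null)
            return;

        // ARAnchor.pending remains true until tracking is fully established.
        Instantiate(contentPrefab, anchor.transform);
    }
}
</syntaxhighlight>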

Best practices note that anchors are resource-intensive on most platforms: developers should avoid placing multiple anchors within about 2 meters of each other, remove anchors when no longer needed, and consider disabling plane detection after the initial scan. Platform-specific behaviors also vary—Meta Quest planes are static (not dynamically updated) while ARKit/ARCore planes update at runtime, and different platforms impose different anchor limits.<ref name="UnityAnchor"/>

=== OpenXR standard ===

The cross-platform [[OpenXR]] API includes extensions for spatial anchors (notably `XR_MSFT_spatial_anchor` and `XR_MSFT_spatial_anchor_persistence`). These allow any OpenXR-compatible application, regardless of device vendor, to create anchors and, where supported, persist them for later use or share them via a platform service.<ref name="OpenXR"/> On a platform such as Windows Mixed Reality or Meta's OpenXR runtime, these extensions tie into the platform's own anchor store under the hood.

The presence of anchor support in OpenXR indicates that spatial anchors are considered a fundamental building block for XR applications across different hardware. OpenXR provides a vendor-agnostic API for spatial anchors across HoloLens, Quest, Magic Leap, and other XR devices, with Unity AR Foundation abstracting the remaining platform differences, part of a broader trend toward interoperable spatial experiences across hardware ecosystems.<ref name="OpenXR"/>

{| class="wikitable"
|+ Comparison of major spatial anchor platforms
! Feature
! Apple ARKit
! Google ARCore
! Microsoft Azure Spatial Anchors (retired)
! Meta Quest
|-
! Cross-platform support
| No (iOS/iPadOS/visionOS only)
| Yes (Android, iOS)
| Yes (HoloLens, iOS, Android)
| No (Meta Quest devices only)
|-
! Persistence mechanism
| Device-based; ARWorldMap serialization
| Cloud-based; Google Cloud storage
| Cloud-based; Azure storage
| On-device storage for local anchors; Meta cloud for shared anchors
|-
! Anchor lifespan
| Indefinite (depends on ARWorldMap file storage)
| Up to 365 days (persistent Cloud Anchors API)
| Persistent until explicitly deleted
| Persistent until erased from device storage
|-
! Anchor types
| 10+ types, including geo-anchors, mesh anchors, and plane anchors
| Local, Cloud, persistent Cloud, and geospatial anchors
| Cross-platform cloud anchors
| Scene anchors (system-owned) and spatial anchors (app-owned)
|-
! Primary use case
| Consumer AR experiences within the Apple ecosystem
| Cross-platform mobile AR and location-based experiences
| Enterprise, industrial, and cross-platform collaboration
| Co-located VR/MR multiplayer experiences
|}

== Applications and use cases ==

Spatial anchors enable a wide range of AR and MR applications across industries.

=== Gaming and entertainment ===

'''Multiplayer gaming''' uses spatial anchors for co-located multiplayer games, in which players in the same physical room interact with a shared virtual world. In titles such as '''Demeo''', a virtual game board is anchored to a physical table, allowing multiple players to see and interact with the same game state from their own perspectives.<ref name="MetaDesignAnchors"/> Other notable examples include the robot battle game '''BAM''' and the AR dueling game '''Saber City'''.<ref name="Reddit_Apps">[https://www.reddit.com/r/OculusQuest/comments/17fr7nc/apps_and_games_with_shared_spatial_anchors/ Reddit – Apps and Games with Shared Spatial Anchors]</ref>

'''Pokémon Go''' by Niantic uses its AR+ mode with ARCore/ARKit to anchor Pokémon to physical locations with centimeter accuracy through Niantic's Visual Positioning System. The '''Pokémon Playgrounds''' feature enables shared, persistent AR experiences at PokéStops and Gyms, where trainers can place Pokémon for others to discover. With more than 91 million active players and 176 million copies sold, Pokémon Go demonstrated the viability of world-scale AR gaming.<ref name="PokemonPlaygrounds">[https://nianticlabs.com/news/pokemon-playgrounds Niantic Labs – Pokémon Playgrounds Announcement]</ref>

'''Minecraft Earth''' used Azure Spatial Anchors and PlayFab integration to create life-size AR experiences, allowing players to build and share persistent structures in the real world. The game featured "Adventures"—small slices of Minecraft worlds rendered life-size on sidewalks and in parks. Though later retired, it demonstrated the technical feasibility of large-scale collaborative AR gaming.<ref name="MinecraftEarth">[https://news.microsoft.com/features/minecraft-earth-azure-spatial-anchors/ Microsoft News – Minecraft Earth Technology]</ref>

=== Collaborative design and productivity ===

Applications such as '''Spatial''' and '''Arkio''' leverage shared anchors to create a common virtual space where multiple users can co-create, manipulate, and review 3D models and designs in real time, effectively turning any room into a collaborative digital studio.<ref name="Reddit_Apps"/> '''ShapesXR''' implements real-time co-building, with shadows and copy-paste between spaces, using shared spatial anchors.<ref name="ShapesXR">[https://www.shapesxr.com/post/update-shadows-shared-spatial-anchors-copy-and-paste-between-spaces ShapesXR – Shared Spatial Anchors Update]</ref>

=== Retail and commerce ===

'''IKEA Place''' launched in September 2017 as one of the first major ARKit implementations, featuring more than 2,000 true-to-scale 3D furniture models with 98% accuracy. The app uses spatial anchors to place virtual furniture persistently in homes, with features including multi-placement (placing multiple items simultaneously), room sets (experiencing entire rooms of handpicked furniture), visual search (pointing the camera at furniture to find similar IKEA products), and wishlist and sharing capabilities.<ref name="IKEAPlace">[https://www.ikea.com/global/en/newsroom/innovation/ikea-launches-ikea-place IKEA – IKEA Place Launch Announcement]</ref>

Built in just seven weeks using 3D renders from existing catalogs, IKEA Place demonstrated how spatial anchors let consumers visualize products in their actual spaces before purchase, reducing return rates and improving purchase confidence. Michael Valdsgaard, IKEA's Digital Transformation Leader, commented: "Augmented reality and virtual reality will be a total game changer for retail in the same way as the internet. Only this time, much faster."<ref name="IKEAPlace"/>

Retail centers increasingly use spatial anchors for persistent signage, wayfinding, and promotional content. AR commerce applications overlay real-time product information, pricing, and inventory in physical stores, changing how consumers interact with retail environments.<ref name="Qualium_UseCases">[https://www.qualium-systems.com/blog/what-are-spatial-anchors-and-why-they-matter/ Qualium Systems – Real-World Use Cases]</ref>

=== Industrial and enterprise ===

'''Remote assistance and maintenance:''' In industrial settings, an on-site technician wearing an AR headset can share their view with a remote expert, who can then place spatially anchored instructions, diagrams, or annotations directly onto the real-world machinery. '''ThyssenKrupp Elevator Service''' uses HoloLens with Azure Spatial Anchors, enabling remote experts to mark up machinery with virtual annotations visible to on-site technicians; this implementation reduced maintenance time by approximately 30%. Technicians see instructions anchored to specific machine parts, reducing errors and improving first-time fix rates.<ref name="Qualium_UseCases"/>

'''Warehouse logistics:''' '''Honeywell Connected Plant''' projects virtual arrows onto warehouse floors to optimize picking paths, improving order-picking speed by approximately 25%. The persistence of spatial anchors keeps the arrows accurate across shifts, and the solution has been deployed across multiple warehouse locations, demonstrating scalability.<ref name="Qualium_UseCases"/>

'''Worker training:''' Complex procedures can be taught more effectively by anchoring step-by-step holographic instructions to specific parts of a machine or workspace, letting trainees learn in a hands-on, contextually relevant manner without risk to live equipment. Factory-floor visualization similarly lets workers see machine status, navigate facilities, and access real-time IoT data overlaid on equipment.<ref name="Qualium_UseCases"/>

=== Healthcare and medical education ===

'''Pearson Education''' has nursing students and professors practice diagnosing and treating virtual patients in 3D, real-world settings using HoloLens and mobile devices, relying on Azure Spatial Anchors' cross-platform support. Jeff Mlakar of Case Western Reserve University stated: "We can reach more students, educators and families by uniting our experiences across mobile and HoloLens devices...With Spatial Anchors' cross-platform support, we can bring our curriculum to life in 3D and share it with everyone."<ref name="CaseWestern">[https://news.microsoft.com/transform/case-western-reserve-pearson-hololens-spatial-anchors/ Microsoft News – Case Western Reserve and Pearson Education]</ref>

VR platforms convert MRI/CT DICOM stacks into interactive 3D reconstructions for surgical planning, enabling pre-surgical rehearsals and multi-disciplinary team reviews. Surgical AR navigation provides intraoperative decision support with metrically accurate volumetric models and AI-driven segmentation for precise tool guidance, and medical training platforms use spatial anchors to place training scenarios consistently across sessions.<ref name="Qualium_UseCases"/>
| |
| | |
=== Architecture and construction ===

Architecture and construction firms use spatial anchors for '''design review''', where architects and site workers review building plans overlaid on construction sites. Spatial planning enables visualization of proposed structures in real-world context, while progress tracking compares planned versus actual construction, with persistent anchors marking key reference points.<ref name="RecreateFAQ"/>

Theatre set design uses Azure Object Anchors to identify objects (couches, props) and Azure Spatial Anchors to map stage locations for multi-scene prop placement. Museums and exhibits implement interactive exhibits with persistent holographic content, and smart-city infrastructure deploys persistent AR overlays for navigation, information displays, and public services.<ref name="Qualium_UseCases"/>

=== Education and navigation ===

In education, spatial anchors enable persistent educational content across classrooms and campuses with 3D curriculum visualization. Students explore complex subjects through 3D visualizations anchored to physical spaces, and multiple students can work on shared holographic content simultaneously in collaborative projects.<ref name="RecreateFAQ"/>

'''Indoor navigation:''' In large, complex venues such as airports, museums, or train stations, where GPS is unreliable, spatial anchors can be used to create persistent, turn-by-turn AR navigation paths. These paths can guide visitors directly to their gate, exhibit, or platform, enhancing the visitor experience.<ref name="RecreateFAQ"/>

'''Interactive museum exhibits:''' Museums can use spatial anchors to overlay historical information, 3D reconstructions of artifacts, or interactive animations directly onto their physical displays. This provides visitors with a richer, more engaging, and contextually layered educational experience.<ref name="RecreateFAQ"/>

== Technical challenges and limitations ==

Despite significant advancements, spatial anchor technology still faces several technical, practical, and systemic challenges that limit its reliability and widespread adoption.

=== Environmental constraints ===

'''Feature-scarce environments''' present the primary challenge: empty white walls, uniform floors, and large glass areas lack the visual features needed to create or reliably re-match anchors. The practical impact is most severe in modern office buildings with minimalist design.<ref name="Qualium_Challenges"/><ref name="XREAL_Limitations"/>

'''Lighting conditions''' disrupt tracking when they change abruptly (lights turning off or on, moving between dark and bright areas); anchors may "jump" or temporarily disappear while the system adjusts. Documentation recommends even lighting and avoiding dramatic lighting changes for optimal anchor stability.<ref name="Qualium_Challenges"/><ref name="XREAL_Limitations"/>

'''Dynamic environments''' with moving objects (people, equipment) occlude reference features, causing tracking issues and anchor instability. This is particularly problematic in crowded spaces or busy warehouses where the environment constantly changes. '''Surface requirements''' exclude transparent, semi-transparent, or reflective surfaces: mirrors, glass, and glossy finishes prevent the system from detecting and tracking features effectively.<ref name="Qualium_Challenges"/>

=== Drift and accuracy issues ===

'''Scale drift''' occurs as small tracking errors accumulate over time, causing virtual objects to slowly diverge from their intended positions; drift becomes noticeable beyond roughly 0.2 meters of deviation. Mitigation strategies include regular anchor updates, recalibration, and creating fresh anchors when drift exceeds acceptable thresholds.<ref name="Qualium_Challenges"/><ref name="MagicLeapDrift"/>

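In ARKit terms, that re-anchoring strategy can be sketched as follows: the app compares an anchor's tracked position against where it expects the content to sit (for example, from a fresh raycast onto the tracked surface) and swaps in a new anchor when the gap exceeds the threshold above. The class and names here are illustrative assumptions, not a platform API:

<syntaxhighlight lang="swift">
import ARKit
import simd

/// Illustrative drift check (not a platform API). `expectedWorldPosition`
/// would come from the app's own knowledge of where the content should sit,
/// e.g. a fresh raycast against the tracked surface. The 0.2 m threshold
/// follows the guidance cited above.
final class AnchorDriftMonitor {
    private let driftThreshold: Float = 0.2  // metres

    /// Replaces `anchor` with a fresh one if it has diverged too far.
    func reanchorIfNeeded(_ anchor: ARAnchor,
                          expectedWorldPosition: simd_float3,
                          in session: ARSession) -> ARAnchor {
        let t = anchor.transform.columns.3
        let drift = simd_distance(simd_float3(t.x, t.y, t.z), expectedWorldPosition)
        guard drift > driftThreshold else { return anchor }

        // Create a replacement at the corrected position, keeping the
        // original orientation, then retire the drifted anchor.
        var corrected = anchor.transform
        corrected.columns.3 = simd_float4(expectedWorldPosition, 1)
        let fresh = ARAnchor(name: "corrected", transform: corrected)
        session.add(anchor: fresh)
        session.remove(anchor: anchor)
        return fresh
    }
}
</syntaxhighlight>
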
Sharing anchors between devices can introduce additional alignment errors, where each user sees the virtual content in a slightly different position. The displacement can be subtle (4–5 cm) but is often significant enough (up to 20 cm in some cases) to break the illusion of stability, especially for applications requiring high precision.<ref name="Meta_Drift"/>

'''Latency issues''' in anchor stabilization affect user experience: matching saved data to real-time visuals should ideally take under 5 seconds, and poor performance leads to user frustration and abandonment. Local anchoring with ARKit is typically faster to establish than cloud anchors (ARCore Cloud Anchors or Azure Spatial Anchors).<ref name="Qualium_Challenges"/>

=== Scalability and performance ===

Anchor tracking is a computationally intensive process. Maintaining a large number of anchors in a scene can consume significant system resources, leading to a lower application framerate and a degraded user experience.<ref name="UnityAnchor"/> Similarly, cloud-based systems can experience latency when resolving anchors, particularly if many anchors are clustered in a small area, as the system has more data to sift through.<ref name="Qualium_Challenges"/>

=== Cross-platform interoperability ===

A major systemic challenge is the lack of a universal standard for spatial anchors. Anchors created using Apple's ARKit cannot be natively understood by a device running Google's ARCore, and vice versa.<ref name="Qualium_Challenges"/> This fragmentation creates walled ecosystems, forcing developers to either choose a single platform or invest significant resources in building complex, custom backend solutions to bridge the gap.

The discontinuation of Azure Spatial Anchors, a major cross-platform solution, further underscores the risks for developers who rely on a single proprietary, centralized service for this critical functionality.<ref name="ASA_Sunset"/><ref name="MS_Sunset_Impact"/>

=== Storage and quota limitations ===

'''Storage limitations''' constrain large-scale deployments. ARCore Cloud Anchors offer a free tier with 24-hour persistence and a business tier with 365-day persistence. Azure Spatial Anchors provided a default quota of 1,000 anchors per account (scalable at additional cost). Meta Quest anchors consume approximately 2.5 MB each and expire after 24 hours in the cloud (local anchors persist until deleted). A further management challenge is the risk of losing UUID references, creating "orphaned" anchors that consume storage but can no longer be accessed.<ref name="Qualium_Challenges"/>

'''Network dependencies''' mean cloud anchors require stable internet connectivity: hosting and resolving anchors fails without a connection. This limits applicability in industrial environments with restricted network access or outdoor locations with poor coverage. Hybrid approaches, combining local anchors for offline scenarios with cloud anchors for cross-device sharing, provide partial mitigation.<ref name="Qualium_Challenges"/>

=== Relocalization challenges ===

For persistent anchors to function, the user's device must successfully "re-localize", that is, recognize the current environment by matching it to a previously saved map. This process can be a significant point of friction and failure. If the user starts the application from a different position than where the map was created, or if the environment has changed too much, re-localization may fail, and the persistent content will not appear.<ref name="SABIAT_Paper"/>

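On ARKit, for example, this save-and-relocalize cycle runs through <code>ARWorldMap</code>, which stores the feature map together with its anchors.<ref name="ARKit_WorldMap"/> A minimal sketch of the flow, in which the file location and error handling are placeholders:

<syntaxhighlight lang="swift">
import ARKit

/// Saves the current map (including anchors) to disk. Succeeds only once
/// the session has mapped the area well enough to produce a world map.
func saveWorldMap(from session: ARSession, to url: URL) {
    session.getCurrentWorldMap { worldMap, _ in
        guard let map = worldMap,
              let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                           requiringSecureCoding: true)
        else { return }  // e.g. mapping not ready yet
        try? data.write(to: url)
    }
}

/// Restarts the session against a saved map; anchors reappear only after
/// ARKit matches the live camera view to the stored map (relocalization).
func restoreWorldMap(into session: ARSession, from url: URL) throws {
    let data = try Data(contentsOf: url)
    guard let map = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                           from: data) else { return }
    let config = ARWorldTrackingConfiguration()
    config.initialWorldMap = map
    session.run(config, options: [.resetTracking, .removeExistingAnchors])
}
</syntaxhighlight>
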
Designing an intuitive user experience that effectively guides a user to perform an adequate scan of an area to create a high-quality anchor, or to move their device to a location with enough visual information to re-localize, remains a key challenge for AR application developers.<ref name="ARCore_Unity_Persistent"/>

=== Best practices for implementation ===

'''Environment scanning''' requires moving the device slowly and sweeping the camera across all surfaces, with a scan duration of 5–15 seconds recommended. Developers should capture textures, furniture, and paintings for better feature detection while avoiding rapid viewpoint changes or head movements.<ref name="ARCoreCloud"/>

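As a concrete illustration of scan guidance, ARKit reports a per-frame world-mapping status that an app can translate into prompts; a minimal sketch, with placeholder hint strings:

<syntaxhighlight lang="swift">
import ARKit

/// Maps ARKit's world-mapping status to a user-facing scanning hint.
/// Returns nil when the area is mapped well enough to create anchors.
func scanHint(for frame: ARFrame) -> String? {
    switch frame.worldMappingStatus {
    case .notAvailable, .limited:
        return "Keep scanning: sweep the camera slowly across textured surfaces."
    case .extending:
        return "Almost there: look around the target area a little longer."
    case .mapped:
        return nil
    @unknown default:
        return nil
    }
}
</syntaxhighlight>
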
'''Anchor density management''' includes creating an anchor hierarchy (a main anchor plus secondary anchors for detail), avoiding overloading small areas with too many anchors, deleting unused anchors to stay within quotas and reduce locate times, and considering the full anchor lifecycle: creation, persistence, sharing, and deletion.<ref name="MSLearn2"/>

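A lifecycle pass of that kind might look like the following sketch, which keeps a primary anchor and deletes secondary "detail" anchors the user has moved away from; the 10 m radius and the naming scheme are assumptions, not platform guidance:

<syntaxhighlight lang="swift">
import ARKit
import simd

/// Removes secondary anchors far from the user to free tracking resources.
/// Anchors named "detail" are assumed to be expendable; the primary anchor
/// is never touched.
func pruneDetailAnchors(in session: ARSession, frame: ARFrame,
                        keepRadius: Float = 10.0) {
    let cam = frame.camera.transform.columns.3
    let cameraPosition = simd_float3(cam.x, cam.y, cam.z)

    for anchor in frame.anchors where anchor.name == "detail" {
        let p = anchor.transform.columns.3
        if simd_distance(simd_float3(p.x, p.y, p.z), cameraPosition) > keepRadius {
            session.remove(anchor: anchor)  // frees tracking budget and quota
        }
    }
}
</syntaxhighlight>
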
'''Error handling and user feedback''' should display clear messages when anchor tracking is limited ("Re-scanning environment to find anchor..."), guide users through the rescanning process, and provide visual indicators of anchor status. Recovery strategies include re-scanning the area if objects drift more than 0.2 meters, creating fresh anchors if drift persists, and falling back to a stationary frame of reference for highly dynamic holograms.<ref name="Qualium_Challenges"/>

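On ARKit, for instance, such feedback can be driven from the session's tracking-state callbacks; a sketch in which the message strings and the <code>showBanner</code> helper are placeholders:

<syntaxhighlight lang="swift">
import ARKit

/// Surfaces tracking problems to the user as they happen.
final class TrackingFeedback: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
        switch camera.trackingState {
        case .limited(.insufficientFeatures):
            showBanner("Re-scanning environment to find anchor...")
        case .limited(.excessiveMotion):
            showBanner("Move the device more slowly.")
        case .limited(.relocalizing):
            showBanner("Matching this space to the saved map...")
        case .limited, .notAvailable:
            showBanner("Tracking limited: aim at well-lit, textured surfaces.")
        case .normal:
            showBanner(nil)  // hide any warning
        }
    }

    private func showBanner(_ message: String?) {
        // Placeholder: present `message` in the app's UI, or hide it on nil.
        if let message { print(message) }
    }
}
</syntaxhighlight>
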
== Privacy and ethical considerations ==

The widespread adoption of spatial anchor technology raises significant privacy and ethical concerns, primarily stemming from the detailed environmental data required for anchors to function.

=== Pervasive data collection ===

To create and maintain spatial anchors, AR/MR systems must first perform spatial mapping, a process that involves capturing and processing a detailed 3D representation of the user's environment.<ref name="Privacy_Data_Collection"/> When this technology is used in private spaces such as homes, offices, or medical facilities, it creates machine-readable 3D maps of those personal environments. This data can inadvertently reveal highly sensitive information about an individual's lifestyle, habits, wealth, health conditions, or the presence of valuable possessions.<ref name="Privacy_Cloud"/><ref name="Privacy_Data_Collection"/>

=== Bystander privacy ===

AR devices, with their "always-on" sensors, capture data from their surroundings indiscriminately. This means they can record and map spaces that include other people who have not consented to having their image, location, or environment captured.<ref name="Privacy_BystandAR"/><ref name="Privacy_EFF"/>

This issue of "bystander privacy" was a major factor in the societal backlash against early wearables like Google Glass. While some systems may attempt to anonymize data by blurring faces and license plates, other identifying features such as tattoos, distinctive clothing, or even a person's gait can still be captured. The prospect of perpetual, crowd-sourced recording of public and private interactions poses a profound ethical challenge.<ref name="Privacy_EFF"/>

=== Surveillance and data misuse ===

The centralized storage of 3D world maps, which is inherent to cloud anchor services, creates an extraordinarily rich dataset that is a potential target for misuse. This data could be used for pervasive surveillance by corporations or state actors, allowing them to infer behavioral patterns, track individuals' movements through mapped spaces, and create detailed profiles without their knowledge or consent.<ref name="Privacy_ITIF"/><ref name="Privacy_EFF"/>

The risk is compounded by the collection of biometric data; for example, eye-tracking data from an AR headset could reveal a user's focus, interests, or even emotional state, while facial scans can be collected without explicit consent.<ref name="Privacy_Data_Collection"/><ref name="Privacy_Biometric"/>

=== Data security ===

The 3D maps generated for spatial anchoring are a highly sensitive new class of personal data. A data breach of a cloud service storing this information could expose the detailed interior layouts of private homes, secure factory floors, or confidential corporate offices, posing significant security and safety risks.<ref name="Privacy_Data_Collection"/><ref name="Privacy_ITIF"/>

=== Privacy protection measures ===

Major platforms have implemented privacy protections. Azure Spatial Anchors followed three principles: '''data segregation''' (each customer controls its own spatial anchor data store, with no data sharing between subscriptions), '''image-less anchor creation''' (images are processed locally on the device and only derived feature descriptors are transmitted to the cloud; the original images are never stored), and '''pose-only anchor queries''' (by default, queries return only a 6DoF pose, with no image data sent back to the querying device).<ref name="ASA_Privacy"/>

Best practices include '''minimizing data collection''' (collecting only necessary spatial data and using pose information instead of full imagery), '''providing user control''' (explicit consent for spatial data capture and user access to view or delete stored anchors), and '''implementing technical safeguards''' (obfuscation techniques such as pseudonyms and spatial cloaking, differential privacy for location data, and secure cloud environments with FISMA and FedRAMP compliance).<ref name="Privacy_ITIF"/>

== Future developments ==

=== The AR Cloud vision ===

The long-term vision for spatial anchor technology extends far beyond individual applications, culminating in the concept of the '''AR Cloud''': a persistent, shared, 1:1-scale digital twin of the real world.<ref name="Future_Seisan"/><ref name="Future_MasterDC"/>

The AR Cloud can be envisioned as a real-time 3D map of the world, continuously updated by millions of devices and accessible to any AR application.<ref name="Future_ABI"/> This shared digital layer would serve as a universal foundation for anchoring digital content, transforming today's isolated spatial anchors into the equivalent of hyperlinks on a "Spatial Web" or "Spatial Internet."<ref name="Niantic_Enterprise"/> In this paradigm, AR experiences would no longer be confined to individual apps but would exist as a persistent, collaborative, and globally scaled information layer over reality.

Niantic characterizes the AR Cloud as "one of the most important infrastructures in the history of computing" and the current moment as a "land grab" similar to the early internet, in which presence, visibility, and brand relevance in the physical world will define market positions.<ref name="Niantic_Enterprise"/>

=== Enabling technologies ===

Realizing the full vision of the AR Cloud is a monumental technical undertaking that depends on the convergence of several key enabling technologies:

* '''[[5G]] and [[6G]]''': These next-generation wireless networks are essential for providing the high bandwidth and ultra-low latency required to stream and synchronize massive 3D map data between devices and the cloud in real time.<ref name="Future_Seisan"/><ref name="Future_ESPIN"/>
* '''[[Edge computing]]''': To ensure responsive and immersive AR interactions, much of the heavy computational work, such as processing spatial data and rendering complex scenes, will need to be performed on local edge servers. This reduces the latency that would be incurred by sending data all the way to a centralized cloud.<ref name="Future_Seisan"/><ref name="Future_Gartner"/>
* '''[[Artificial Intelligence]] and [[Computer Vision]]''': Advanced AI will be crucial for automatically processing, segmenting, and understanding the vast amounts of visual data needed to build and maintain the AR Cloud. AI will also power the contextual awareness of AR applications, allowing them to deliver relevant information based on what the user is seeing.<ref name="Future_MasterDC"/><ref name="Future_ESPIN"/>

'''Open AR Cloud''' working groups are developing standards for privacy, security, and the distribution of on-demand compute services, along with protocols connecting devices, edge computing, IoT, and the cloud, emphasizing decentralization of data and control to prevent vendor lock-in. GeoPose/Geo6DoF standards establish spatial coordinate frames that give anchors a consistent global reference.<ref name="OpenARCloud"/>

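To make the GeoPose idea concrete: a GeoPose-style record pairs a geographic position with an orientation quaternion, so any device can resolve the same anchor in one shared global frame. The sketch below follows the spirit of the concept; the field names are illustrative rather than the exact OGC schema:

<syntaxhighlight lang="swift">
import Foundation

/// Illustrative GeoPose-style record: a 6DoF pose expressed in a global
/// geographic frame instead of a device-local one.
struct GeoPose: Codable {
    var latitude: Double           // degrees, WGS84
    var longitude: Double          // degrees, WGS84
    var ellipsoidalHeight: Double  // metres above the WGS84 ellipsoid

    /// Orientation relative to a local tangent-plane frame at the position.
    struct Quaternion: Codable {
        var x: Double, y: Double, z: Double, w: Double
    }
    var orientation: Quaternion
}
</syntaxhighlight>
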
=== AI and spatial computing convergence ===

'''On-device machine learning''' is arriving in spatial computing toolchains: Apple's Create ML can train object-tracking models directly from the command line, converting 3D model files into trained references for visionOS tracking. '''Multimodal AI''' bridges spatial and business data, processing text, image, audio, and spatial data types simultaneously.<ref name="Deloitte_Trends"/>

'''Spatial AI''', at the intersection of spatial computing and AI/ML, encompasses edge AI processors for local inference, spatial cameras and sensors for AR, hardware-in-the-loop simulation platforms, and geospatial data processing units. Deloitte Tech Trends 2025 predicts that spatial computing will converge with '''agentic AI systems''' that are context-aware and capable of executing functions proactively, able to serve the right content at the right time without explicit commands, and multimodal, processing spatial, visual, and business data simultaneously.<ref name="Deloitte_Trends"/>

=== Future applications ===

A fully realized AR Cloud could unlock transformative applications that are difficult to achieve today, such as city-scale AR navigation systems, persistent social media layers on reality, and globally synchronized collaborative workspaces.<ref name="Future_ABI"/> This future also raises profound new questions about digital governance and ownership. As virtual content becomes tied to physical locations and accrues real-world value, new systems will be needed to manage who owns a piece of virtual real estate or an AR asset anchored in a public space. Technologies like [[blockchain]] and [[non-fungible token]]s are being explored as potential solutions for managing the ownership, provenance, and rights of digital property in the AR Cloud.<ref name="Future_ESPIN"/>

The '''World Economic Forum''' (2025) argues that "hardware is becoming the medium through which AI lives," with spatial computing, XR, and AI-powered devices forming the infrastructure of the next industrial revolution. Tomorrow's AI systems will require the depth, motion, object-recognition, and environmental-mapping capabilities that spatial anchors help provide.<ref name="WEF_Spatial"/>

=== Industry adoption outlook ===

While the core technologies have reached a production-ready state, with proven global scale supporting millions of objects and cross-platform capabilities, '''the primary barriers are organizational and strategic rather than technical''', according to Gartner's 2025 "Emerging Tech: The Future of Spatial Computing" report.<ref name="Gartner_Emerging"/>

Remaining challenges include:<ref name="Gartner_Emerging"/>
* data integration and availability (spatial data has historically been poorly managed across organizations);
* platform and hardware fragmentation (diverse ecosystems requiring significant integration effort);
* unclear business models and value propositions (difficulty quantifying ROI, and enterprises struggling to identify compelling use cases);
* organizational alignment (unclear ownership of spatial computing initiatives);
* ROI measurement (traditional KPIs do not capture spatial computing benefits); and
* platform dependency concerns (fear of vendor lock-in given rapid technology evolution).

Strategic recommendations for enterprises include aligning teams across digital, product, and operations; partnering with vendors that support open standards; defining KPIs tied to business outcomes; experimenting in controlled environments with clear paths to scale; and beginning pilot use cases now to gain competitive advantage.<ref name="Gartner_Emerging"/>

== See also ==

* [[Augmented reality]]
* [[Mixed reality]]
* [[Simultaneous localization and mapping]]
* [[OpenXR]]
* [[Computer vision]]
* [[Microsoft HoloLens]]
* [[ARKit]]
* [[ARCore]]
* [[Visual positioning system]]
* [[Virtual reality]]

== References ==

<references>
<ref name="MagicLeap">[https://developer-docs.magicleap.cloud/docs/guides/unity/perception/anchors/spatial-anchors-overview/ Magic Leap Developer Docs – Spatial Anchors Overview (2025)]</ref>
<ref name="MSLearn">[https://learn.microsoft.com/en-us/windows/mixed-reality/design/spatial-anchors Microsoft Learn – Spatial Anchors (2025)]</ref>
<ref name="OpenXR">[https://registry.khronos.org/OpenXR/specs/1.1/man/html/XrSpatialAnchorMSFT.html Khronos OpenXR – XR_MSFT_spatial_anchor Extension Specification]</ref>
<ref name="BrownWiki">[https://www.vrwiki.cs.brown.edu/vr-development-software/unity/spatial-anchors VR Software Wiki – Spatial Anchors in Unity]</ref>
<ref name="ARKitAnchor">[https://www.captechconsulting.com/blogs/visualizing-surfaces-detected-by-arkit CapTech Consulting – ARAnchor ARKit Overview (2019)]</ref>
<ref name="RecreateFAQ">[https://recreate.nl/faq-items/what-is-a-spatial-anchor/ Recreate – What is a spatial anchor?]</ref>
<ref name="MetaDesignAnchors">[https://developers.meta.com/horizon/design/mr-design-spatial-anchors/ Meta for Developers – Spatial Anchors Design]</ref>
<ref name="ARCoreConcepts">[https://developers.google.com/ar/reference/c/group/concepts Google ARCore – Concepts Documentation]</ref>
<ref name="Reko3D">[https://reko3d.com/blog/spatial-anchors/ Reko3D XR Glossary – Spatial Anchors (2024)]</ref>
<ref name="JaklAnalysis">[https://www.andreasjakl.com/basics-of-ar-anchors-keypoints-feature-detection/ Andreas Jakl – Basics of AR Anchors and Feature Detection]</ref>
<ref name="ARCoreGeo">[https://developers.googleblog.com/en/make-the-world-your-canvas-with-the-arcore-geospatial-api Google Developers Blog – ARCore Geospatial API Announcement (2022)]</ref>
<ref name="NianticVPS">[https://lightship.dev/docs/ardk/3.6/features/lightship_vps/ Niantic Lightship VPS Documentation – Persistent Location Anchors]</ref>
<ref name="VSLAM_MDPI">[https://www.mdpi.com/1424-8220/24/4/1161 MDPI Sensors – Enhancing Outdoor Location-Based AR Anchors Using Visual SLAM]</ref>
<ref name="VIO_Research">[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5712971/ PMC – Adaptive Monocular Visual-Inertial SLAM for Real-Time AR Applications]</ref>
<ref name="DepthSensing">[https://www.slamcore.com/technology/ Slamcore – Next-Level Spatial Intelligence Technology]</ref>
<ref name="MSLearn2">[https://learn.microsoft.com/en-us/windows/mixed-reality/design/spatial-anchors Microsoft Learn – Spatial Anchors Usage Guidelines (2025)]</ref>
<ref name="UnityAnchor">[https://docs.unity3d.com/Packages/com.unity.xr.arfoundation@3.0/manual/anchor-manager.html Unity AR Foundation Documentation – ARAnchorManager]</ref>
<ref name="HistoryVO">[https://rpg.ifi.uzh.ch/research_vo.html University of Zurich – Visual Odometry Research History]</ref>
<ref name="SLAMHistory">[https://onlinelibrary.wiley.com/doi/10.1155/2021/2054828 Wiley – Visual and Visual-Inertial SLAM: State of the Art]</ref>
<ref name="AzumaSurvey">[https://en.wikipedia.org/wiki/Augmented_reality Wikipedia – Augmented Reality History]</ref>
<ref name="ARToolKit">[https://www.assemblrworld.com/blog/history-of-augmented-reality Assemblr – The History of Augmented Reality]</ref>
<ref name="TangoHistory">[https://en.wikipedia.org/wiki/Tango_(platform) Wikipedia – Google Tango Platform]</ref>
<ref name="ARCoreHistory">[https://developers.googleblog.com/2018/02/announcing-arcore-10-and-new-updates-to.html Google Developers Blog – ARCore 1.0 Launch Announcement]</ref>
<ref name="ARKitHistory">[https://developer.apple.com/augmented-reality/arkit/ Apple Developer – ARKit Overview]</ref>
<ref name="ASA">[https://azure.microsoft.com/en-us/blog/announcing-azure-spatial-anchors-for-collaborative-cross-platform-mixed-reality-apps/ Microsoft Azure Blog – Announcing Azure Spatial Anchors (2019)]</ref>
<ref name="ASA_Sunset">[https://www.multiset.ai/post/azure-spatial-anchors-alternative MultiSet AI – Azure Spatial Anchors Alternative (2024)]</ref>
<ref name="MSLearn3">[https://learn.microsoft.com/en-us/windows/mixed-reality/design/spatial-anchors Microsoft Learn – Persisting and Sharing Spatial Anchors (2025)]</ref>
<ref name="MetaAnchors">[https://developers.meta.com/horizon/documentation/unity/unity-spatial-anchors-basic-tutorial/ Meta Developers – Spatial Anchors Tutorial]</ref>
<ref name="GoogleBlog2018">[https://developers.googleblog.com/2020/10/improving-shared-ar-experiences-with-cloud-anchors.html Google Developers Blog – Improving Shared AR Experiences with Cloud Anchors (2020)]</ref>
<ref name="ARCoreCloud">[https://developers.google.com/ar/develop/java/cloud-anchors/quickstart ARCore Developer Guide – Cloud Anchors Quickstart]</ref>
<ref name="Niantic_Enterprise">[https://www.nianticspatial.com/blog/spatial-anchors-enterprise-readiness Niantic Spatial – Spatial Anchors Enterprise Readiness (2025)]</ref>
<ref name="Meta_SharedAnchors">[https://developers.meta.com/horizon/documentation/unity/unity-shared-spatial-anchors/ Meta for Developers – Shared Spatial Anchors]</ref>
<ref name="MagicLeapSpaces">[https://developer-docs.magicleap.cloud/docs/guides/features/spaces/spatial-anchors/ Magic Leap Developer Docs – Spaces and Spatial Anchors]</ref>
<ref name="ARKit_WorldMap">[https://developer.apple.com/documentation/arkit/arworldmap Apple Developer – ARWorldMap Documentation]</ref>
<ref name="Qualium_Challenges">[https://www.qualium-systems.com/blog/what-are-spatial-anchors-and-why-they-matter/ Qualium Systems – Spatial Anchors Challenges]</ref>
<ref name="ASA_Privacy">[https://azure.microsoft.com/en-us/blog/azure-spatial-anchors-privacy-and-security/ Microsoft Azure Blog – Azure Spatial Anchors Privacy and Security]</ref>
<ref name="ARKitDocs">[https://developer.apple.com/documentation/arkit/aranchor Apple Developer – ARAnchor Documentation]</ref>
<ref name="AppleLocation">[https://www.apple.com/newsroom/2020/06/ios-14-offers-new-features/ Apple Newsroom – ARKit 4 Location Anchors (2020)]</ref>
<ref name="ARCoreAnchors">[https://developers.google.com/ar/develop/anchors Google ARCore – Working with Anchors]</ref>
<ref name="Reddit_Apps">[https://www.reddit.com/r/OculusQuest/comments/17fr7nc/apps_and_games_with_shared_spatial_anchors/ Reddit – Apps and Games with Shared Spatial Anchors]</ref>
<ref name="PokemonPlaygrounds">[https://nianticlabs.com/news/pokemon-playgrounds Niantic Labs – Pokemon Playgrounds Announcement]</ref>
<ref name="MinecraftEarth">[https://news.microsoft.com/features/minecraft-earth-azure-spatial-anchors/ Microsoft News – Minecraft Earth Technology]</ref>
<ref name="ShapesXR">[https://www.shapesxr.com/post/update-shadows-shared-spatial-anchors-copy-and-paste-between-spaces ShapesXR – Shared Spatial Anchors Update]</ref>
<ref name="IKEAPlace">[https://www.ikea.com/global/en/newsroom/innovation/ikea-launches-ikea-place IKEA – IKEA Place Launch Announcement]</ref>
<ref name="Qualium_UseCases">[https://www.qualium-systems.com/blog/what-are-spatial-anchors-and-why-they-matter/ Qualium Systems – Real-World Use Cases]</ref>
<ref name="CaseWestern">[https://news.microsoft.com/transform/case-western-reserve-pearson-hololens-spatial-anchors/ Microsoft News – Case Western Reserve and Pearson Education]</ref>
<ref name="XREAL_Limitations">[https://xreal.gitbook.io/nrsdk/development/spatial-anchor XREAL Developer Docs – Spatial Anchor Limitations]</ref>
<ref name="MagicLeapDrift">[https://developer-docs.magicleap.cloud/docs/guides/features/spaces/spatial-anchors/ Magic Leap Docs – Anchor Drift Mitigation]</ref>
<ref name="Meta_Drift">[https://communityforums.atmeta.com/discussions/dev-unity/spatial-anchors-issues Meta Community Forums – Spatial Anchors Issues]</ref>
<ref name="MS_Sunset_Impact">[https://www.mdpi.com/2076-3417/15/13/6959 MDPI – Cross-Platform Framework for Synchronizing Spatial Anchors]</ref>
<ref name="SABIAT_Paper">[https://benswift.me/assets/documents/preprints/he_et_al_2021_spatial_anchor_based_indoor_asset_tracking.pdf CSIRO Research – Spatial-Anchor-Based Indoor Asset Tracking]</ref>
<ref name="ARCore_Unity_Persistent">[https://docs.unity3d.com/Packages/com.unity.xr.arcore@6.2/manual/features/anchors/persistent-anchors.html Unity Documentation – Persistent Anchors for ARCore]</ref>
<ref name="Privacy_Data_Collection">[https://milvus.io/ai-quick-reference/what-are-the-privacy-concerns-related-to-ar-data-collection Milvus – Privacy Concerns Related to AR Data Collection]</ref>
<ref name="Privacy_Cloud">[https://pmc.ncbi.nlm.nih.gov/articles/PMC12301001/ PMC – Privacy-Preserving Local Spatial Anchors for Augmented Reality]</ref>
<ref name="Privacy_BystandAR">[https://news.vt.edu/articles/2024/01/new-tech-addresses-augmented-reality-s-privacy-problem.html Virginia Tech News – New Tech Addresses AR Privacy Problem (2024)]</ref>
<ref name="Privacy_EFF">[https://www.eff.org/deeplinks/2020/10/augmented-reality-must-have-augmented-privacy Electronic Frontier Foundation – Augmented Reality Must Have Augmented Privacy]</ref>
<ref name="Privacy_ITIF">[https://itif.org/publications/2021/03/04/balancing-user-privacy-and-innovation-augmented-and-virtual-reality/ ITIF – Balancing User Privacy and Innovation in AR/VR]</ref>
<ref name="Privacy_Biometric">[https://trustarc.com/resource/privacy-augmented-virtual-reality-platforms/ TrustArc – Data Privacy in Virtual and Augmented Reality Platforms]</ref>
<ref name="Future_Seisan">[https://seisan.com/the-infrastructure-behind-persistent-augmented-reality/ Seisan – The Infrastructure Behind Persistent Augmented Reality]</ref>
<ref name="Future_MasterDC">[https://www.masterdc.com/blog/augmented-reality-technology-cloud/ MasterDC – Augmented Reality Technology Will Be in the Cloud]</ref>
<ref name="Future_ABI">[https://www.abiresearch.com/press/ar-cloud-promises-high-value-future-proof-ar-use-cases-needs-help-enabling-technologies/ ABI Research – AR Cloud Promises High Value, Future Proof AR Use Cases]</ref>
<ref name="Future_ESPIN">[https://www.e-spincorp.com/ar-cloud-transforming-augmented-reality/ E-SPIN – AR Cloud: The Digital Twin Transforming Augmented Reality]</ref>
<ref name="Future_Gartner">[https://innovateenergynow.com/resources/ar-cloud InnovateEnergyNow – AR Cloud: The Future of Immersive Experiences]</ref>
<ref name="OpenARCloud">[https://www.openarcloud.org/workinggroups/overview Open AR Cloud – Working Groups Overview]</ref>
<ref name="Deloitte_Trends">[https://www2.deloitte.com/us/en/insights/focus/tech-trends/2025/tech-trends-future-of-spatial-computing.html Deloitte Tech Trends 2025 – Future of Spatial Computing]</ref>
<ref name="WEF_Spatial">[https://www.weforum.org/stories/2025/04/spatial-computing-wearables-robots-ai-next-frontier/ World Economic Forum – Spatial Computing: The Next Frontier]</ref>
<ref name="Gartner_Emerging">[https://www.gartner.com/en/documents/emerging-tech-spatial-computing Gartner – Emerging Tech: The Future of Spatial Computing Report (2025)]</ref>
</references>

[[Category:Terms]]