| used_by = [[ARKit]], [[ARCore]], [[Microsoft HoloLens]], [[Magic Leap]], [[Meta Quest]], [[OpenXR]]
}}
[[File:Spatial anchors1.png|right|300px]]
[[File:Spatial anchors2.png|right|300px]]
[[File:Spatial anchors4.png|right|300px]]
'''Spatial anchors''' are persistent reference points in the real world that [[augmented reality]] (AR) and [[mixed reality]] (MR) systems use to lock virtual objects to a fixed location in physical space.<ref name="MagicLeap">[https://developer-docs.magicleap.cloud/docs/guides/unity/perception/anchors/spatial-anchors-overview/ Magic Leap Developer Docs – Spatial Anchors Overview (2025)]</ref><ref name="MSLearn">[https://learn.microsoft.com/en-us/windows/mixed-reality/design/spatial-anchors Microsoft Learn – Spatial Anchors (2025)]</ref> A spatial anchor establishes a world-locked frame of reference that defines a point in the environment with a unique coordinate frame, capturing a complete '''[[6 degrees of freedom]] (6DOF)''' representation: three translational coordinates (X, Y, Z position) and three rotational coordinates (pitch, yaw, roll orientation).<ref name="OpenXR">[https://registry.khronos.org/OpenXR/specs/1.1/man/html/XrSpatialAnchorMSFT.html Khronos OpenXR – XR_MSFT_spatial_anchor Extension Specification]</ref><ref name="BrownWiki">[https://www.vrwiki.cs.brown.edu/vr-development-software/unity/spatial-anchors VR Software Wiki – Spatial Anchors in Unity]</ref> The AR device continuously tracks this anchor over time, so that any digital content attached to it remains accurately '''[[world-locked]]''' (tied to a real-world position and orientation) rather than floating or drifting as the user moves.<ref name="ARKitAnchor">[https://www.captechconsulting.com/blogs/visualizing-surfaces-detected-by-arkit CapTech Consulting – ARAnchor ARKit Overview (2019)]</ref>
By rendering virtual objects relative to a spatial anchor's coordinate system, those objects appear fixed in the real world with minimal drift or deviation, even as the user changes their viewpoint or returns to the scene later.<ref name="MagicLeap"/><ref name="OpenXR"/> This capability is essential for creating believable and immersive experiences where digital elements appear to be a natural part of the user's surroundings, solving the fundamental AR problem of '''[[drift]]''', where virtual objects can appear to float away from their intended positions as the system's understanding of the environment updates.<ref name="RecreateFAQ">[https://recreate.nl/faq-items/what-is-a-spatial-anchor/ Recreate – What is a spatial anchor?]</ref>
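Anchor-relative rendering reduces to a simple pose composition: content is authored in the anchor's local coordinate frame, and its world pose is recomputed from the anchor's latest tracked pose every frame. The following Unity C# sketch illustrates the idea; the `anchorTransform`, `localPosition`, and `localRotation` names are illustrative and not part of any platform SDK.

<syntaxhighlight lang="csharp">
using UnityEngine;

// Minimal sketch: keep a piece of content expressed in anchor-local
// coordinates and recompute its world pose from the anchor every frame.
// "anchorTransform" stands in for whatever transform the platform's
// anchor component keeps world-locked (e.g. an ARAnchor's Transform).
public class AnchoredContent : MonoBehaviour
{
    public Transform anchorTransform;                        // world-locked anchor pose (assumed)
    public Vector3 localPosition;                            // content position in anchor space
    public Quaternion localRotation = Quaternion.identity;   // content rotation in anchor space

    void LateUpdate()
    {
        if (anchorTransform == null) return;

        // Compose the anchor's world pose with the content's anchor-local pose:
        // p_world = p_anchor + R_anchor * p_local,  R_world = R_anchor * R_local
        transform.position = anchorTransform.position + anchorTransform.rotation * localPosition;
        transform.rotation = anchorTransform.rotation * localRotation;
    }
}
</syntaxhighlight>

Because the content is re-derived from the anchor each frame, any correction the tracking system applies to the anchor automatically carries the content with it, which is exactly what keeps it world-locked.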
Spatial anchors enable three critical features in AR/MR applications: '''[[stability]]''' (ensuring virtual content stays precisely fixed in place), '''[[persistence]]''' (allowing virtual content to be saved and reloaded across different sessions), and '''[[collaboration]]''' (enabling multiple users and devices to share a common frame of reference for co-located, multi-user experiences).<ref name="MSLearn"/><ref name="MetaDesignAnchors">[https://developers.meta.com/horizon/design/mr-design-spatial-anchors/ Meta for Developers – Spatial Anchors Design]</ref>
== Definition and core principles ==
=== Apple ARKit ===
[[ARKit]] represents anchors with the `ARAnchor` class, which Apple defines as "a real-world position and orientation that can be used for placing objects in an AR scene."<ref name="ARKitAnchor"/> ARKit provides '''10+ specific anchor types''', all inheriting from the base ARAnchor class: '''ARAnchor''' (base class for position and orientation), '''ARPlaneAnchor''' (horizontal and vertical surfaces with semantic classification), '''ARImageAnchor''' (tracked images with scale estimation), '''ARObjectAnchor''' (real-world 3D objects), '''ARBodyAnchor''' (human body positions), '''ARFaceAnchor''' (facial tracking), '''ARGeoAnchor''' (geographic locations using GPS and visual positioning, ARKit 4.0+), '''ARMeshAnchor''' (polygonal mesh geometry using LiDAR, ARKit 3.5+), '''ARParticipantAnchor''' (multi-user AR experiences), '''AREnvironmentProbeAnchor''' (environmental lighting), and '''ARAppClipCodeAnchor''' (App Clip Code tracking, ARKit 4.0+).<ref name="ARKitDocs">[https://developer.apple.com/documentation/arkit/aranchor Apple Developer – ARAnchor Documentation]</ref>
ARKit automatically generates some anchors (for example, when planar surfaces or images are detected, it creates plane anchors or image anchors), and developers can add their own anchors at arbitrary positions. ARKit does not have a built-in cloud anchor service, but it allows multi-user sharing by merging AR sessions: an app can share a map (which contains anchors) with another device to synchronize their coordinate space. In ARKit 4, Apple introduced '''Location Anchors (ARGeoAnchor)''', which use high-resolution Apple Maps data in certain cities to anchor content to real-world coordinates (latitude, longitude, altitude), enabling experiences where AR content is tied to specific landmarks or locations.<ref name="AppleLocation">[https://www.apple.com/newsroom/2020/06/ios-14-offers-new-features/ Apple Newsroom – ARKit 4 Location Anchors (2020)]</ref>
=== Google ARCore ===
[[ARCore]] provides an `Anchor` class in its API for locking a virtual object's position. ARCore anchors are often created after a '''hit test''' (raycast) against the environment or attached to detected plane surfaces. ARCore provides four anchor types: '''Local Anchors''' (stored locally, valid for single app instance), '''Cloud Anchors''' (hosted in Google Cloud, shareable between devices/users), '''Persistent Cloud Anchors''' (ARCore 1.20+, configurable lifetime from 1 to 365 days), and '''Geospatial Anchors''' (based on GPS coordinates plus Visual Positioning System).<ref name="ARCoreAnchors">[https://developers.google.com/ar/develop/anchors Google ARCore – Working with Anchors]</ref>
In addition to local anchors, ARCore's Cloud Anchors (and persistent Cloud Anchors) enable saving and sharing anchors via Google's cloud. ARCore also offers geospatial anchors through the Geospatial API, using global VPS; developers can create a WGS84 anchor by specifying a latitude, longitude, and altitude, and ARCore will align it using localization from satellite imagery and Street View data.<ref name="ARCoreGeo"/>
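The hit-test-then-anchor pattern that ARCore encourages can be sketched with Unity's AR Foundation wrapper, which drives ARCore on Android (and ARKit on iOS). The snippet below is a minimal illustration rather than Google's native Java/Kotlin API; it assumes AR Foundation 4.1 or later and that an `ARRaycastManager` and `ARAnchorManager` exist in the scene.

<syntaxhighlight lang="csharp">
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Sketch: raycast from a screen tap against detected planes and attach an
// anchor at the hit pose, so the placed content stays world-locked.
public class TapToAnchor : MonoBehaviour
{
    public ARRaycastManager raycastManager;   // assumed to exist in the scene
    public ARAnchorManager anchorManager;     // assumed to exist in the scene
    public GameObject contentPrefab;          // virtual object to place

    static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();

    void Update()
    {
        if (Input.touchCount == 0 || Input.GetTouch(0).phase != TouchPhase.Began)
            return;

        // Hit-test against planes detected by the underlying platform.
        if (raycastManager.Raycast(Input.GetTouch(0).position, hits,
                                   TrackableType.PlaneWithinPolygon))
        {
            ARRaycastHit hit = hits[0];
            var plane = hit.trackable as ARPlane;
            if (plane == null) return;

            // Attach an anchor to the plane at the hit pose; the anchor's
            // transform is then kept up to date by the platform's tracking.
            ARAnchor anchor = anchorManager.AttachAnchor(plane, hit.pose);
            if (anchor != null)
                Instantiate(contentPrefab, anchor.transform);
        }
    }
}
</syntaxhighlight>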
=== Microsoft Windows Mixed Reality ===
On [[HoloLens]] and other WMR devices, spatial anchors (sometimes called ''world anchors'') are used to keep holograms fixed in place. The device continuously refines the anchors using its spatial mapping and tracking system. Microsoft provided the '''Azure Spatial Anchors''' cloud service for cross-platform persistence: developers could create an anchor on a HoloLens (as a `CloudSpatialAnchor` in the SDK), upload it to Azure, then locate that same anchor on an iPhone or Android device, allowing a shared experience.<ref name="ASA"/>
Locally, HoloLens also allowed saving anchors to disk and exporting/importing them (via a WorldAnchorTransferBatch in Unity) to persist holograms between app runs.<ref name="MSLearn3"/> The HoloLens system has a limit on how far content can be from an anchor before stability decreases; Microsoft recommends keeping holograms within a few meters of their anchor for best accuracy, specifically noting that holograms more than 3 meters from the anchor origin experience noticeable positional errors.<ref name="MSLearn2"/>
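The local persistence described above can be sketched with Unity's legacy Windows Mixed Reality API (`WorldAnchorStore`), which has since been removed from recent Unity versions; treat the snippet as a historical illustration. The anchor id and hologram reference are illustrative names.

<syntaxhighlight lang="csharp">
using UnityEngine;
using UnityEngine.XR.WSA;              // legacy Windows Mixed Reality APIs (removed in newer Unity)
using UnityEngine.XR.WSA.Persistence;  // WorldAnchorStore

// Sketch of local anchor persistence on HoloLens: save a hologram's
// WorldAnchor under an id and reload it on the next app run.
public class LocalAnchorPersistence : MonoBehaviour
{
    public GameObject hologram;                     // content to keep world-locked
    const string AnchorId = "example-anchor-id";    // illustrative id
    WorldAnchorStore store;

    void Start()
    {
        WorldAnchorStore.GetAsync(OnStoreLoaded);
    }

    void OnStoreLoaded(WorldAnchorStore loadedStore)
    {
        store = loadedStore;

        // Try to restore a previously saved anchor onto the hologram.
        WorldAnchor restored = store.Load(AnchorId, hologram);
        if (restored == null)
        {
            // No saved anchor yet: anchor the hologram where it is now and persist it.
            WorldAnchor anchor = hologram.AddComponent<WorldAnchor>();
            store.Save(AnchorId, anchor);
        }
    }
}
</syntaxhighlight>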
=== Meta (Oculus) Quest ===
[[Meta]]'s mixed reality SDK supports spatial anchors for its VR/AR headsets. Developers using the Meta Oculus SDK or OpenXR on Quest devices can create anchors (for example with the `OVRSpatialAnchor` in Unity) to persist virtual objects in a room. These anchors can be saved to the device's storage so that content remains in place across sessions (Quest headsets can store a certain number of anchors in their local memory).<ref name="MetaAnchors"/>
Meta provides a '''[[Shared Spatial Anchors]]''' feature that lets devices in proximity share anchor data with each other for co-located multiplayer experiences (two users in the same room see the same virtual object anchored on a table). In the Meta framework, an anchor is described as a "world-locked frame of reference" for content, underscoring that it maintains its position in the real world rather than moving with the user.<ref name="MetaAnchors"/> Meta supports two sharing models: an older user-based model that requires Oculus User IDs and a newer, recommended group-based model that uses a shared UUID to simplify the sharing process.<ref name="Meta_SharedAnchors"/>
The `OVRSpatialAnchor` component in Unity enables creating anchors at specific positions, saving anchors locally or to Meta servers, loading previously saved anchors, erasing anchors when no longer needed, and querying for nearby anchors. Configuration requires Anchor Support enabled, Shared Spatial Anchor Support set to Supported, and Enhanced Spatial Services enabled in device settings.<ref name="MetaAnchors"/>
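A minimal creation-and-save flow with `OVRSpatialAnchor` might look like the sketch below. It assumes the Meta XR Core SDK is installed and Anchor Support is enabled; the exact name of the save call has changed across SDK versions (shown here as `SaveAnchorAsync`, an assumption about recent releases), so check the SDK in use.

<syntaxhighlight lang="csharp">
using System.Collections;
using UnityEngine;

// Sketch: create a spatial anchor at a chosen pose on Quest and persist it.
public class QuestAnchorPlacer : MonoBehaviour
{
    public GameObject contentPrefab;   // content that should stay world-locked

    public void PlaceAnchor(Vector3 position, Quaternion rotation)
    {
        // The content lives on the same GameObject as the anchor component,
        // so it stays world-locked once the anchor is created and tracked.
        GameObject anchorObject = Instantiate(contentPrefab, position, rotation);
        OVRSpatialAnchor anchor = anchorObject.AddComponent<OVRSpatialAnchor>();
        StartCoroutine(SaveWhenReady(anchor));
    }

    IEnumerator SaveWhenReady(OVRSpatialAnchor anchor)
    {
        // The anchor is usable only after the runtime reports it as created.
        yield return new WaitUntil(() => anchor == null || anchor.Created);
        if (anchor == null) yield break;

        Debug.Log($"Anchor created with UUID {anchor.Uuid}, saving...");
        _ = anchor.SaveAnchorAsync();   // assumed API name; older SDK versions expose SaveAsync()/Save()
    }
}
</syntaxhighlight>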
=== Magic Leap ===
[[Magic Leap]] 1 and 2 devices use spatial anchors as part of their Space mapping. A Magic Leap user scans an environment to establish a Space (a map of the area); within that space, anchors can be placed to mark where virtual content should appear. Magic Leap 2 supports up to 5 local Spaces stored on the device, allowing a user to save multiple environments with their anchors.<ref name="MagicLeap"/>
When a user returns to a mapped Space, the device can localize itself and restore all anchors (and content) in that space. The platform uses a concept called "Spaces," which are persistent, scanned 3D maps of an environment. Within these Spaces, developers can create spatial anchors that can be stored either locally on the device or in the AR Cloud.<ref name="MagicLeapSpaces">[https://developer-docs.magicleap.cloud/docs/guides/features/spaces/spatial-anchors/ Magic Leap Developer Docs – Spaces and Spatial Anchors]</ref> While Magic Leap's earlier cloud service for sharing anchors (Cloud Anchor / AR Cloud) was phased out, developers can integrate third-party services (or use Magic Leap's local networking) for multi-user scenarios if needed.
=== Unity AR Foundation ===
[[Unity AR Foundation]] provides a '''cross-platform abstraction''' for spatial anchors across ARKit (iOS), ARCore (Android), Magic Leap, Meta Quest, and HoloLens (via OpenXR). The `ARAnchor` component can be added via `AddComponent<ARAnchor>()` on any GameObject, with transforms automatically updated by AR Foundation.<ref name="UnityAnchor"/> Manually changing an anchor's transform is not recommended. Newly added anchors enter a pending state until tracking is fully established, which can be queried via the `ARAnchor.pending` property.
The `ARAnchorManager` manages the lifecycle of all anchors in the scene, providing an `anchorsChanged` event reporting added, updated, and removed anchors. The `AttachAnchor(ARPlane plane, Pose pose)` method attaches anchors to detected planes. The system translates between Unity world space and AR session space, compensating for tracking loss and session reinitialization.<ref name="UnityAnchor"/>
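A minimal subscription to this lifecycle might look like the following sketch, assuming AR Foundation 4.x/5.x (newer releases replace `anchorsChanged` with a generalized trackables event) and an `ARAnchorManager` present on the session origin; the logging is purely illustrative.

<syntaxhighlight lang="csharp">
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Sketch: observe the anchor lifecycle events exposed by ARAnchorManager.
// Added anchors may still be "pending" until the platform fully establishes
// tracking; updates and removals are reported through the same event.
public class AnchorLifecycleLogger : MonoBehaviour
{
    public ARAnchorManager anchorManager;   // assumed to exist in the scene

    void OnEnable()  => anchorManager.anchorsChanged += OnAnchorsChanged;
    void OnDisable() => anchorManager.anchorsChanged -= OnAnchorsChanged;

    void OnAnchorsChanged(ARAnchorsChangedEventArgs args)
    {
        foreach (ARAnchor anchor in args.added)
            Debug.Log($"Anchor added: {anchor.trackableId}, pending = {anchor.pending}");

        foreach (ARAnchor anchor in args.updated)
            Debug.Log($"Anchor updated: {anchor.trackableId}, state = {anchor.trackingState}");

        foreach (ARAnchor anchor in args.removed)
            Debug.Log($"Anchor removed: {anchor.trackableId}");
    }
}
</syntaxhighlight>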
=== Gaming and entertainment ===
'''Multiplayer gaming''' uses spatial anchors for co-located multiplayer games, where players in the same physical room can interact with a shared virtual world. In titles like '''[[Demeo]]''', a virtual game board is anchored to a physical table, allowing multiple players to see and interact with the same game state from their unique perspectives.<ref name="MetaDesignAnchors"/> Other notable examples include the robot battle game '''[[BAM]]''' and the AR dueling game '''[[Saber City]]'''.<ref name="Reddit_Apps">[https://www.reddit.com/r/OculusQuest/comments/17fr7nc/apps_and_games_with_shared_spatial_anchors/ Reddit – Apps and Games with Shared Spatial Anchors]</ref>
'''[[Pokemon Go]]''' by Niantic uses AR+ Mode with ARCore/ARKit to anchor Pokemon to physical locations with centimeter accuracy through its Visual Positioning System. The '''Pokemon Playgrounds''' feature enables shared persistent AR experiences at PokéStops and Gyms where trainers can place Pokemon for others to discover. With 91+ million active players and 176+ million copies sold, Pokemon Go demonstrated the viability of world-scale AR gaming.<ref name="PokemonPlaygrounds">[https://nianticlabs.com/news/pokemon-playgrounds Niantic Labs – Pokemon Playgrounds Announcement]</ref>
'''[[Minecraft Earth]]''' utilized Azure Spatial Anchors and PlayFab integration to create life-size AR experiences, allowing players to build and share persistent structures in the real world. The game featured "Adventures": small slices of Minecraft worlds rendered in life-size AR on sidewalks and parks. Though later retired, it demonstrated the technical feasibility of large-scale collaborative AR gaming.<ref name="MinecraftEarth">[https://news.microsoft.com/features/minecraft-earth-azure-spatial-anchors/ Microsoft News – Minecraft Earth Technology]</ref>
=== Collaborative design and productivity ===
Applications such as '''[[Spatial]]''' and '''[[Arkio]]''' leverage shared anchors to create a common virtual space where multiple users can co-create, manipulate, and review 3D models and designs in real time. This effectively turns any room into a collaborative digital studio, enhancing creative and professional workflows.<ref name="Reddit_Apps"/> '''[[ShapesXR]]''' implements real-time co-building with shadows and copy-paste functionality between spaces using shared spatial anchors.<ref name="ShapesXR">[https://www.shapesxr.com/post/update-shadows-shared-spatial-anchors-copy-and-paste-between-spaces ShapesXR – Shared Spatial Anchors Update]</ref>
=== Retail and commerce ===
'''[[IKEA Place]]''' launched in September 2017 as one of the first major ARKit implementations, featuring 2,000+ true-to-scale 3D furniture models with 98% accuracy. The app uses spatial anchors to place virtual furniture persistently in homes, with updated features including multi-placement (place multiple items simultaneously), room sets (experience entire rooms with handpicked furniture), visual search (point camera at furniture to find similar IKEA products), and wishlist and sharing capabilities.<ref name="IKEAPlace">[https://www.ikea.com/global/en/newsroom/innovation/ikea-launches-ikea-place IKEA – IKEA Place Launch Announcement]</ref>
Built in just seven weeks, IKEA Place uses 3D renders from existing catalogs. Michael Valdsgaard (Digital Transformation Leader) commented: "Augmented reality and virtual reality will be a total game changer for retail in the same way as the internet. Only this time, much faster." The application demonstrates how spatial anchors enable consumers to visualize products in their actual spaces before purchase, reducing return rates and improving purchase confidence.<ref name="IKEAPlace"/>
=== Industrial and enterprise ===
'''[[Remote assistance and maintenance]]:''' In industrial settings, an on-site technician wearing an AR headset can share their view with a remote expert. The expert can then place spatially anchored instructions, diagrams, or annotations directly onto the real-world machinery. '''ThyssenKrupp Elevator Service''' uses HoloLens with Azure Spatial Anchors, enabling remote experts to mark up machinery with virtual annotations visible to on-site technicians. This implementation reduced maintenance time by approximately 30%. Technicians see instructions anchored to specific machine parts, reducing errors and improving first-time fix rates.<ref name="Qualium_UseCases"/>
'''[[Warehouse logistics]]:''' '''[[Honeywell Connected Plant]]''' projects virtual arrows onto warehouse floors for optimized picking paths, improving order picking speed by approximately 25%. The persistence of spatial anchors ensures arrows remain accurate across shifts, and the solution has been deployed across multiple warehouse locations, demonstrating scalability.<ref name="Qualium_UseCases"/>
'''[[Worker training]]:''' Complex procedures can be taught more effectively by anchoring step-by-step holographic instructions to specific parts of a machine or workspace. This allows trainees to learn in a hands-on, contextually relevant manner without risk to live equipment. Factory floor visualization enables workers to visualize machine status, navigate facilities, and access real-time IoT data overlaid on equipment.<ref name="Qualium_UseCases"/>
=== Healthcare and medical education ===
'''[[Pearson Education]]''' has nursing students and professors practice diagnosing and treating virtual patients in 3D real-world settings using HoloLens and mobile devices, relying on Azure Spatial Anchors' cross-platform support. Jeff Mlakar from Case Western Reserve University stated: "We can reach more students, educators and families by uniting our experiences across mobile and HoloLens devices...With Spatial Anchors' cross-platform support, we can bring our curriculum to life in 3D and share it with everyone."<ref name="CaseWestern">[https://news.microsoft.com/transform/case-western-reserve-pearson-hololens-spatial-anchors/ Microsoft News – Case Western Reserve and Pearson Education]</ref>
VR platforms convert MRI/CT DICOM stacks into interactive 3D reconstructions for surgical planning, enabling pre-surgical rehearsals and multi-disciplinary team reviews. Surgical AR navigation provides intraoperative decision support with metric-accurate volumetric models and AI-driven segmentation for precise tool guidance. Medical training platforms use virtual reality training for emergency medical procedures, with spatial anchors enabling consistent placement of training scenarios.<ref name="Qualium_UseCases"/>
=== Architecture and construction ===
Architecture and construction firms use spatial anchors for '''[[design review]]''', where architects and site workers review building plans overlaid on construction sites. Spatial planning enables visualization of proposed structures in real-world context, while progress tracking compares planned versus actual construction with persistent anchors marking key reference points.<ref name="RecreateFAQ"/>
Theatre set design uses Azure Object Anchors to identify objects (couches, props) and Azure Spatial Anchors to map stage locations for multi-scene prop placement. Museums and exhibits implement interactive exhibits with persistent holographic content, and smart city infrastructure deploys persistent AR overlays for navigation, information displays, and public services.<ref name="Qualium_UseCases"/>
In education, spatial anchors enable persistent educational content across classrooms and campuses with 3D curriculum visualization. Students explore complex subjects through 3D visualizations anchored to physical spaces, and multiple students can work on shared holographic content simultaneously in collaborative projects.<ref name="RecreateFAQ"/>
'''[[Indoor navigation]]:''' In large, complex venues such as airports, museums, or train stations, where GPS is unreliable, spatial anchors can be used to create persistent, turn-by-turn AR navigation paths. These paths can guide visitors directly to their gate, exhibit, or platform, enhancing the visitor experience.<ref name="RecreateFAQ"/>
'''[[Interactive museum exhibits]]:''' Museums can use spatial anchors to overlay historical information, 3D reconstructions of artifacts, or interactive animations directly onto their physical displays. This provides visitors with a richer, more engaging, and contextually layered educational experience.<ref name="RecreateFAQ"/>
== Technical challenges and limitations ==
=== Environmental constraints ===
'''[[Feature-scarce environments]]''' present the primary challenge: empty white walls, uniform floors, and large glass areas lack visual features for anchor creation. Anchors fail to create or match reliably in these conditions. The practical impact is severe in modern office buildings with minimalist design.<ref name="Qualium_Challenges"/><ref name="XREAL_Limitations">[https://xreal.gitbook.io/nrsdk/development/spatial-anchor XREAL Developer Docs – Spatial Anchor Limitations]</ref>
'''[[Lighting conditions]]''' cause disruptions when abrupt changes occur (lights turning off/on, moving between dark and bright areas). Anchors may "jump" or temporarily disappear during adjustment. Documentation recommends even lighting and avoiding dramatic lighting changes for optimal anchor stability.<ref name="Qualium_Challenges"/><ref name="XREAL_Limitations"/>
'''[[Dynamic environments]]''' with moving objects (people, equipment) occlude reference features, causing tracking issues and anchor instability. This is particularly problematic in crowded spaces or busy warehouses where the environment constantly changes. '''Surface requirements''' exclude transparent, semi-transparent, or reflective surfaces; mirrors, glass, and glossy surfaces prevent the system from detecting and tracking features effectively.<ref name="Qualium_Challenges"/>
=== Drift and accuracy issues ===
'''[[Scale drift]]''' occurs as small tracking errors accumulate over time, causing "drift" where virtual objects slowly diverge from intended positions. Drift becomes noticeable at greater than 0.2 meters deviation. Mitigation strategies include regular anchor updates, recalibration, and creating fresh anchors when drift exceeds acceptable thresholds.<ref name="Qualium_Challenges"/><ref name="MagicLeapDrift">[https://developer-docs.magicleap.cloud/docs/guides/features/spaces/spatial-anchors/ Magic Leap Docs – Anchor Drift Mitigation]</ref>
Sharing anchors between devices can introduce additional alignment errors, where each user sees the virtual content in a slightly different position; the displacement can be subtle (4–5 cm) but is often significant enough (up to 20 cm in some cases) to break the illusion of stability, especially for applications requiring high precision.<ref name="Meta_Drift">[https://communityforums.atmeta.com/discussions/dev-unity/spatial-anchors-issues Meta Community Forums – Spatial Anchors Issues]</ref>
'''[[Latency issues]]''' in anchor stabilization affect user experience: matching saved anchor data to real-time visuals should ideally occur in under 5 seconds. Poor performance leads to user frustration and abandonment. Platform variance shows ARKit is typically faster than cloud anchors (ARCore/Azure Spatial Anchors) for initial anchor establishment.<ref name="Qualium_Challenges"/>
=== Scalability and performance ===
=== Best practices for implementation ===
'''[[Environment scanning]]''' requires moving the device slowly and sweeping the camera across all surfaces, with a scan duration of 5–15 seconds recommended. Developers should capture textures, furniture, and paintings for better feature detection while avoiding rapid viewpoint changes or head movements.<ref name="ARCoreCloud"/>
'''[[Anchor density management]]''' includes creating a mesh hierarchy with a main anchor plus secondary anchors for detail, avoiding overloading small areas with too many anchors, deleting unused anchors to stay within quotas and reduce locate times, and considering the anchor lifecycle: creation, persistence, sharing, and deletion.<ref name="MSLearn2"/>
'''[[Error handling and user feedback]]''' should display clear messages when anchor tracking is limited ("Re-scanning environment to find anchor..."), guide users through the rescanning process, and provide visual indicators of anchor status. Recovery strategies include re-scanning the area regularly if objects drift more than 0.2 meters, creating fresh anchors if drift persists, and implementing a fallback to a stationary frame of reference for highly dynamic holograms.<ref name="Qualium_Challenges"/>
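As a concrete illustration of the 0.2-meter heuristic above, the following Unity C# sketch watches how far an anchor's reported pose has been corrected since content was placed and raises an event once the correction passes a threshold, so the application can prompt re-scanning or create a fresh anchor. The component, field names, and default threshold are illustrative choices, not part of any platform SDK.

<syntaxhighlight lang="csharp">
using UnityEngine;
using UnityEngine.Events;

// Sketch: track how far an anchor's pose has been corrected since content
// was placed, and notify the app when the correction exceeds a threshold.
public class AnchorDriftMonitor : MonoBehaviour
{
    public Transform anchorTransform;          // the anchor's world-locked transform (assumed)
    public float driftThresholdMeters = 0.2f;  // illustrative threshold, tune per application
    public UnityEvent onDriftExceeded;         // hook up re-anchoring or user guidance here

    Vector3 initialAnchorPosition;
    bool notified;

    void Start()
    {
        // Record where the anchor reported itself at placement time.
        initialAnchorPosition = anchorTransform.position;
    }

    void Update()
    {
        if (notified || anchorTransform == null) return;

        // Accumulated correction applied to the anchor since placement.
        float correction = Vector3.Distance(anchorTransform.position, initialAnchorPosition);
        if (correction > driftThresholdMeters)
        {
            Debug.LogWarning($"Anchor corrected by {correction:F2} m since placement; consider re-scanning or creating a fresh anchor.");
            onDriftExceeded?.Invoke();
            notified = true;   // avoid repeated notifications until re-anchored
        }
    }
}
</syntaxhighlight>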
== Privacy and ethical considerations ==
=== AI and spatial computing convergence ===
'''[[On-device machine learning]]''' in spatial computing enables Apple's Create ML to train object tracking models directly from the command line, converting 3D model files through ML training for visionOS tracking. '''[[Multimodal AI]]''' bridges spatial and business data, processing text, image, audio, and spatial data types simultaneously.<ref name="Deloitte_Trends">[https://www2.deloitte.com/us/en/insights/focus/tech-trends/2025/tech-trends-future-of-spatial-computing.html Deloitte Tech Trends 2025 – Future of Spatial Computing]</ref>
'''[[Spatial AI]]''' at the intersection of spatial computing and AI/ML includes edge AI processors for local inference, spatial cameras and sensors for AR, hardware-in-the-loop simulation platforms, and geospatial data processing units. Deloitte Tech Trends 2025 predicts spatial computing will converge with '''[[agentic AI]] systems''' that are context-aware and capable of executing functions proactively, able to serve the right content at the right time without explicit commands, and multimodal while processing spatial, visual, and business data simultaneously.<ref name="Deloitte_Trends"/>
=== Future applications ===