The digital model of the environment created through spatial mapping is typically stored and utilized in one of two primary formats.
==== [[Mesh]] ====
A mesh represents the geometry of the scene as a continuous surface.<ref name="StereolabsDocsS2"/> It is composed of a set of interconnected, watertight triangles defined by vertices (points in 3D space) and faces (the triangles connecting the vertices).<ref name="StereolabsDocsS2"/><ref name="ViveBlog"/> This representation is highly efficient for computer graphics and is ideal for rendering visualizations of the environment.

It is also essential for physics simulations, as the mesh surfaces can be used for collision detection, allowing virtual objects to interact realistically with the mapped world.<ref name="MSDocsUnity">{{cite web |url=https://learn.microsoft.com/en-us/windows/mixed-reality/develop/unity/spatial-mapping-in-unity |title=Spatial mapping in Unity - Mixed Reality |publisher=Microsoft |access-date=2025-10-23}}</ref><ref name="HoloLensYouTube">{{cite web |url=https://www.youtube.com/watch?v=zff2aQ1RaVo |title=HoloLens - What is Spatial Mapping? |publisher=Microsoft |access-date=2025-10-23}}</ref>
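The vertex-and-face structure described above can be sketched in a few lines of code. This is a minimal illustration (the tetrahedron data is invented, not from any real scan), including a simple check of the watertight property: in a closed triangle mesh, every edge is shared by exactly two faces.

```python
# Minimal sketch of a triangle mesh as vertex and face lists, with a
# watertightness check. The tetrahedron here is illustrative sample data.
from collections import Counter

# Vertices: points in 3D space (x, y, z)
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
    (0.0, 0.0, 1.0),
]

# Faces: triangles given as index triples into the vertex list
faces = [
    (0, 1, 2),
    (0, 1, 3),
    (0, 2, 3),
    (1, 2, 3),
]

def is_watertight(faces):
    """A closed (watertight) mesh has every edge shared by exactly two faces."""
    edge_counts = Counter()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            edge_counts[(min(u, v), max(u, v))] += 1
    return all(count == 2 for count in edge_counts.values())

print(is_watertight(faces))  # a closed tetrahedron -> True
```

Physics engines rely on exactly this face data for collision detection: each triangle becomes a collision surface that virtual objects can rest on or bounce off.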
Meshes can be further processed through filtering to reduce polygon count for better performance and can be textured for enhanced realism.<ref name="StereolabsDocsS2"/> The resulting spatial map data (often a dense mesh of the space) is continually updated as the device observes more of the environment or detects changes in it. Spatial mapping typically runs in the background in real-time on the device, so that the virtual content can be rendered as if anchored to fixed locations in the physical world.<ref name="MicrosoftDoc"/>
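As a rough sketch of what such filtering involves: scanned meshes often contain degenerate (near-zero-area) triangles that add polygons without adding shape, and a trivial filtering pass can drop them. Production pipelines use proper decimation algorithms (e.g. quadric edge collapse); the function and sample data below are purely illustrative.

```python
# Naive mesh filtering sketch: remove degenerate (near-zero-area) triangles.
# Real spatial-mapping pipelines use full decimation algorithms; this only
# shows the general idea of reducing polygon count by filtering faces.
import math

def triangle_area(p, q, r):
    # Area = 0.5 * |PQ x PR| (half the cross-product magnitude)
    ux, uy, uz = (q[i] - p[i] for i in range(3))
    vx, vy, vz = (r[i] - p[i] for i in range(3))
    cx = uy * vz - uz * vy
    cy = uz * vx - ux * vz
    cz = ux * vy - uy * vx
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def filter_faces(vertices, faces, min_area=1e-8):
    """Keep only faces whose area exceeds min_area."""
    return [f for f in faces
            if triangle_area(*(vertices[i] for i in f)) > min_area]

verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (2, 0, 0)]
tris = [(0, 1, 2), (0, 1, 3)]   # second triangle's vertices are collinear
print(filter_faces(verts, tris))  # [(0, 1, 2)] - degenerate face dropped
```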
==== [[Fused Point Cloud]] ====
A point cloud represents the environment's geometry as a set of discrete 3D points, each with a position and often a color attribute.<ref name="StereolabsDocsS2"/><ref name="ZaubarLexicon"/> A "fused" point cloud is one that has been aggregated and refined over time from multiple sensor readings and camera perspectives. This fusion process creates a denser and more accurate representation than a single snapshot could provide.<ref name="StereolabsDocsS2"/> While point clouds are often an intermediate step before mesh generation, they can also be used directly by certain algorithms, particularly for localization, where the system matches current sensor readings against the stored point cloud to determine its position.<ref name="MetaHelp"/>
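One common way to fuse repeated observations of the same surface, sketched below, is voxel-grid averaging: points that fall into the same voxel are merged into their centroid, which both deduplicates overlapping readings and averages out sensor noise. The voxel size and the sample points are illustrative assumptions, not values from any particular device.

```python
# Sketch of point-cloud fusion by voxel-grid averaging: points landing in
# the same voxel are merged into their centroid. Voxel size and sample
# observations are illustrative only.
from collections import defaultdict

def fuse_points(points, voxel_size=0.25):
    bins = defaultdict(list)
    for p in points:
        # Integer voxel coordinates serve as the bin key
        key = tuple(int(c // voxel_size) for c in p)
        bins[key].append(p)
    fused = []
    for pts in bins.values():
        n = len(pts)
        fused.append(tuple(sum(p[i] for p in pts) / n for i in range(3)))
    return fused

# Two noisy observations of the same surface point, plus one distinct point
obs = [(1.000, 2.000, 0.500), (1.004, 2.001, 0.502), (3.0, 1.0, 0.0)]
fused = fuse_points(obs)
print(len(fused))  # 3 observations collapse to 2 fused points
```

The same voxel index can also serve localization: matching the voxel keys of a fresh sensor reading against those of the stored cloud gives a fast, coarse estimate of overlap before finer alignment.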