Spatial mapping
Meshes can be further processed through filtering to reduce polygon count for better performance and can be textured for enhanced realism.<ref name="StereolabsDocsS2"/> The resulting spatial map data (often a dense mesh of the space) is continually updated as the device observes more of the environment or detects changes in it. Spatial mapping typically runs in the background in real time on the device, so that virtual content can be rendered as if anchored to fixed locations in the physical world.<ref name="MicrosoftDoc"/>
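One common filtering approach is vertex-clustering simplification: vertices falling in the same voxel cell are merged into a single representative vertex, and triangles that collapse in the process are dropped. The sketch below illustrates the idea in plain Python; the function name, the cell size, and the centroid-based merging are illustrative assumptions, not the API of any particular spatial mapping SDK.

```python
# Illustrative sketch (not a specific SDK's API): vertex-clustering mesh
# simplification. Vertices in the same voxel cell merge into their centroid;
# triangles whose corners collapse to the same cluster are discarded.
from collections import defaultdict

def simplify_mesh(vertices, triangles, cell=0.5):
    """Collapse vertices per voxel cell and rebuild the triangle list."""
    cell_of = lambda v: (int(v[0] // cell), int(v[1] // cell), int(v[2] // cell))
    clusters = defaultdict(list)  # voxel cell -> indices of vertices inside it
    for i, v in enumerate(vertices):
        clusters[cell_of(v)].append(i)
    remap, new_vertices = {}, []
    for idxs in clusters.values():
        # Representative vertex for the cluster: the centroid.
        cx = sum(vertices[i][0] for i in idxs) / len(idxs)
        cy = sum(vertices[i][1] for i in idxs) / len(idxs)
        cz = sum(vertices[i][2] for i in idxs) / len(idxs)
        for i in idxs:
            remap[i] = len(new_vertices)
        new_vertices.append((cx, cy, cz))
    new_triangles = []
    for a, b, c in triangles:
        ra, rb, rc = remap[a], remap[b], remap[c]
        if ra != rb and rb != rc and ra != rc:  # drop degenerate triangles
            new_triangles.append((ra, rb, rc))
    return new_vertices, new_triangles

# Six vertices in three clusters; the two triangles spanning all three
# clusters survive, while the triangle inside a single cluster collapses.
verts = [(0, 0, 0), (0.1, 0, 0), (0.6, 0, 0), (0.7, 0, 0), (0, 0.6, 0), (0.1, 0.6, 0)]
tris = [(0, 2, 4), (1, 3, 5), (0, 1, 2)]
nv, nt = simplify_mesh(verts, tris)
print(len(nv), len(nt))  # 3 2
```

Real pipelines typically use error-driven decimation (e.g. quadric error metrics) rather than this uniform clustering, trading more computation for better preservation of sharp features.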
==== Fused Point Cloud ====
A [[point cloud]] represents the environment's geometry as a set of discrete 3D points, each with a position and often a color attribute.<ref name="StereolabsDocsS2"/><ref name="ZaubarLexicon"/> A "fused" point cloud is one that has been aggregated and refined over time from multiple sensor readings and camera perspectives. This fusion process creates a denser and more accurate representation than a single snapshot could provide.<ref name="StereolabsDocsS2"/> While point clouds are often an intermediate step before mesh generation, they can also be used directly by certain algorithms, particularly for localization, where the system matches current sensor readings against the stored point cloud to determine its position.<ref name="MetaHelp"/>
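The fusion idea described above can be sketched with a simple voxel-averaging scheme: points from successive snapshots that land in the same voxel are averaged, so repeated observations refine the map instead of duplicating points. This is a minimal illustration of the concept, not the fusion algorithm of any specific device; the function name and voxel size are assumptions.

```python
# Minimal sketch (illustrative, not any SDK's actual fusion algorithm):
# aggregate multiple point cloud snapshots into one fused cloud by
# averaging all points that fall into the same voxel cell.
from collections import defaultdict

def fuse_point_clouds(snapshots, voxel_size=0.05):
    """Fuse a list of (x, y, z) point lists into one voxel-averaged cloud."""
    sums = defaultdict(lambda: [0.0, 0.0, 0.0, 0])  # voxel -> [sx, sy, sz, count]
    for cloud in snapshots:
        for x, y, z in cloud:
            key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
            acc = sums[key]
            acc[0] += x; acc[1] += y; acc[2] += z; acc[3] += 1
    # One averaged point per occupied voxel.
    return [(sx / n, sy / n, sz / n) for sx, sy, sz, n in sums.values()]

# Two noisy observations of the same surface point collapse into one
# fused point near their average position.
fused = fuse_point_clouds([[(1.02, 2.03, 0.51)], [(1.03, 2.04, 0.52)]])
print(len(fused))  # 1
```

Production systems refine this with per-point confidence weighting and outlier rejection, but the core principle is the same: accumulate evidence per spatial cell rather than storing every raw sample.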
=== Key Parameters and Configuration ===