Passthrough

*'''Active Depth Sensing:''' Incorporating dedicated depth sensors (IR ToF, Structured Light, [[LiDAR]]) provides robust, real-time 3D geometry information of the environment, improving reprojection accuracy, occlusion handling, and spatial anchoring.<ref name="UploadVR_Q3Review"/><ref name="VisionProPassthrough"/> This enables features like quick room meshing via APIs (e.g., Meta's Spatial Anchors, Apple's ARKit/RoomPlan).
*'''[[Machine Learning]] Enhancements:''' Using AI/ML for various tasks:
**Image upscaling and denoising to improve clarity, especially in low light.
**Advanced [[reprojection]] algorithms for more accurate perspective correction.<ref name="QuestProPassthrough"/>
**Scene segmentation to identify objects (hands, people, furniture) for better interaction and occlusion.<ref name="XRToday_def"/>
**Improving [[Simultaneous localization and mapping|SLAM]] for more stable tracking and anchoring of virtual objects.
*'''Reprojection and Virtual Cameras:''' Software techniques that warp the captured camera images based on depth data to synthesize a view from the user's actual eye positions ("virtual cameras"<ref name="KGuttag_Align"/>). [[Asynchronous TimeWarp|Time-warping]] techniques can further reduce perceived latency by adjusting the image based on last-moment head movements.
*'''Improved Optics and Displays:''' [[Pancake lens|Pancake lenses]] allow for thinner headsets where cameras can potentially be placed closer to the eyes, reducing offset. Higher resolution, higher [[dynamic range]] (e.g., [[Micro-OLED]] in Vision Pro), and faster refresh rate displays improve the fidelity of the displayed passthrough feed. Careful calibration of lens distortion profiles is also applied.<ref name="RoadToVR_PSVR2"/>
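The depth-based reprojection described above can be sketched in a few lines of NumPy. This is a simplified illustration, not any vendor's actual pipeline: each camera pixel is back-projected into 3D using its depth value and the camera intrinsics, rigidly transformed by a (hypothetical) camera-to-eye extrinsic, and re-projected into the virtual eye camera. Real headsets use mesh-based warping, hole filling, and time-warp on top of this.

```python
import numpy as np

def reproject(image, depth, K, cam_to_eye):
    """Warp a camera image to a virtual eye viewpoint using per-pixel depth.

    image      : (H, W) array, the captured passthrough frame
    depth      : (H, W) array of positive depths in metres
    K          : (3, 3) camera intrinsic matrix
    cam_to_eye : (4, 4) rigid transform from camera frame to eye frame
                 (an illustrative stand-in for real headset calibration data)
    """
    h, w = depth.shape
    # Back-project every pixel to a 3D point in the camera frame.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
    rays = np.linalg.inv(K) @ pix            # rays at unit depth
    pts_cam = rays * depth.reshape(1, -1)    # scale rays by measured depth
    # Rigidly transform the points into the eye frame.
    pts_eye = cam_to_eye[:3, :3] @ pts_cam + cam_to_eye[:3, 3:4]
    # Project into the virtual eye camera (same intrinsics assumed).
    proj = K @ pts_eye
    uv = (proj[:2] / proj[2:3]).T.reshape(h, w, 2)
    # Forward-splat with nearest-neighbour scatter; production systems
    # instead rasterize a depth mesh to avoid holes and double projections.
    out = np.zeros_like(image)
    ui = np.clip(np.round(uv[..., 0]).astype(int), 0, w - 1)
    vi = np.clip(np.round(uv[..., 1]).astype(int), 0, h - 1)
    out[vi, ui] = image
    return out
```

With an identity camera-to-eye transform the warp is a no-op, which makes the geometry easy to sanity-check; a small lateral translation in `cam_to_eye` shifts near pixels more than far ones, which is exactly the parallax correction passthrough needs.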