Markerless outside-in tracking
Marker-based outside-in systems (HTC Vive Lighthouse, PlayStation VR) place explicit targets such as active LEDs, retro-reflective spheres, or photodiode arrays on the headset and controllers and localise them against external base stations or cameras, achieving sub-millimetre precision and sub-10 ms latency. Markerless alternatives dispense with physical targets, improving user comfort and reducing setup time, but at the cost of:
* '''Lower positional accuracy and higher latency''' – Depth-sensor noise and computational overhead introduce millimetre- to centimetre-level error and roughly 20–30 ms of end-to-end latency.
* '''Sensitivity to occlusion''' – If a body part leaves the camera's line of sight, the model loses track until the part re-enters view; a simple per-joint fallback policy is sketched below.
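Most markerless runtimes report a per-joint confidence or tracking state alongside each position, and the application decides how to bridge short occlusions. The snippet below is a minimal sketch of one common policy, holding the last confidently tracked position until the joint reappears; the <code>Joint</code> structure, its field names, and the 0.5 confidence threshold are assumptions made for illustration rather than the API of any particular SDK.

<syntaxhighlight lang="python">
from dataclasses import dataclass, field

@dataclass
class Joint:
    """Hypothetical per-joint sample, as a generic depth-camera SDK might report it."""
    name: str
    position: tuple[float, float, float]  # metres, camera space
    confidence: float                     # 0.0 (lost) .. 1.0 (fully tracked)

@dataclass
class OcclusionFilter:
    """Hold the last confidently tracked position while a joint is occluded."""
    threshold: float = 0.5
    _last_good: dict[str, tuple[float, float, float]] = field(default_factory=dict)

    def update(self, joints: list[Joint]) -> dict[str, tuple[float, float, float]]:
        output = {}
        for j in joints:
            if j.confidence >= self.threshold:
                # Joint is visible: accept the measurement and remember it.
                self._last_good[j.name] = j.position
                output[j.name] = j.position
            elif j.name in self._last_good:
                # Joint is occluded: reuse the last confidently tracked position.
                output[j.name] = self._last_good[j.name]
            # Joints never seen with sufficient confidence are simply omitted.
        return output

# Usage: feed one frame of joints; occluded joints keep their last visible position.
filt = OcclusionFilter(threshold=0.5)
frame = [Joint("head", (0.0, 1.6, 2.0), 0.9), Joint("left_hand", (0.3, 1.0, 1.8), 0.2)]
pose = filt.update(frame)
</syntaxhighlight>

Production systems usually pair such a hold policy with motion prediction (for example a constant-velocity or Kalman model) so that briefly occluded joints keep moving instead of freezing in place.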
{| class="wikitable"
! Year !! System !! Notes
|-
| 2003 || [[EyeToy]] (PlayStation 2) || 2-D silhouette tracking with a single RGB camera for casual gesture-based games.
|-
| 2010 || [[Kinect]] for Xbox 360 || Consumer launch of a structured-light depth sensor delivering real-time full-body skeletal tracking (two active skeletons among up to six detected users).<ref name="Microsoft2010" />
|-
| 2014–2016 || Research prototypes || Studies showed Kinect V2 could supply 6-DOF head, hand, and body input to DIY VR HMDs.
|-
| 2017 || Kinect production ends || Microsoft discontinued Kinect hardware as commercial VR shifted toward marker-based and inside-out solutions.<ref name="Microsoft2017" />
|}
==Applications==
* '''Gaming and entertainment''' – Titles like ''Kinect Sports'' mapped whole-body actions directly onto avatars, and enthusiast VR chat platforms still use Kinect skeletons to animate full-body avatars (a single retargeting step is sketched after this list).
* '''Rehabilitation and exercise''' – Clinicians employ depth-based pose tracking to monitor range-of-motion exercises without encumbering patients with sensors.
* '''Interactive installations''' – Museums deploy wall-mounted depth cameras to create “magic-mirror” AR exhibits that overlay virtual costumes onto visitors in real time.
* '''Telepresence''' – Multi-Kinect arrays stream volumetric representations of remote participants into shared virtual spaces.
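Driving an avatar from a tracked skeleton, as in the gaming bullet above, largely reduces to computing, for each bone, the rotation that carries the bone's rest-pose direction onto the direction measured between its two joints. The sketch below shows just that single step with plain NumPy; the rest-pose direction, joint names, and sample coordinates are assumptions, and a full retargeting pipeline would also handle bone roll, limb-length differences, and filtering.

<syntaxhighlight lang="python">
import numpy as np

def shortest_arc_quaternion(v_from: np.ndarray, v_to: np.ndarray) -> np.ndarray:
    """Quaternion (w, x, y, z) rotating unit vector v_from onto unit vector v_to."""
    v_from = v_from / np.linalg.norm(v_from)
    v_to = v_to / np.linalg.norm(v_to)
    dot = float(np.dot(v_from, v_to))
    if dot < -0.999999:
        # Vectors are opposite: any axis perpendicular to v_from gives a 180-degree turn.
        axis = np.cross(v_from, np.array([1.0, 0.0, 0.0]))
        if np.linalg.norm(axis) < 1e-6:
            axis = np.cross(v_from, np.array([0.0, 1.0, 0.0]))
        axis = axis / np.linalg.norm(axis)
        return np.array([0.0, *axis])
    # Standard shortest-arc construction: q = normalize([1 + dot, cross]).
    q = np.array([1.0 + dot, *np.cross(v_from, v_to)])
    return q / np.linalg.norm(q)

# Example: drive a hypothetical avatar's upper arm from shoulder and elbow joint positions.
rest_direction = np.array([0.0, -1.0, 0.0])   # bone points straight down in the rest pose (assumption)
shoulder = np.array([0.20, 1.45, 2.00])       # metres, camera space (sample values)
elbow = np.array([0.35, 1.20, 1.95])
bone_rotation = shortest_arc_quaternion(rest_direction, elbow - shoulder)
</syntaxhighlight>

The resulting quaternion would then be applied to the matching avatar bone each frame, after whatever smoothing and occlusion handling the application uses.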
==Disadvantages==
* '''Occlusion sensitivity''' – Furniture or other players can block the line of sight, causing intermittent loss of tracking.
* '''Reduced accuracy and jitter''' – Compared with marker-based solutions, joint estimates exhibit higher positional noise, especially during fast or complex motion; a simple smoothing sketch follows this list.
* '''Environmental constraints''' – Bright sunlight, glossy surfaces, and feature-poor backgrounds degrade depth or feature extraction quality.
* '''Limited range and FOV''' – Most consumer depth cameras operate effectively only within 0.8–5 m; beyond that, depth resolution and skeleton stability decrease.
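In practice, jitter is reduced in software with a low-pass filter on each joint position, trading a little extra latency for stability; adaptive designs such as the 1€ filter vary the cutoff with speed so that slow motion stays smooth while fast motion stays responsive. The snippet below is a deliberately minimal fixed-weight sketch rather than any SDK's built-in smoothing; the joint names and the <code>alpha</code> value are placeholders.

<syntaxhighlight lang="python">
import numpy as np

class ExponentialSmoother:
    """Per-joint exponential moving average: heavier smoothing means less jitter but more lag."""

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha                       # 0 < alpha <= 1; smaller = smoother but laggier
        self._state: dict[str, np.ndarray] = {}  # last smoothed position per joint

    def update(self, name: str, position: np.ndarray) -> np.ndarray:
        previous = self._state.get(name)
        if previous is None:
            smoothed = position.astype(float)    # first sample: nothing to blend with yet
        else:
            smoothed = self.alpha * position + (1.0 - self.alpha) * previous
        self._state[name] = smoothed
        return smoothed

# Usage: raw joint positions in, smoothed positions out, one call per joint per frame.
smoother = ExponentialSmoother(alpha=0.3)
print(smoother.update("head", np.array([0.01, 1.62, 2.03])))
print(smoother.update("head", np.array([0.03, 1.58, 2.01])))  # jitter is damped toward the running average
</syntaxhighlight>

Choosing <code>alpha</code> is the whole trade-off: lower values suppress more noise but add perceptible lag, which matters most for head tracking.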