The HMD landscape continues to evolve rapidly, with several promising developments on the horizon:

===Display Advancements===
*'''[[Varifocal display|Varifocal Displays]]''': Systems that dynamically adjust focal depth based on where the user is looking (via eye tracking) or on scene content, addressing the vergence-accommodation conflict; a geometric sketch follows this list. Technologies include movable lenses/displays, [[Liquid crystal lens|liquid crystal lenses]], [[Alvarez lens|Alvarez lenses]], and multi-focal-plane displays.<ref name="Rathinavel2018Varifocal">Rathinavel, K.; et al. (2018). "An extended depth-at-field volumetric near-eye augmented reality display". ''IEEE Transactions on Visualization and Computer Graphics''. '''24''' (11): 2857–2866. doi:10.1109/TVCG.2018.2868565.</ref>
*'''[[Light field|Light Field Displays]]''': Generate a more complete representation of light, allowing the eye to focus naturally at different depths within the virtual scene; they remain complex and computationally intensive.<ref name="Huang2015LightFieldStereoscope">Huang, F. C.; Chen, K.; Wetzstein, G. (2015). "The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues". ''ACM Transactions on Graphics''. '''34''' (4): 1–12. doi:10.1145/2766949.</ref>
*'''[[Holographic display|Holographic Displays]]''': Aim to reconstruct the wavefront of light from a 3D scene, potentially offering the most natural 3D viewing experience without such focus conflicts. True holographic HMDs remain highly experimental.
*Higher resolution, brightness, and efficiency as Micro-OLED and [[MicroLED]] display technologies mature.
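The following minimal sketch illustrates the geometry a varifocal system relies on; the function names and numbers are hypothetical, not taken from any cited implementation. An eye tracker reports the binocular vergence angle, from which the fixation depth, and hence the optical power the focus actuator should target, can be estimated.

<syntaxhighlight lang="python">
import math

def vergence_depth_m(ipd_m: float, vergence_rad: float) -> float:
    """Estimate fixation distance from the binocular vergence angle.

    Assumes symmetric fixation straight ahead, so the two gaze rays and the
    interpupillary baseline form an isosceles triangle:
    depth = (IPD / 2) / tan(vergence / 2).
    """
    return (ipd_m / 2.0) / math.tan(vergence_rad / 2.0)

def target_power_diopters(depth_m: float) -> float:
    """Optical power a varifocal element should adopt so the accommodation
    cue matches the vergence cue (power in diopters = 1 / distance in m)."""
    return 1.0 / depth_m

# Illustrative values only: a 63 mm IPD and a ~2.06 degree vergence angle
# place the fixation point at roughly 1.75 m (about 0.57 diopters).
depth = vergence_depth_m(0.063, math.radians(2.06))
print(f"fixation: {depth:.2f} m -> target power: {target_power_diopters(depth):.2f} D")
</syntaxhighlight>

A real system must additionally filter noisy gaze samples and respect the actuator's response time; multi-focal-plane designs instead quantize the computed depth to the nearest available focal plane.
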
===Form Factor Evolution===
*'''Lightweight Designs''': Advanced optics (pancake lenses, [[Metalens|metalenses]], and holographic optical elements) and display technologies are enabling significantly thinner and lighter headsets (under 300 g, and in some designs under 100 g).
*'''[[Smartglasses|AR Glasses]]''': The long-term goal for AR is achieving normal eyeglass form factors with all-day wearability and significant compute/display capabilities. Projects like [[Project Aria]] (Meta research) and rumored [[Apple smart glasses|Apple glasses]] point toward this future.<ref name="Delaney2021ARGlasses">Delaney, K. (2021). "The race toward human-centered AR glasses". ''IEEE Computer Graphics and Applications''. '''41''' (5): 112–115. doi:10.1109/MCG.2021.3097740.</ref>
===Sensory Expansion===
*'''[[Haptic technology|Advanced Haptic Feedback]]''': Beyond simple controller rumble, providing more nuanced tactile sensations via gloves ([[HaptX]]), bodysuits ([[bHaptics]]), ultrasound ([[Ultraleap]]), or other actuators to simulate touch, texture, and impact.
*'''[[Digital scent technology|Olfactory Displays]]''': Systems that generate scents synchronized with virtual environments to enhance immersion (e.g., [[OVR Technology]]).
*'''[[Motion capture|Full-body Tracking]]''': Moving beyond head and hands to track limb and torso movements for more complete avatar embodiment, using external trackers ([[Vive Tracker]]), webcam-based AI solutions, or integrated sensors; a minimal solver sketch follows this list.
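As an illustration of the avatar-embodiment step, the hypothetical sketch below (not drawn from any cited product) solves a classic two-bone inverse-kinematics problem in 2D: placing an elbow so that a tracked wrist position is reached from the shoulder. Full-body systems solve many such joint chains every frame from sparse tracker data.

<syntaxhighlight lang="python">
import math

def two_bone_ik(target_x: float, target_y: float,
                upper_len: float, fore_len: float):
    """Solve a 2-segment limb (shoulder at the origin) so the wrist reaches
    the target, using the law of cosines. Returns (shoulder_angle,
    elbow_bend) in radians; elbow_bend is 0 for a fully extended arm."""
    d = math.hypot(target_x, target_y)
    # Clamp to the reachable annulus so acos stays within its domain.
    d = max(abs(upper_len - fore_len) + 1e-6,
            min(d, upper_len + fore_len - 1e-6))
    cos_elbow = (upper_len**2 + fore_len**2 - d**2) / (2 * upper_len * fore_len)
    elbow_bend = math.pi - math.acos(cos_elbow)
    cos_offset = (upper_len**2 + d**2 - fore_len**2) / (2 * upper_len * d)
    shoulder = math.atan2(target_y, target_x) - math.acos(cos_offset)
    return shoulder, elbow_bend

# Reach a point 0.5 m out, 0.2 m up with a 0.3 m upper arm and 0.25 m forearm.
s, e = two_bone_ik(0.5, 0.2, 0.3, 0.25)
print(f"shoulder: {math.degrees(s):.1f} deg, elbow bend: {math.degrees(e):.1f} deg")
</syntaxhighlight>

Production avatar systems extend this idea to 3D with an additional "swivel" parameter for the elbow plane, and blend the analytic solution with learned pose priors when trackers are sparse.
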
===Computational Capabilities===
*'''[[Edge computing|Edge/Cloud Computing]]''': Offloading demanding processing (rendering, AI) from standalone HMDs to nearby edge servers or the cloud to enable higher-fidelity experiences while maintaining mobility (e.g., [[NVIDIA CloudXR]], [[Plutosphere]]); a latency-budget sketch follows this list.<ref name="Liu2019EdgeAR">Liu, L.; Li, H.; Gruteser, M. (2019). "Edge assisted real-time object detection for mobile augmented reality". ''Proceedings of the 25th Annual International Conference on Mobile Computing and Networking'': 1–16. doi:10.1145/3300061.3345431.</ref>
*'''[[Artificial intelligence|AI Integration]]''': On-device AI for smarter environment understanding, more robust hand/eye tracking, predictive rendering, personalized experiences, intelligent virtual agents, and natural language interaction.
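Remote rendering is viable only if the added delay fits within the motion-to-photon budget that comfortable VR requires (commonly cited as roughly 20 ms). The sketch below adds up the stages of such a pipeline; the stage timings are illustrative assumptions, not measurements of any cited system.

<syntaxhighlight lang="python">
def motion_to_photon_ms(sensor_ms: float, uplink_ms: float, render_ms: float,
                        encode_ms: float, downlink_ms: float,
                        decode_ms: float, display_ms: float) -> float:
    """Sum the serial stages of a remote-rendering pipeline: pose capture,
    network uplink, server-side render, video encode, network downlink,
    on-device decode, and display scanout."""
    return (sensor_ms + uplink_ms + render_ms + encode_ms
            + downlink_ms + decode_ms + display_ms)

# Illustrative (assumed, not measured) timings for a local edge server:
total = motion_to_photon_ms(sensor_ms=2, uplink_ms=3, render_ms=5,
                            encode_ms=4, downlink_ms=3, decode_ms=3,
                            display_ms=8)
print(f"estimated motion-to-photon latency: {total} ms")
# Late-stage reprojection on the headset ("timewarp") can mask part of this
# delay by re-aligning the received frame to the newest head pose.
</syntaxhighlight>
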
===Interfaces===
*'''Improved Hand/Eye/Face Tracking''': Higher-fidelity tracking of expressions and micro-movements for more realistic avatars and nuanced control.
*'''[[Brain-Computer Interface|Neural Interfaces]]''': Non-invasive BCIs (e.g., EMG wristbands, EEG sensors) may offer supplementary input channels in the future; an illustrative signal-processing sketch follows this list.
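As an illustration of how an EMG wristband could serve as an input channel, the sketch below applies the common rectify-and-smooth preprocessing and a simple threshold to turn muscle activation into a binary "select" event. The code and signal are synthetic and hypothetical, not drawn from any cited device.

<syntaxhighlight lang="python">
import numpy as np

def emg_envelope(samples: np.ndarray, window: int = 50) -> np.ndarray:
    """Rectify the raw EMG trace and smooth it with a moving average,
    a common first step before classifying muscle activity."""
    rectified = np.abs(samples - samples.mean())  # remove DC offset, rectify
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode="same")

def detect_activation(envelope: np.ndarray, threshold: float) -> np.ndarray:
    """Boolean activation signal: True wherever the envelope exceeds the
    threshold, which a UI shell could map to a 'select' gesture."""
    return envelope > threshold

# Synthetic example: baseline noise with a burst of simulated contraction.
rng = np.random.default_rng(0)
signal = rng.normal(0, 0.05, 1000)
signal[400:600] += rng.normal(0, 0.5, 200)  # simulated muscle activity
env = emg_envelope(signal)
print("activation samples:", int(detect_activation(env, 0.1).sum()))
</syntaxhighlight>

Practical systems replace the fixed threshold with per-user calibration and multichannel classifiers that distinguish individual finger movements.
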


==Market Outlook==