Vergence-accommodation conflict: Difference between revisions
|-
! [[Light field display|Light Field]]
| Attempts to reconstruct the 4D light field of the scene (rays of light with position and direction), allowing the eye's lens to focus naturally at different depths within the reproduced volume.
| Research using lenslet arrays, parallax barriers, holographic optical elements, and super-multi-view displays.
| Potentially provides true continuous focus cues without eye tracking. Challenges include extremely high resolution and bandwidth requirements, computational complexity, limited field of view, and trade-offs between spatial and angular resolution.
|-
! [[Holography|Holographic Displays]]
| Aims to fully reconstruct the wavefront of light from the virtual scene using diffraction patterns generated by [[Spatial light modulator|spatial light modulators]].
| Research by Microsoft Research, VividQ, and Light Field Lab.
| Theoretically the most complete solution, providing all depth cues, including accommodation, correctly. Challenges include high computational cost, image-quality artifacts such as speckle noise, limited field of view, and the hardware complexity of real-time, high-quality HMDs.
|-
! [[Retinal projection|Retinal Projection / Scanning]]