Vergence-accommodation conflict

|-
! [[Varifocal display|Varifocal]]
| [[Eye tracking|Eye-tracking]] determines the user's gaze depth, and the display system adjusts a single focal plane to match that depth using [[Tunable lens|tunable lenses]] (for example, liquid crystal, liquid lens, or Alvarez lenses) or mechanically moving components (screen or lens); see the first sketch following this table.
| Meta Reality Labs Butterscotch Varifocal (2023);<ref name="DisplayDaily2023">{{cite web |title=Meta’s Going to SIGGRAPH 2023 and Showing Flamera and Butterscotch VR Technologies |url=https://displaydaily.com/metas-going-to-siggraph-2023-and-showing-flamera-and-butterscotch-vr-technologies/ |website=Display Daily |date=2023-08-04}}</ref> UNC Wide-FOV deformable-mirror NED.<ref name="Dunn2017">{{cite journal |last=Dunn |first=D. |last2=Tippets |first2=C. |last3=Torell |first3=K. |last4=Kellnhofer |first4=P. |last5=Akşit |first5=K. |last6=Didyk |first6=P. |last7=Myszkowski |first7=K. |last8=Luebke |first8=D. |last9=Fuchs |first9=H. |title=Wide Field-of-View Varifocal Near-Eye Display Using See-Through Deformable Membrane Mirrors |journal=IEEE Transactions on Visualization and Computer Graphics |volume=23 |issue=4 |pages=1411–1420 |url=https://ieeexplore.ieee.org/document/7850947 |year=2017}}</ref>
| Delivers a correct focus cue at the depth of fixation. Challenges include eye-tracking latency and accuracy, depth-switching speed, limited depth range, and potentially incorrect blur cues for objects not at the fixation depth.<ref name="UNC2019">{{cite web |title=Dynamic Focus Augmented Reality Display |url=https://telepresence.web.unc.edu/research/dynamic-focus-augmented-reality-display/ |website=UNC Graphics and Virtual Reality Group |year=2019}}</ref>
|-
! [[Multifocal display|Multifocal / Multiplane]]
| Presents images on several fixed focal planes simultaneously (for example, using stacked LCDs or beam splitters) or time-sequentially. Content is rendered on the plane closest to its virtual depth; see the second sketch following this table.
| Stanford light-field HMD research;<ref name="Wired2015">{{cite web |last=Zhang |first=S. |title=The Obscure Neuroscience Problem That’s Plaguing VR |url=https://www.wired.com/2015/08/obscure-neuroscience-problem-thats-plaguing-vr/ |website=Wired |date=2015-08-11}}</ref> Magic Leap 1 (2 planes).
| Provides more nearly correct focus cues across multiple depths simultaneously without necessarily requiring eye-tracking. Challenges include complexity, cost, reduced brightness and contrast, potentially visible transitions between planes, and a limited number of planes.
|-
! [[Light field display|Light Field]]
|-
! [[Retinal projection|Retinal Projection / Scanning]]
| Scans modulated light (often laser) directly onto the retina, potentially creating an image that is always in focus regardless of the eye's accommodation state (Maxwellian view).
| Research systems; formerly North Focals (acquired by Google).
| Can bypass VAC by eliminating the need for accommodation. Challenges include a small [[Eyebox|eyebox]], potential visual artifacts (for example, [[Floater|floaters]] becoming more visible), safety concerns, and achieving high resolution and a wide field of view.
|-
! Emerging Optics
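
The varifocal row above describes a closed loop: estimate the fixation depth from binocular gaze, convert it to optical power, and drive a tunable element. A minimal sketch of that loop follows; the gaze-ray inputs and the <code>set_focal_power_dpt</code> lens interface are illustrative assumptions, not any vendor's API, and real systems add filtering, calibration, and latency compensation.

<syntaxhighlight lang="python">
import numpy as np

def vergence_depth_m(left_origin, left_dir, right_origin, right_dir):
    """Estimate fixation depth (meters) as the point of closest approach
    of the two gaze rays, which rarely intersect exactly in 3-D."""
    d1 = left_dir / np.linalg.norm(left_dir)
    d2 = right_dir / np.linalg.norm(right_dir)
    w0 = left_origin - right_origin
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:                  # near-parallel rays: far fixation
        return float("inf")
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p = 0.5 * ((left_origin + t1 * d1) + (right_origin + t2 * d2))
    return float(np.linalg.norm(p - 0.5 * (left_origin + right_origin)))

def drive_varifocal(depth_m, lens, min_dpt=0.0, max_dpt=4.0):
    """Convert fixation depth to optical power (diopters = 1/meters),
    clamp to the tunable element's range, and command the (hypothetical)
    lens object."""
    target = 0.0 if depth_m == float("inf") else 1.0 / max(depth_m, 0.25)
    lens.set_focal_power_dpt(min(max(target, min_dpt), max_dpt))

# Example: eyes 64 mm apart, both fixating a point 1 m straight ahead.
o_l, o_r = np.array([-0.032, 0.0, 0.0]), np.array([0.032, 0.0, 0.0])
target = np.array([0.0, 0.0, 1.0])
print(vergence_depth_m(o_l, target - o_l, o_r, target - o_r))  # ~1.0
</syntaxhighlight>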
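For the multifocal row, plane assignment is usually reasoned about in diopters rather than meters, because perceived defocus grows roughly with dioptric (reciprocal-distance) separation. The sketch below makes that assumption; the two-plane default only loosely mirrors a two-plane design such as Magic Leap 1 and is illustrative, not a published specification.

<syntaxhighlight lang="python">
def nearest_plane(virtual_depth_m, plane_depths_dpt=(3.0, 0.33)):
    """Return the index of the fixed focal plane closest to a virtual
    object's depth, comparing in diopters (1/m) because defocus scales
    with dioptric, not metric, separation. Default planes are an
    illustrative near/far pair (0.33 m and 3 m)."""
    depth_dpt = 1.0 / max(virtual_depth_m, 1e-3)
    return min(range(len(plane_depths_dpt)),
               key=lambda i: abs(plane_depths_dpt[i] - depth_dpt))

# Example: content at 0.4 m (2.5 dpt) lands on the near plane (3.0 dpt).
print(nearest_plane(0.4))   # -> 0
</syntaxhighlight>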


==Current Research Frontiers==
*  '''High-Resolution Varifocal Displays''': Prototypes like Meta’s Butterscotch demonstrate progress towards retinal resolution (for example, 60 pixels per degree) combined with reasonably fast depth switching, suggesting potential commercial viability (a back-of-envelope resolution calculation follows this list).<ref name="DisplayDaily2023" />
*  '''Focus-Correct Passthrough AR''': Integrating varifocal or multifocal optics into [[Video passthrough|video-see-through]] AR systems to correctly render both real-world and virtual imagery at appropriate focal depths.<ref name="UNC2019" />
*  '''Standards and Health Implications''': Ongoing work by standards bodies (for example, ISO TC 159 and IEC TC 100) to develop guidelines for extended VR/AR use, particularly concerning children and workplace applications.
*  '''Perceptual Modeling''': Research using large-sample studies to better understand individual variability in the accommodation-vergence relationship, potentially enabling personalized comfort settings or adaptive display parameters (an illustrative model-fitting sketch follows this list).<ref name="Lin2022" />
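
The back-of-envelope calculation referenced in the first bullet: under a uniform-sampling approximation, sustaining a given pixels-per-degree across a field of view fixes the required panel resolution along that axis. The 50° FOV used below is an assumed example, not a Butterscotch specification.

<syntaxhighlight lang="python">
import math

def pixels_for_fov(fov_deg, ppd=60):
    """Panel pixels along one axis needed to sustain a given
    pixels-per-degree over a field of view (uniform sampling)."""
    return math.ceil(fov_deg * ppd)

# Assumed example: a 50-degree horizontal FOV at a retinal 60 ppd
# needs about 3000 horizontal pixels per eye.
print(pixels_for_fov(50))   # -> 3000
</syntaxhighlight>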
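And the illustrative sketch referenced in the last bullet: one simple way large-sample data could drive personalization is a per-user least-squares fit of accommodation response against vergence demand, both in diopters. The linear form and the numbers below are assumptions for illustration, not the method or data of the cited study.

<syntaxhighlight lang="python">
import numpy as np

def fit_av_relation(vergence_dpt, accommodation_dpt):
    """Per-user least-squares fit: accommodation = gain * vergence + bias.
    A gain below 1.0 indicates accommodation lagging vergence demand."""
    A = np.vstack([vergence_dpt, np.ones_like(vergence_dpt)]).T
    gain, bias = np.linalg.lstsq(A, accommodation_dpt, rcond=None)[0]
    return float(gain), float(bias)

# Illustrative (invented) measurements in diopters for one user:
v = np.array([0.5, 1.0, 2.0, 3.0])
a = np.array([0.55, 0.90, 1.70, 2.50])
print(fit_av_relation(v, a))   # gain < 1.0: accommodative lag
</syntaxhighlight>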