{{see also|Terms|Technical Terms}}
'''Vergence‑accommodation conflict''' ('''VAC'''), also called '''accommodation‑vergence conflict''' or '''accommodation–vergence mismatch''', is a visual–perceptual problem that arises when the cues that drive the eyes' rotation ('''[[Vergence|vergence]]''') and the cues that drive the crystalline lens's focus ('''[[Accommodation (eye)|accommodation]]''') specify ''different viewing distances''. Under natural viewing the two mechanisms are neurally coupled via the [[Accommodation reflex|accommodation–vergence reflex]], so they agree; most stereoscopic displays break that coupling, which is now recognised as a primary source of visual discomfort, blurred vision, and reduced task performance in modern [[Virtual reality|VR]], [[Augmented reality|AR]], and other [[Stereoscopy|stereoscopic]] systems.<ref name="Hoffman2008" /><ref name="Shibata2011" />


==Physiological basis==
When fixating a real‑world target, the visual system executes two tightly linked responses:
* '''[[Vergence]]''' – the two eyes rotate inwards ([[Convergence (eye)|convergence]]) or outwards ([[Divergence (eye)|divergence]]) so that their lines of sight intersect at the target, driven mainly by [[Binocular disparity|retinal disparity]].
* '''[[Accommodation (eye)|Accommodation]]''' – the [[Ciliary muscle|ciliary muscle]] changes the [[Optical power|optical power]] of the [[Crystalline lens|crystalline lens]] so that the [[Retina|retinal]] image is in sharp focus, driven mainly by retinal blur.


Reciprocal neural signals, known as the [[Accommodation reflex|accommodation–vergence reflex]], link the two responses so they normally operate at the ''same'' distance.<ref name="Kramida2016" /> Stereoscopic displays decouple them because disparity cues stimulate vergence to the ''simulated'' depth of a virtual object, while blur cues still stimulate accommodation to the ''physical'' surface of the display, which is usually fixed at one optical distance.
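Because both responses are driven by the same target distance in natural viewing, their demands can be computed side by side (a minimal sketch, not from the sources above; the function name and the 63 mm inter‑pupillary distance are illustrative assumptions):

```python
import math

def viewing_demands(distance_m, ipd_m=0.063):
    """Vergence and accommodation demands for one real-world target.

    Returns (vergence angle in degrees, accommodation demand in diopters).
    In natural viewing both are functions of the same target distance,
    which is why the two responses normally agree."""
    vergence_deg = math.degrees(2 * math.atan(ipd_m / (2 * distance_m)))
    accommodation_d = 1.0 / distance_m  # diopters = 1 / distance in meters
    return vergence_deg, accommodation_d

# The two demands rise and fall together as the target moves in depth:
for d in (0.5, 1.0, 2.0):
    v, a = viewing_demands(d)
    print(f"{d} m -> vergence {v:.2f} deg, accommodation {a:.2f} D")
```

A stereoscopic display breaks exactly this agreement: vergence follows the rendered depth while accommodation stays at the screen distance.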


==Causes / occurrence in display technologies==
The vergence-accommodation conflict is inherent in display technologies where the perceived depth of content differs from the physical or optical distance of the display surface:
* '''Fixed‑focus HMDs''' – Nearly all consumer VR headsets use lenses to place a virtual image of the screens at a single focal distance, typically between ≈1 m and 2 m. Vergence varies with rendered disparity, so objects that appear nearer than the focal plane create a ''positive VAC'' (vergence distance < accommodation distance), whereas objects that appear farther create a ''negative VAC'' (vergence distance > accommodation distance).<ref name="Shibata2011" />
* '''3‑D cinema and television''' – The screen is farther away and subtends a smaller field of view, so conflicts are weaker, but they still constrain comfortable stereoscopic depth budgets.<ref name="ISO2015" />
* '''Optical see‑through AR''' – Virtual imagery is often rendered at a fixed focus (e.g., ≈2 m). When it is overlaid on real objects at different depths, users may be unable to focus sharply on both simultaneously, degrading registration and comfort.<ref name="Zhou2021" />
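The sign convention described above can be expressed numerically (a minimal sketch; the function name and the 1.5 m focal distance are assumptions for illustration):

```python
def signed_vac(focal_distance_m, vergence_distance_m):
    """Signed vergence-accommodation conflict in diopters.

    Positive when the virtual object appears nearer than the focal plane
    (vergence distance < accommodation distance); negative when farther."""
    return 1.0 / vergence_distance_m - 1.0 / focal_distance_m

# Headset with a fixed focal plane at 1.5 m (an assumed typical value):
print(signed_vac(1.5, 0.5))   # object at 0.5 m -> positive VAC (about +1.33 D)
print(signed_vac(1.5, 10.0))  # object at 10 m  -> negative VAC (about -0.57 D)
```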


==Effects and symptoms==
Prolonged VAC drives the extraocular and ciliary muscles toward conflicting distances; documented effects include:<ref name="Hoffman2008" /><ref name="Lin2022" />
* visual fatigue / eyestrain and headaches;
* transient or persistent blur;
* diplopia (double vision) when fusion fails;
* reduced reading speed and depth‑judgement accuracy;
* contributions to [[Virtual Reality Sickness|VR sickness]] symptoms such as nausea and dizziness.


Susceptibility is ''highly individual'', but large-scale studies indicate that conflicts below ≈0.5 [[Diopter|diopters]] (D) in front of the focal plane and ≈1 D behind it fall inside a broad "zone of comfort" for most viewers.<ref name="Shibata2011" /><ref name="ISO2015" />


==Measurement==
VAC can be quantified by comparing the optical power (measured in [[Diopter|diopters]], D, the reciprocal of distance in meters) required for accommodation with the optical power corresponding to the vergence distance.<ref name="Kramida2016" />
<code>VAC (D) = | 1 / accommodation distance (m) − 1 / vergence distance (m) |</code>
 
For example, if an HMD has a fixed focus set to 2 meters (requiring 1/2.0 = 0.5 D of accommodation) and displays a virtual object that appears to be 0.5 meters away (requiring 1/0.5 = 2.0 D of vergence), the VAC magnitude is |0.5 D - 2.0 D| = 1.5 D.
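This calculation takes only a few lines to reproduce (a minimal sketch; the function name is illustrative):

```python
def vac_diopters(accommodation_distance_m, vergence_distance_m):
    """Magnitude of the vergence-accommodation conflict in diopters
    (a diopter is the reciprocal of a distance in meters)."""
    return abs(1.0 / accommodation_distance_m - 1.0 / vergence_distance_m)

# Fixed focus at 2 m, virtual object rendered at 0.5 m (the worked example):
print(vac_diopters(2.0, 0.5))  # 1.5
```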
 


==Mitigation strategies==
===Content & interaction guidelines===
# Keep critical UI elements and text on, or slightly behind, the display's focal plane, where VAC is zero or negative and generally better tolerated.<ref name="Shibata2011" />
# Limit positive VAC to < 0.5 D and negative VAC to < 1 D for sustained viewing.<ref name="ISO2015" />
# Avoid rapid disparity jumps (> 1 D within 0.5 s) for prominent objects.
# Use gaze‑contingent [[Depth of field|depth‑of‑field]] blur or foveated rendering to reinforce correct [[Monocular cues|monocular]] depth cues.<ref name="Koulieris2017" />
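The depth budgets above can be checked programmatically during content design (a minimal sketch; the function name and the 1.5 m focal distance are illustrative assumptions):

```python
def within_comfort_zone(focal_distance_m, vergence_distance_m,
                        near_budget_d=0.5, far_budget_d=1.0):
    """Check a rendered depth against the suggested budgets:
    positive VAC (object nearer than the focal plane) under 0.5 D,
    negative VAC (object farther than it) under 1.0 D."""
    conflict = 1.0 / vergence_distance_m - 1.0 / focal_distance_m
    budget = near_budget_d if conflict >= 0 else far_budget_d
    return abs(conflict) < budget

# Fixed focal plane at 1.5 m (an assumed typical value):
print(within_comfort_zone(1.5, 1.0))   # about +0.33 D -> True
print(within_comfort_zone(1.5, 0.5))   # about +1.33 D -> False
```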


===Technological solutions===
These hardware approaches aim to let the accommodation distance dynamically match the vergence distance demanded by virtual content:
{| class="wikitable plainrowheaders"
! Approach !! Principle !! Representative prototypes !! Notes
|-
! [[Varifocal display|Varifocal]]
| Eye‑tracking selects gaze depth; tunable lenses or moving optics shift a ''single'' focal plane accordingly.
| Meta Reality Labs "Butterscotch" (2023); UNC deformable‑mirror NED (2017).<ref name="MetaButterscotch2023" /><ref name="Dunn2017" />
| Delivers correct focus at the fixation depth; challenges include eye‑tracking latency and blur mismatch for peripheral objects.
|-
! [[Multifocal display|Multifocal / multiplane]]
| Renders content on two or more discrete focal planes, either simultaneously or time‑multiplexed.
| Magic Leap 1 (two planes); Stanford light‑field prototypes.<ref name="Wired2015" />
| Provides approximate focus cues across depth; added optical complexity and brightness loss.
|-
! [[Light field display|Light‑field]] & [[Holography|holographic]]
| Reconstruct the 4‑D light field or full wavefront so the eye can naturally accommodate within the volume.
| Holographic near‑eye displays (Microsoft Research; VividQ) remain research prototypes.
| Highest theoretical fidelity, but currently limited by compute, resolution, and field of view.
|-
! [[Retinal projection|Retinal projection / scanning]]
| Scans modulated light directly onto the retina (Maxwellian view).
| Early commercial attempts (North Focals); research scanners.
| Eliminates the accommodation demand but suffers from a small [[eyebox]] and visible sparkle/floaters.
|-
! Emerging optics
| Alvarez freeform lenses, fluidic or MEMS deformable elements for compact varifocal modules.
| Large‑aperture Alvarez AR head‑up display demonstrator (2024).<ref name="Liu2024" />
| Research stage; challenges include manufacturing tolerances and power consumption.
|}


==Temporary workarounds==
* '''Monocular viewing''' removes [[Binocular disparity|disparity]] and therefore vergence cues, eliminating VAC at the cost of stereopsis.<ref name="Kreylos2014" />
* '''Optical correction''': wearing up‑to‑date [[Eyeglass prescription|prescription glasses]] or [[Contact lens|contact lenses]] prevents additional accommodative effort from uncorrected [[Refractive error|refractive errors]].
* '''Adaptation''': some users partially adapt to VAC over tens of minutes, but after‑effects (e.g., a shift in resting focus) can persist for several hours after removing the headset.<ref name="Hoffman2008" />


==Current research frontiers==
* '''High‑resolution varifocal HMDs''' integrating retinal‑resolution displays (≈60 pixels per degree) with low‑latency depth actuation.<ref name="MetaButterscotch2023" />
* '''Focus‑correct passthrough AR''' combining dynamic‑focus optics with camera‑based [[Video passthrough|passthrough]] so that both real and virtual imagery are rendered at appropriate focal depths.<ref name="Dunn2017" />
* '''Standards and health guidance''': ongoing work by standards bodies (e.g., ISO TC 159, IEC TC 100) on guidelines for extended VR/AR use, particularly for children and workplace applications.
* '''Individualised comfort models''' derived from eye‑tracking and psychophysical data to adapt depth budgets per user.<ref name="Lin2022" />


==See also==
* [[Accommodation (eye)]]
* [[Depth perception]]
* [[Eye tracking]]
* [[Head-Mounted Display]]
* [[Light field display]]
* [[Stereoscopy]]
* [[Varifocal display]]
* [[Vergence]]
* [[Virtual reality sickness]]
* [[Visual fatigue]]


==References==
<references>
<ref name="Hoffman2008">{{cite journal |last1=Hoffman |first1=David M. |last2=Girshick |first2=Ahna R. |last3=Akeley |first3=Kurt |last4=Banks |first4=Martin S. |title=Vergence–accommodation conflicts hinder visual performance and cause visual fatigue |journal=Journal of Vision |volume=8 |issue=3 |pages=33 |year=2008 |doi=10.1167/8.3.33 |access-date=2025-04-27}}</ref>
<ref name="Shibata2011">{{cite journal |last1=Shibata |first1=Takayuki |last2=Kim |first2=Joohwan |last3=Hoffman |first3=David M. |last4=Banks |first4=Martin S. |title=The zone of comfort: predicting visual discomfort with stereo displays |journal=Journal of Vision |volume=11 |issue=8 |pages=11 |year=2011 |doi=10.1167/11.8.11 |access-date=2025-04-27}}</ref>
<ref name="Kramida2016">{{cite journal |last=Kramida |first=Gregory |title=Resolving the vergence–accommodation conflict in head‑mounted displays |journal=IEEE Transactions on Visualization and Computer Graphics |volume=22 |issue=7 |pages=1912–1931 |year=2016 |doi=10.1109/TVCG.2016.2535300 |access-date=2025-04-27}}</ref>
<ref name="Koulieris2017">{{cite journal |last1=Koulieris |first1=George‑Alex |last2=Bui |first2=Bee |last3=Banks |first3=Martin S. |last4=Drettakis |first4=George |title=Accommodation and comfort in head‑mounted displays |journal=ACM Transactions on Graphics |volume=36 |issue=4 |pages=87 |year=2017 |doi=10.1145/3072959.3073622 |access-date=2025-04-27}}</ref>
<ref name="Zhou2021">{{cite journal |last1=Zhou |first1=Yujia |last2=Li |first2=Xuan |last3=Yuan |first3=Chang |title=Vergence–accommodation conflict in optical see‑through display: review and prospect |journal=Results in Optics |volume=5 |pages=100160 |year=2021 |doi=10.1016/j.rio.2021.100160 |access-date=2025-04-27}}</ref>
<ref name="Lin2022">{{cite journal |last1=Lin |first1=Chia‑Jung |last2=Chi |first2=Chien‑Fu |last3=Lin |first3=Chih‑Kang |last4=Chang |first4=En‑Chuan |title=Effects of virtual target size, position and parallax on vergence–accommodation conflict as estimated by actual gaze |journal=Scientific Reports |volume=12 |pages=20100 |year=2022 |doi=10.1038/s41598-022-24450-9 |access-date=2025-04-27}}</ref>
<ref name="ISO2015">{{cite report |title=ISO 9241‑392:2015 Ergonomics of human‑system interaction — Part 392: Reduction of visual fatigue from stereoscopic images |publisher=International Organization for Standardization |year=2015 |access-date=2025-04-27}}</ref>
<ref name="Dunn2017">{{cite journal |last1=Dunn |first1=Damon |last2=Tippets |first2=Caleb |last3=Torell |first3=Kevin |last4=Kellnhofer |first4=Philipp |last5=Akşit |first5=Kaan |last6=Didyk |first6=Piotr |last7=Myszkowski |first7=Karol |last8=Luebke |first8=David |last9=Fuchs |first9=Henry |title=Wide field‑of‑view varifocal near‑eye display using see‑through deformable membrane mirrors |journal=IEEE Transactions on Visualization and Computer Graphics |volume=23 |issue=4 |pages=1322–1331 |year=2017 |doi=10.1109/TVCG.2017.2657138 |access-date=2025-04-27}}</ref>
<ref name="MetaButterscotch2023">{{cite web |title=Demo or die: How Reality Labs’ display systems research team is tackling the biggest challenges in VR |url=https://www.meta.com/blog/reality-labs-research-display-systems-siggraph-2023-butterscotch-varifocal-flamera/ |website=Reality Labs Research Blog |publisher=Meta Platforms |date=2023-08-04 |access-date=2025-04-27}}</ref>
<ref name="Liu2024">{{cite journal |last1=Liu |first1=Yujin |last2=Cheng |first2=Dongli |last3=Wang |first3=Yuan |last4=Hua |first4=Hong |title=A varifocal augmented‑reality head‑up display using Alvarez freeform lenses |journal=Journal of the Society for Information Display |volume=32 |issue=4 |pages=231–240 |year=2024 |doi=10.1002/jsid.1286 |access-date=2025-04-27}}</ref>
<ref name="Wired2015">{{cite news |last=Zhang |first=Sarah |title=The obscure neuroscience problem that’s plaguing VR |work=Wired |date=2015-08-11 |url=https://www.wired.com/2015/08/obscure-neuroscience-problem-thats-plaguing-vr |access-date=2025-04-27}}</ref>
<ref name="Kreylos2014">{{cite web |last=Kreylos |first=Oliver |title=Accommodation and vergence in head‑mounted displays |website=Doc‑Ok.org |date=2014-04-13 |url=http://doc-ok.org/?p=1602 |access-date=2025-04-27}}</ref>
</references>


[[Category:Terms]]