{{see also|Terms|Technical Terms}}
'''Vergence-accommodation conflict''' ('''VAC'''), also known as '''accommodation-vergence conflict''' or sometimes '''accommodation-vergence mismatch''', is a visual and perceptual phenomenon that occurs when the [[Brain|brain]] receives mismatching cues between the distance to which the eyes are pointed or converged ([[Vergence|vergence]]) and the distance at which the eyes' lenses are focused ([[Accommodation (eye)|accommodation]]).<ref name="Hoffman2008">{{cite web |last=Hoffman |first=D. M. |last2=Girshick |first2=A. R. |last3=Akeley |first3=K. |last4=Banks |first4=M. S. |title=Vergence-Accommodation Conflicts Hinder Visual Performance and Cause Visual Fatigue |journal=Journal of Vision |volume=8 |issue=3 |pages=33 |url=https://jov.arvojournals.org/article.aspx?articleid=2192424 |year=2008}}</ref><ref name="Kreylos2014VAC">{{cite web |last=Kreylos |first=Oliver |title=Accommodation and Vergence in Head-mounted Displays |url=http://doc-ok.org/?p=1602 |website=Doc-Ok.org |date=2014-04-13}}</ref> Because natural viewing conditions tightly couple these two mechanisms, breaking that link is a primary cause of visual discomfort and performance issues in modern [[Virtual reality|virtual reality]] (VR), [[Augmented reality|augmented reality]] (AR), and other [[Stereoscopy|stereoscopic]] 3-D displays, including nearly all mainstream [[Head-Mounted Display|head-mounted displays]] (HMDs).<ref name="Hoffman2008" />

==Physiological Basis==
*  '''[[Vergence|Vergence]]''': The eyes rotate in opposite directions so that both lines of sight intersect at the target object, converging for near targets and diverging for far ones. This response is primarily driven by [[Binocular disparity|binocular disparity]].
*  '''[[Accommodation (eye)|Accommodation]]''': The [[Ciliary muscle|ciliary muscle]] adjusts the shape and thus the [[Optical power|optical power]] of the [[Crystalline lens|crystalline lens]] within each eye to bring the image of the target object into sharp focus on the [[Retina|retina]]. This response is primarily driven by retinal blur.

In natural vision, these two systems are tightly linked through fast, reciprocal neurological signals known as the [[Accommodation reflex|accommodation-vergence reflex]].<ref name="Kreylos2014VAC" /><ref name="Kramida2016">{{cite web |last=Kramida |first=G. |title=Resolving the Vergence-Accommodation Conflict in Head-Mounted Displays |journal=IEEE Transactions on Visualization and Computer Graphics |volume=22 |issue=7 |pages=1912-1921 |url=https://ieeexplore.ieee.org/document/7296633 |year=2016}}</ref> This coupling ensures that the eyes focus at the same distance they are pointed, allowing for clear, comfortable, and efficient vision. Stereoscopic displays disrupt this natural coupling because binocular disparity cues drive the vergence system to the ''simulated'' depth of a virtual object, while the accommodation system is driven by blur cues to focus on the ''physical'' display surface, which is typically at a fixed optical distance.<ref name="Kramida2016" />

==Causes / Occurrence in Display Technologies==
The vergence-accommodation conflict is inherent in display technologies where the perceived depth of content differs from the physical or optical distance of the display surface:

*  '''Fixed-focus HMDs''': Nearly all consumer VR and many AR headsets use internal display screens (like OLED or LCD) viewed through lenses. These lenses create a [[virtual image]] of the screens, making them appear to be located at a fixed focal distance, typically between 1.3 and 2 meters (though this varies).<ref name="Kreylos2013HMD">{{cite web |last=Kreylos |first=Oliver |title=Head-mounted Displays and Lenses |url=http://doc-ok.org/?p=1360 |website=Doc-Ok.org |date=2013-07-24}}</ref> Consequently, viewers must accommodate (focus their eyes) to this fixed plane to see a sharp image, regardless of the perceived depth of virtual objects. However, stereoscopic rendering creates virtual objects that appear at various depths by presenting slightly different images to each eye, requiring the viewer's eyes to converge or diverge (vergence). Objects rendered virtually nearer than the fixed focal plane induce a ''positive VAC'' (eyes converge more than they accommodate), while objects rendered virtually farther induce a ''negative VAC'' (eyes converge less than they accommodate).<ref name="Shibata2011">{{cite web |last=Shibata |first=T. |last2=Kim |first2=J. |last3=Hoffman |first3=D. M. |last4=Banks |first4=M. S. |title=The Zone of Comfort: Predicting Visual Discomfort With Stereo Displays |journal=Journal of Vision |volume=11 |issue=8 |pages=11 |url=https://jov.arvojournals.org/article.aspx?articleid=2192969 |year=2011}}</ref> A short numerical sketch of this sign convention follows this list.
*  '''[[3D television|3D Cinema and Television]]''': VAC also occurs here, but symptoms are often milder. The screen is typically farther away, the [[Field of view|field of view]] is smaller, and content creators can limit disparities to keep virtual objects within a "zone of comfort" relative to the screen distance.<ref name="ISO2015">ISO 9241-392:2015, Ergonomics of human-system interaction — Part 392: Ergonomic requirements for the reduction of visual fatigue from stereoscopic images. International Organization for Standardization, 2015.</ref>
*  '''[[Optical see-through display|Optical See-Through (OST) AR]]''': In OST AR glasses, virtual images (often at a fixed focus) are overlaid onto the real world. This creates a conflict not only between vergence and accommodation for virtual objects but also a potential mismatch between focusing on real-world objects at various distances and the fixed focus of the virtual overlay. This can introduce depth discontinuities, reduce the perceived registration accuracy of virtual objects, and cause discomfort.<ref name="Zhou2021">{{cite web |last=Zhou |first=Y. |last2=Li |first2=X. |last3=Yuan |first3=C. |title=Vergence-Accommodation Conflict in Optical See-Through Display: Review and Prospect |journal=Results in Optics |volume=5 |pages=100160 |url=https://www.sciencedirect.com/science/article/pii/S266717002100160X |year=2021}}</ref>
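The positive/negative sign convention above can be checked with a short calculation. The following Python sketch is illustrative only; the 1.5 m focal distance and the helper name are assumptions chosen for the example, not parameters of any particular headset.

<syntaxhighlight lang="python">
# Illustrative sketch: sign of the vergence-accommodation conflict in a
# fixed-focus HMD. The 1.5 m focal distance is an assumed example value,
# not a parameter of any specific headset.

def signed_vac_diopters(vergence_distance_m: float, focal_distance_m: float = 1.5) -> float:
    """Positive result: object nearer than the focal plane (positive VAC).
    Negative result: object farther than the focal plane (negative VAC)."""
    return 1.0 / vergence_distance_m - 1.0 / focal_distance_m

for distance_m in (0.5, 1.5, 10.0):
    vac = signed_vac_diopters(distance_m)
    kind = "positive" if vac > 0 else "negative" if vac < 0 else "no"
    print(f"object at {distance_m:>4} m -> {vac:+.2f} D ({kind} VAC)")
</syntaxhighlight>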


==Effects and Symptoms==
The sustained conflict between vergence and accommodation forces the visual system and brain to work harder, potentially leading to a range of negative effects:<ref name="Kramida2016" /><ref name="Hoffman2008" />
*  '''[[Visual fatigue]] / Eyestrain''': Tired, aching, or burning eyes resulting from the prolonged effort to resolve conflicting cues.
*  '''[[Headache]]s'''.
*  '''Focusing Problems''': Difficulty rapidly refocusing between virtual objects at different apparent depths because the natural reflex is disrupted. Users may also experience lingering focus issues or unusual visual sensations after removing the HMD.
*  '''[[Virtual Reality Sickness|VR Sickness]] / Discomfort''': VAC is considered a significant contributor to symptoms like nausea, dizziness, and general discomfort associated with VR/AR use.
*  '''Reduced Visual Performance''': Measurable degradation in tasks requiring fine depth judgments, reduced reading speed, slower visuomotor reaction times, and increased time required to fuse binocular images.<ref name="Hoffman2008" /><ref name="Lin2022">{{cite web |last=Lin |first=C-J. |last2=Chi |first2=C-F. |last3=Lin |first3=C-K. |last4=Chang |first4=E-C. |title=Effects of Virtual Target Size, Position and Parallax on Vergence-Accommodation Conflict as Estimated by Actual Gaze |journal=Scientific Reports |volume=12 |pages=20100 |url=https://www.nature.com/articles/s41598-022-24450-9 |year=2022}}</ref>
*  '''[[Focal Rivalry]]''': Particularly in AR, the conflict between focusing on a real-world object and a virtual object projected at a different focal distance can make it difficult or impossible to see both sharply simultaneously.

==Measurement==
VAC can be quantified by comparing the optical power (measured in [[Diopter|diopters]], D, which is the reciprocal of distance in meters) required for accommodation versus the optical power corresponding to the vergence distance.<ref name="Kramida2016" />
*  `VAC (Diopters) = | (1 / Accommodation Distance (m)) - (1 / Vergence Distance (m)) |`
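Expressed as code, the formula becomes a one-line helper. This is a minimal illustrative sketch; distances are assumed to be given in meters.

<syntaxhighlight lang="python">
def vac_diopters(accommodation_distance_m: float, vergence_distance_m: float) -> float:
    """Magnitude of the vergence-accommodation conflict in diopters (D)."""
    return abs(1.0 / accommodation_distance_m - 1.0 / vergence_distance_m)

# A display focused at 2 m showing an object whose stereoscopic depth is 0.5 m:
print(vac_diopters(2.0, 0.5))  # |0.5 - 2.0| = 1.5 D
</syntaxhighlight>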


#  '''Avoid Rapid Depth Changes''': Avoid sudden disparity jumps (> 1 D) or rapid oscillations in depth for prominent objects. Allow the visual system time (at least 500 ms) to adjust to significant depth changes.<ref name="Kramida2016" />
#  '''Optimize UI Placement''': Present user interface elements, text, and critical information at or slightly behind the display's native focal plane where VAC is zero or negative (which is generally better tolerated).<ref name="Shibata2011" /> A minimal depth-budget check along these lines is sketched after this list.
#  '''Simulate Blur''': When hardware cannot provide correct focus cues, incorporate [[Gaze-contingent display|gaze-contingent]] [[Depth of field|depth-of-field]] rendering (simulating blur for objects not being looked at) to provide [[Monocular cues|monocular]] depth information that aligns better with vergence, potentially reducing cue conflicts.<ref name="Koulieris2017">{{cite web |last=Koulieris |first=G-A. |last2=Buhler |first2=K. |last3=Drettakis |first3=G. |last4=Banks |first4=M. S. |title=Accommodation and Comfort in Head-Mounted Displays |booktitle=ACM SIGGRAPH 2017 Courses |url=https://dl.acm.org/doi/10.1145/3084873.3084901 |year=2017}}</ref>
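Guidelines such as these can also be enforced programmatically at authoring time or at runtime. The sketch below clamps a requested content depth so that its conflict with the display's focal plane stays within a diopter budget; the 0.5 D default and the function name are illustrative assumptions (published comfort zones are asymmetric and depend on viewing distance), not values taken from the cited studies.

<syntaxhighlight lang="python">
# Illustrative sketch: clamp a requested content depth so that its conflict
# with the display's focal plane stays within a diopter budget.
# The 0.5 D default budget is an assumption for the example; published
# comfort zones are asymmetric and depend on the focal distance.

def clamp_to_depth_budget(requested_depth_m: float,
                          focal_distance_m: float,
                          budget_diopters: float = 0.5) -> float:
    """Return a depth (m) whose vergence demand stays within the budget
    of the display's fixed accommodation demand."""
    focal_power = 1.0 / focal_distance_m          # accommodation demand (D)
    requested_power = 1.0 / requested_depth_m     # vergence demand (D)
    clamped_power = min(max(requested_power, focal_power - budget_diopters),
                        focal_power + budget_diopters)
    return 1.0 / clamped_power

# A UI panel requested at 0.4 m in front of a 1.5 m focal plane is pushed back:
print(round(clamp_to_depth_budget(0.4, 1.5), 2))  # ~0.86 m
</syntaxhighlight>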


===Technological Solutions===
|-
! [[Varifocal display|Varifocal]]
| [[Eye tracking|Eye-tracking]] determines the user's gaze depth, and the display system adjusts a single focal plane to match that depth using [[Tunable lens|tunable lenses]] (for example liquid crystal, liquid lens, Alvarez) or mechanically moving components (screen or lens).
| Meta Reality Labs Butterscotch Varifocal (2023);<ref name="DisplayDaily2023">{{cite web |title=Meta’s Going to SIGGRAPH 2023 and Showing Flamera and Butterscotch VR Technologies |url=https://displaydaily.com/metas-going-to-siggraph-2023-and-showing-flamera-and-butterscotch-vr-technologies/ |website=Display Daily |date=2023-08-04}}</ref> UNC Wide-FOV deformable-mirror NED.<ref name="Dunn2017">{{cite web |last=Dunn |first=D. |last2=Tippets |first2=C. |last3=Torell |first3=K. |last4=Kellnhofer |first4=P. |last5=Akşit |first5=K. |last6=Didyk |first6=P. |last7=Myszkowski |first7=K. |last8=Luebke |first8=D. |last9=Fuchs |first9=H. |title=Wide Field-of-View Varifocal Near-Eye Display Using See-Through Deformable Membrane Mirrors |journal=IEEE Transactions on Visualization and Computer Graphics (TVCG) |volume=23 |issue=4 |pages=1411-1420 |url=https://ieeexplore.ieee.org/document/7850947 |year=2017}}</ref>
| Delivers correct focus cue at the depth of fixation. Challenges include eye-tracking latency and accuracy, depth switching speed, limited depth range, and potentially incorrect blur cues for objects not at the fixation depth.<ref name="UNC2019">{{cite web |title=Dynamic Focus Augmented Reality Display |url=https://telepresence.web.unc.edu/research/dynamic-focus-augmented-reality-display/ |website=UNC Graphics and Virtual Reality Group |year=2019}}</ref> A gaze-depth estimation sketch follows the table.
|-
! [[Multifocal display|Multifocal / Multiplane]]
| Presents images on several fixed focal planes simultaneously (for example using stacked LCDs or beam splitters) or time-sequentially. Content is rendered on the plane closest to its virtual depth.
| Stanford light-field HMD research;<ref name="Wired2015">{{cite web |last=Zhang |first=S. |title=The Obscure Neuroscience Problem That’s Plaguing VR |url=https://www.wired.com/2015/08/obscure-neuroscience-problem-thats-plaguing-vr/ |website=Wired |date=2015-08-11}}</ref> Magic Leap 1 (2 planes).
| Provides more correct focus cues across multiple depths simultaneously without necessarily requiring eye-tracking. Challenges include complexity, cost, reduced brightness/contrast, potential visible transitions between planes, and limited number of planes.
|-
! [[Light field display|Light Field]]
|-
! [[Retinal projection|Retinal Projection / Scanning]]
| Scans modulated light (often laser) directly onto the retina, potentially creating an image that is always in focus regardless of the eye's accommodation state (Maxwellian view).
| Research systems; formerly North Focals (acquired by Google).
| Can bypass VAC by eliminating the need for accommodation. Challenges include small [[Eyebox|eyebox]], potential for visual artifacts (for example [[Floater|floaters]] becoming more visible), safety concerns, and achieving high resolution/FOV.
|-
! Emerging Optics
| Novel optical components like Alvarez freeform lenses,<ref name="Liu2024">{{cite web |last=Liu |first=Y. |last2=Cheng |first2=D. |last3=Wang |first3=Y. |last4=Hua |first4=H. |title=A Varifocal Augmented-Reality Head-Up Display Using Alvarez Freeform Lenses |journal=Journal of the Society for Information Display |volume=32 |issue=4 |pages=310-320 |url=https://onlinelibrary.wiley.com/doi/abs/10.1002/jsid.1286 |year=2024}}</ref> tunable fluidic lenses, and deformable membranes are being explored for compact, low-power varifocal or multifocal elements.
| Primarily research stage.
| Aim for integration into smaller form factors. Manufacturing challenges, response time, optical quality, and control complexity remain active research areas.
|}
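As the varifocal row above notes, such systems depend on an estimate of the user's gaze depth to drive the tunable optics. A common starting point is to triangulate where the two gaze rays cross. The Python sketch below is a simplified horizontal-plane version; the 63 mm interpupillary distance and the angle convention (positive meaning rotation toward the nose) are illustrative assumptions, not the method of any specific eye tracker.

<syntaxhighlight lang="python">
import math

# Illustrative sketch: estimate the vergence (gaze) depth from horizontal
# eye rotation angles, then the focal power a varifocal element would target.
# The 63 mm IPD and the angle convention are assumptions for the example.

def vergence_depth_m(left_angle_deg: float, right_angle_deg: float,
                     ipd_m: float = 0.063) -> float:
    """Depth (m) at which the two horizontal gaze rays cross."""
    spread = math.tan(math.radians(left_angle_deg)) + math.tan(math.radians(right_angle_deg))
    if spread <= 0.0:
        return float("inf")  # parallel or diverging gaze: treat as optical infinity
    return ipd_m / spread

depth = vergence_depth_m(3.0, 3.0)   # each eye rotated 3 degrees inward
print(f"gaze depth ~ {depth:.2f} m, varifocal target ~ {1.0 / depth:.2f} D")
</syntaxhighlight>

In practice the raw crossing point is noisy, which is one reason eye-tracking latency and accuracy appear among the challenges listed in the table.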




==Current Research Frontiers==
*  '''High-Resolution Varifocal Displays''': Prototypes like Meta’s Butterscotch demonstrate progress towards retinal resolution (for example 60 pixels per degree) combined with reasonably fast depth switching, suggesting potential commercial viability.<ref name="DisplayDaily2023" />
*  '''Focus-Correct Passthrough AR''': Integrating varifocal or multifocal optics into [[Video passthrough|video-see-through]] AR systems to correctly render both real-world and virtual imagery at appropriate focal depths.<ref name="UNC2019" />
*  '''Standards and Health Implications''': Ongoing work by standards bodies (for example ISO TC159, IEC TC100) to develop guidelines for extended VR/AR use, particularly concerning children and workplace applications.
*  '''Perceptual Modeling''': Research using large-sample studies to better understand individual variability in the accommodation-vergence relationship, potentially enabling personalized comfort settings or adaptive display parameters.<ref name="Lin2022" />




[[Category:Terms]]
[[Category:Technical Terms]]
[[Category:Vision]]
[[Category:Physiology]]