===[[Binocular Disparity]] (Stereopsis)===
Because the two eyes are horizontally separated (by the [[interpupillary distance]], or IPD, typically around 6-7 cm), they receive slightly different images of the world. This difference in the image location of an object seen by the left and right eyes is called '''binocular disparity'''. The brain's visual cortex processes this disparity to generate the perception of depth, a phenomenon known as '''[[stereopsis]]'''. <ref name="BlakeWilson2011">Blake, R., & Wilson, H. R. (2011). Binocular vision. *Vision Research, 51*(7), 754-770. doi:10.1016/j.visres.2010.10.009</ref> <ref name="ParkerStereo2007">Parker, Andrew J. (2007). Binocular depth perception and the cerebral cortex. *Nature Reviews Neuroscience, 8*(5), 379-391.</ref> VR headsets exploit this by presenting a separate image with the correct perspective offset to each eye, simulating the natural disparity an observer would experience. It is an especially powerful depth cue for near to mid-range distances. <ref name="HITLCues1"/>
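
The geometry behind this cue can be illustrated with a small calculation. The sketch below is illustrative only and is not drawn from any cited source; the 63 mm IPD and the specific viewing distances are assumed example values. It approximates the vergence angle subtended by a point and the relative disparity between two points at different depths, showing why the cue weakens rapidly with distance.

<syntaxhighlight lang="python">
import math

def vergence_angle(distance_m, ipd_m=0.063):
    """Angle (in radians) between the two eyes' lines of sight when fixating
    a point straight ahead at the given distance."""
    return 2.0 * math.atan(ipd_m / (2.0 * distance_m))

def binocular_disparity(fixation_m, target_m, ipd_m=0.063):
    """Relative disparity (in radians) of a target while the eyes fixate a point
    at fixation_m; positive values mean the target is nearer than fixation."""
    return vergence_angle(target_m, ipd_m) - vergence_angle(fixation_m, ipd_m)

# Fixating at 2 m, a target at 1 m produces roughly 1.8 degrees of disparity,
# while a 20 m vs 21 m separation yields only about 0.009 degrees, which is why
# stereopsis is most informative at near to mid range.
print(math.degrees(binocular_disparity(2.0, 1.0)))
print(math.degrees(binocular_disparity(21.0, 20.0)))
</syntaxhighlight>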


===[[Convergence]] (Vergence)===
===Physiological Monocular Cues===
====[[Accommodation]]====
This refers to the automatic adjustment of the eye's [[lens (anatomy)|lens]] focus to maintain a clear image (retinal focus) of an object as its distance changes. The [[ciliary muscle]] controls the lens shape; the muscular tension or effort involved provides the brain with a cue to the object's distance. <ref name="CuttingVishton1995">Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (Eds.), *Handbook of perception and cognition: Vol. 5. Perception of space and motion* (pp. 69-117). Academic Press.</ref> <ref name="FisherAccommodation1988">Fisher, Scott K., and Kenneth J. Ciuffreda. (1988). Accommodation and apparent distance. *Perception, 17*(5), 609-621.</ref> This cue is primarily effective for objects within approximately 2 meters and is relatively weak compared to other cues, often working in conjunction with them. <ref name="HITLCues2">(2014-06-20) Accommodation and Convergence - Human Interface Technology Laboratory. Retrieved April 25, 2025, from https://www.hitl.washington.edu/projects/knowledge-base/virtual-worlds/EVE/III.A.1.a.AccommodationConvergence.html</ref> <ref name="HITLCues1"/>
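
Accommodative demand is conventionally expressed in diopters, the reciprocal of the viewing distance in meters. A minimal sketch (the distances below are made-up example values) shows how the demand changes steeply at near range and flattens beyond roughly 2 meters, consistent with the limited useful range of this cue.

<syntaxhighlight lang="python">
def accommodative_demand_diopters(distance_m):
    """Accommodative demand in diopters is the reciprocal of viewing distance in meters."""
    return 1.0 / distance_m

# The demand drops from 4 D at 25 cm to 0.5 D at 2 m and barely changes beyond that,
# which is one reason accommodation is mainly informative for nearby objects.
for d in (0.25, 0.5, 1.0, 2.0, 6.0):
    print(f"{d:>5} m -> {accommodative_demand_diopters(d):.2f} D")
</syntaxhighlight>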


===Pictorial (Static) Monocular Cues===


====[[Motion Parallax]]====
As an observer moves their head or body, objects at different distances move at different apparent speeds across the visual field. Closer objects appear to move faster and in the opposite direction relative to the observer's movement compared to more distant objects, which appear to move slower and potentially in the same direction. <ref name="Gibson1950"/> <ref name="RogersMotionParallax1979">Rogers, Brian, and Maureen Graham. (1979). Motion parallax as an independent cue for depth perception. *Perception, 8*(2), 125-134.</ref> For example, when looking out the side window of a moving car, nearby posts zip by while distant trees move slowly. This is a powerful depth cue, effectively utilized in VR/AR systems through [[head tracking]]. <ref name="HITLCues1"/> <ref name="ScienceLearnParallax">Depth perception. Science Learning Hub - Pokapū Akoranga Pūtaiao. Retrieved April 25, 2025, from https://www.sciencelearn.org.nz/resources/107-depth-perception</ref>
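
The strength of this cue can be approximated with simple geometry: for an object viewed roughly at right angles to the direction of travel, its angular speed across the retina is about the observer's speed divided by the object's distance. The sketch below (the speeds and distances are invented example values) reproduces the car-window example numerically.

<syntaxhighlight lang="python">
import math

def parallax_angular_speed_deg(observer_speed_mps, distance_m):
    """Approximate angular speed (degrees per second) of a stationary object viewed
    at right angles to the direction of travel; closer objects sweep across the
    retina faster."""
    return math.degrees(observer_speed_mps / distance_m)

# At roughly highway speed (30 m/s), a fence post 5 m away streaks by at about
# 344 deg/s while a tree line 500 m away drifts at about 3.4 deg/s.
print(parallax_angular_speed_deg(30.0, 5.0))
print(parallax_angular_speed_deg(30.0, 500.0))
</syntaxhighlight>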


====[[Kinetic Depth Effect]]====
When a rigid, unfamiliar object rotates, the resulting changes in its two-dimensional projection onto the retina provide information about its three-dimensional structure. <ref name="WallachOConnell1953">Wallach, H., & O'Connell, D. N. (1953). The kinetic depth effect. *Journal of Experimental Psychology, 45*(4), 205-217. doi:10.1037/h0058000</ref>


====[[Ocular Parallax]]====


====The [[Vergence-Accommodation Conflict]] (VAC)====
A major limitation in most current VR/AR displays is the mismatch between vergence and accommodation cues. Most headsets use [[fixed-focus display]]s, meaning the optics present the virtual image at a fixed focal distance (often 1.5-2 meters or optical infinity), regardless of the simulated distance of the virtual object. <ref name="ARInsiderVAC">(2024-01-29) Understanding Vergence-Accommodation Conflict in AR/VR Headsets - AR Insider. Retrieved April 25, 2025, from https://arinsider.co/2024/01/29/understanding-vergence-accommodation-conflict-in-ar-vr-headsets/</ref> <ref name="WikiVAC">Vergence-accommodation conflict - Wikipedia. Retrieved April 25, 2025, from https://en.wikipedia.org/wiki/Vergence-accommodation_conflict</ref> <ref name="DeliverContactsFocus">(2024-07-18) Exploring the Focal Distance in VR Headsets - Deliver Contacts. Retrieved April 25, 2025, from https://delivercontacts.com/blog/exploring-the-focal-distance-in-vr-headsets</ref> While the user's eyes converge appropriately for the virtual object's simulated distance (for example 0.5 meters), their eyes must maintain focus (accommodate) at the fixed optical distance of the display itself to keep the image sharp. This mismatch between the distance signaled by vergence and the distance signaled by accommodation is known as the '''[[vergence-accommodation conflict]]''' (VAC). <ref name="HoffmanVAC2008">Hoffman, D. M., Girshick, A. R., Akeley, K., & Banks, M. S. (2008). Vergence-accommodation conflicts hinder visual performance and cause visual fatigue. *Journal of Vision, 8*(3), 33. doi:10.1167/8.3.33</ref> <ref name="FacebookVAC2019">Facebook Research. (2019, March 28). *Vergence-Accommodation Conflict: Facebook Research Explains Why Varifocal Matters For Future VR*. YouTube. [https://www.youtube.com/watch?v=YWA4gVibKJE]</ref> <ref name="KramidaVAC2016">Kramida, Gregory. (2016). Resolving the vergence-accommodation conflict in head-mounted displays. *IEEE transactions on visualization and computer graphics, 22*(7), 1912-1931.</ref>
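
The size of the conflict is commonly quantified in diopters as the difference between the reciprocal of the vergence distance and the reciprocal of the display's focal distance. The following minimal sketch (the 1.5 m focal plane and 0.5 m object distance are example values from the scenario above, not properties of any particular headset) computes that mismatch.

<syntaxhighlight lang="python">
def vac_magnitude_diopters(vergence_distance_m, focal_distance_m):
    """Mismatch, in diopters, between the distance the eyes converge to and the
    fixed distance at which the display optics force them to focus."""
    return abs(1.0 / vergence_distance_m - 1.0 / focal_distance_m)

# Virtual object rendered at 0.5 m on a headset with a 1.5 m focal plane:
print(vac_magnitude_diopters(0.5, 1.5))   # about 1.33 D of conflict
# The same object rendered at the focal distance itself produces no conflict:
print(vac_magnitude_diopters(1.5, 1.5))   # 0.0 D
</syntaxhighlight>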


The VAC forces the brain to deal with conflicting depth information, potentially leading to several issues:
*'''[[Multifocal Displays]] (Multi-Plane Displays)''': Instead of a single, continuously adjusting focus, these displays present content on multiple discrete focal planes simultaneously or in rapid succession. <ref name="AkeleyMultifocal2004">Akeley, Kurt, Watt, S. J., Girshick, A. R., & Banks, M. S. (2004). A stereo display prototype with multiple focal distances. *ACM transactions on graphics (TOG), 23*(3), 804-813.</ref> The visual system can then accommodate to the plane closest to the target object's depth. Examples include stacked display panels or systems using switchable lenses. Magic Leap 1 used a two-plane system. <ref name="ARInsiderVAC"/> While reducing VAC, they can still exhibit quantization effects if an object lies between planes (see the sketch after this list), and complexity increases with the number of planes.


*'''[[Light Field Displays]]''': These displays aim to reconstruct the [[light field]] of a scene, the distribution of light rays in space, more completely. By emitting rays with the correct origin and direction, they allow the viewer's eye to naturally focus at different depths within the virtual scene, as if viewing a real 3D environment. <ref name="WetzsteinLightField2011">Wetzstein, Gordon, et al. (2011). Computational plenoptic imaging. *Computer Graphics Forum, 30*(8), 2397-2426.</ref> <ref name="Lanman2013">Lanman, D., & Luebke, D. (2013). Near-eye light field displays. *ACM Transactions on Graphics (TOG), 32*(6), 1-10. doi:10.1145/2508363.2508366</ref> This can potentially solve the VAC without requiring eye tracking. However, generating the necessary dense light fields poses significant computational and hardware challenges, often involving trade-offs between resolution, field of view, and form factor. <ref name="ARInsiderVAC"/> Companies like CREAL are developing light field modules for AR/VR. <ref name="WikiVAC"/>


*'''[[Holographic Displays]]''': True [[holography|holographic]] displays aim to reconstruct the wavefront of light from the virtual scene using diffraction, which would inherently provide all depth cues, including accommodation, correctly and continuously. <ref name="MaimoneHolo2017">Maimone, A., Georgiou, A., & Kollin, J. S. (2017). Holographic near-eye displays for virtual and augmented reality. *ACM Transactions on Graphics (TOG), 36*(4), 1-16. doi:10.1145/3072959.3073610</ref> This is often considered an ultimate goal for visual displays. However, current implementations suitable for near-eye displays face major challenges in computational load, achievable [[field of view]], image quality (for example [[speckle noise]]), and component size. <ref name="MaimoneHolo2017"/> <ref name="ARInsiderVAC"/>
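
As a rough illustration of the quantization issue mentioned for multifocal displays above, the sketch below assigns each virtual object to whichever focal plane is nearest in diopter space and reports the residual focus error. The plane distances are invented example values, not the planes of any specific product.

<syntaxhighlight lang="python">
def nearest_focal_plane(object_distance_m, plane_distances_m):
    """Pick the focal plane closest to the object in diopter space and return
    (chosen plane distance in meters, residual focus error in diopters)."""
    obj_diopters = 1.0 / object_distance_m
    best = min(plane_distances_m, key=lambda p: abs(1.0 / p - obj_diopters))
    return best, abs(1.0 / best - obj_diopters)

# Hypothetical two-plane display with focal planes at 0.75 m and 3.0 m:
planes = (0.75, 3.0)
for obj in (0.4, 0.75, 1.2, 2.0, 10.0):
    plane, err = nearest_focal_plane(obj, planes)
    print(f"object at {obj} m -> plane at {plane} m, residual {err:.2f} D")
</syntaxhighlight>
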
<ref name="HowardRogers1995">Howard, Ian P., and Brian J. Rogers. (1995). *Binocular vision and stereopsis*. Oxford University Press.</ref>
<ref name="HowardRogers1995">Howard, Ian P., and Brian J. Rogers. (1995). *Binocular vision and stereopsis*. Oxford University Press.</ref>
<ref name="HITLCues1">(2014-06-20) Visual Depth Cues - Human Interface Technology Laboratory. Retrieved April 25, 2025, from https://www.hitl.washington.edu/projects/knowledge-base/virtual-worlds/EVE/III.A.1.b.VisualDepthCues.html</ref>
<ref name="HITLCues1">(2014-06-20) Visual Depth Cues - Human Interface Technology Laboratory. Retrieved April 25, 2025, from https://www.hitl.washington.edu/projects/knowledge-base/virtual-worlds/EVE/III.A.1.b.VisualDepthCues.html</ref>
<ref name="BlakeWilson2011">Blake, R., & Wilson, H. R. (2011). Binocular vision. *Vision Research, 51*(7), 754–770. doi:10.1016/j.visres.2010.10.009</ref>
<ref name="BlakeWilson2011">Blake, R., & Wilson, H. R. (2011). Binocular vision. *Vision Research, 51*(7), 754-770. doi:10.1016/j.visres.2010.10.009</ref>
<ref name="ParkerStereo2007">Parker, Andrew J. (2007). Binocular depth perception and the cerebral cortex. *Nature Reviews Neuroscience, 8*(5), 379-391.</ref>
<ref name="ParkerStereo2007">Parker, Andrew J. (2007). Binocular depth perception and the cerebral cortex. *Nature Reviews Neuroscience, 8*(5), 379-391.</ref>
<ref name="WattFocusCues2005">Watt, Simon J., Auld, W. S., & Binnie, R. G. (2005). Focus cues affect perceived depth. *Journal of vision, 5*(10), 834-862.</ref>
<ref name="WattFocusCues2005">Watt, Simon J., Auld, W. S., & Binnie, R. G. (2005). Focus cues affect perceived depth. *Journal of vision, 5*(10), 834-862.</ref>
<ref name="CuttingVishton1995">Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (Eds.), *Handbook of perception and cognition: Vol. 5. Perception of space and motion* (pp. 69–117). Academic Press.</ref>
<ref name="CuttingVishton1995">Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (Eds.), *Handbook of perception and cognition: Vol. 5. Perception of space and motion* (pp. 69-117). Academic Press.</ref>
<ref name="FisherAccommodation1988">Fisher, Scott K., and Kenneth J. Ciuffreda. (1988). Accommodation and apparent distance. *Perception, 17*(5), 609-621.</ref>
<ref name="FisherAccommodation1988">Fisher, Scott K., and Kenneth J. Ciuffreda. (1988). Accommodation and apparent distance. *Perception, 17*(5), 609-621.</ref>
<ref name="HITLCues2">(2014-06-20) Accommodation and Convergence - Human Interface Technology Laboratory. Retrieved April 25, 2025, from https://www.hitl.washington.edu/projects/knowledge-base/virtual-worlds/EVE/III.A.1.a.AccommodationConvergence.html</ref>
<ref name="HITLCues2">(2014-06-20) Accommodation and Convergence - Human Interface Technology Laboratory. Retrieved April 25, 2025, from https://www.hitl.washington.edu/projects/knowledge-base/virtual-worlds/EVE/III.A.1.a.AccommodationConvergence.html</ref>
Line 167: Line 167:
<ref name="FryFog1976">Fry, Glenn A., Kerr, K. E., Trezona, P. W., & Westerberg, C. F. (1976). The effect of fog on the perception of distance. *Human Factors, 18*(4), 342-346.</ref>
<ref name="FryFog1976">Fry, Glenn A., Kerr, K. E., Trezona, P. W., & Westerberg, C. F. (1976). The effect of fog on the perception of distance. *Human Factors, 18*(4), 342-346.</ref>
<ref name="RogersMotionParallax1979">Rogers, Brian, and Maureen Graham. (1979). Motion parallax as an independent cue for depth perception. *Perception, 8*(2), 125-134.</ref>
<ref name="RogersMotionParallax1979">Rogers, Brian, and Maureen Graham. (1979). Motion parallax as an independent cue for depth perception. *Perception, 8*(2), 125-134.</ref>
<ref name="ScienceLearnParallax">Depth perception. Science Learning Hub Pokapū Akoranga Pūtaiao. Retrieved April 25, 2025, from https://www.sciencelearn.org.nz/resources/107-depth-perception</ref>
<ref name="ScienceLearnParallax">Depth perception. Science Learning Hub - Pokapū Akoranga Pūtaiao. Retrieved April 25, 2025, from https://www.sciencelearn.org.nz/resources/107-depth-perception</ref>
<ref name="WallachOConnell1953">Wallach, H., & O'Connell, D. N. (1953). The kinetic depth effect. *Journal of Experimental Psychology, 45*(4), 205–217. doi:10.1037/h0058000</ref>
<ref name="WallachOConnell1953">Wallach, H., & O'Connell, D. N. (1953). The kinetic depth effect. *Journal of Experimental Psychology, 45*(4), 205-217. doi:10.1037/h0058000</ref>
<ref name="KudoOcularParallax1988">Kudo, Hiromi, and Hirohiko Ono. (1988). Depth perception, ocular parallax, and stereopsis. *Perception, 17*(4), 473-480.</ref>
<ref name="KudoOcularParallax1988">Kudo, Hiromi, and Hirohiko Ono. (1988). Depth perception, ocular parallax, and stereopsis. *Perception, 17*(4), 473-480.</ref>
<ref name="ARInsiderVAC">(2024-01-29) Understanding Vergence-Accommodation Conflict in AR/VR Headsets - AR Insider. Retrieved April 25, 2025, from https://arinsider.co/2024/01/29/understanding-vergence-accommodation-conflict-in-ar-vr-headsets/</ref>
<ref name="ARInsiderVAC">(2024-01-29) Understanding Vergence-Accommodation Conflict in AR/VR Headsets - AR Insider. Retrieved April 25, 2025, from https://arinsider.co/2024/01/29/understanding-vergence-accommodation-conflict-in-ar-vr-headsets/</ref>
<ref name="WikiVAC">Vergence-accommodation conflict - Wikipedia. Retrieved April 25, 2025, from https://en.wikipedia.org/wiki/Vergence-accommodation_conflict</ref>
<ref name="WikiVAC">Vergence-accommodation conflict - Wikipedia. Retrieved April 25, 2025, from https://en.wikipedia.org/wiki/Vergence-accommodation_conflict</ref>
<ref name="DeliverContactsFocus">(2024-07-18) Exploring the Focal Distance in VR Headsets - Deliver Contacts. Retrieved April 25, 2025, from https://delivercontacts.com/blog/exploring-the-focal-distance-in-vr-headsets</ref>
<ref name="DeliverContactsFocus">(2024-07-18) Exploring the Focal Distance in VR Headsets - Deliver Contacts. Retrieved April 25, 2025, from https://delivercontacts.com/blog/exploring-the-focal-distance-in-vr-headsets</ref>
<ref name="HoffmanVAC2008">Hoffman, D. M., Girshick, A. R., Akeley, K., & Banks, M. S. (2008). Vergence–accommodation conflicts hinder visual performance and cause visual fatigue. *Journal of Vision, 8*(3), 33. doi:10.1167/8.3.33</ref>
<ref name="HoffmanVAC2008">Hoffman, D. M., Girshick, A. R., Akeley, K., & Banks, M. S. (2008). Vergence-accommodation conflicts hinder visual performance and cause visual fatigue. *Journal of Vision, 8*(3), 33. doi:10.1167/8.3.33</ref>
<ref name="FacebookVAC2019">Facebook Research. (2019, March 28). *Vergence-Accommodation Conflict: Facebook Research Explains Why Varifocal Matters For Future VR*. YouTube. [https://www.youtube.com/watch?v=YWA4gVibKJE]</ref>
<ref name="FacebookVAC2019">Facebook Research. (2019, March 28). *Vergence-Accommodation Conflict: Facebook Research Explains Why Varifocal Matters For Future VR*. YouTube. [https://www.youtube.com/watch?v=YWA4gVibKJE]</ref>
<ref name="KramidaVAC2016">Kramida, Gregory. (2016). Resolving the vergence-accommodation conflict in head-mounted displays. *IEEE transactions on visualization and computer graphics, 22*(7), 1912-1931.</ref>
<ref name="KramidaVAC2016">Kramida, Gregory. (2016). Resolving the vergence-accommodation conflict in head-mounted displays. *IEEE transactions on visualization and computer graphics, 22*(7), 1912-1931.</ref>