Depth cue
{{see also|Terms|Technical Terms}}
A '''[[depth cue]]''' is any of a variety of perceptual signals that allow the [[human visual system]] to infer the distance or depth of objects in a scene, enabling the brain to transform two-dimensional retinal images into a perception of three-dimensional space. <ref name="HowardRogers2012">Howard, I. P., & Rogers, B. J. (2012). *Perceiving in Depth, Volume 1: Basic Mechanisms*. Oxford University Press.</ref> These cues are crucial for navigating the three-dimensional world and are fundamental to creating convincing, immersive, and comfortable experiences in [[Virtual Reality]] (VR) and [[Augmented Reality]] (AR), where reproducing accurate depth perception presents significant technical challenges. <ref name="HowardRogers1995">Howard, Ian P., and Brian J. Rogers. (1995). *Binocular vision and stereopsis*. Oxford University Press.</ref> The brain automatically fuses multiple available depth cues to build a robust model of the spatial layout of the environment. <ref name="HITLCues1">(2014-06-20) Visual Depth Cues - Human Interface Technology Laboratory. Retrieved April 25, 2025, from https://www.hitl.washington.edu/projects/knowledge-base/virtual-worlds/EVE/III.A.1.b.VisualDepthCues.html</ref>
== Classification of Depth Cues ==
===[[Binocular Disparity]] (Stereopsis)===
Because the two eyes are horizontally separated (by the [[interpupillary distance]], or IPD, typically around 6-7 cm), they receive slightly different images of the world. This difference in the image location of an object seen by the left and right eyes is called '''binocular disparity'''. The brain's visual cortex processes this disparity to generate the perception of depth, a phenomenon known as '''[[stereopsis]]'''. <ref name="BlakeWilson2011">Blake, R., & Wilson, H. R. (2011). Binocular vision. *Vision Research, 51*(7), 754-770. doi:10.1016/j.visres.2010.10.009</ref> <ref name="ParkerStereo2007">Parker, Andrew J. (2007). Binocular depth perception and the cerebral cortex. *Nature Reviews Neuroscience, 8*(5), 379-391.</ref> VR headsets exploit this by presenting a separate image with the correct perspective offset to each eye, simulating the natural disparity an observer would experience. It is an especially powerful depth cue for near to mid-range distances. <ref name="HITLCues1"/>
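The geometry of this cue is simple enough to work through directly. The following minimal sketch (the 63 mm IPD and the two example target distances are illustrative assumptions) computes the vergence angle a target subtends and the relative disparity between two targets, showing why stereopsis is strongest at near range:

<syntaxhighlight lang="python">
import math

def vergence_angle_deg(distance_m: float, ipd_m: float = 0.063) -> float:
    """Angle (degrees) between the two eyes' lines of sight when fixating
    a target at distance_m, given the interpupillary distance."""
    return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

def relative_disparity_deg(near_m: float, far_m: float, ipd_m: float = 0.063) -> float:
    """Relative binocular disparity between two targets: the difference
    between their vergence angles."""
    return vergence_angle_deg(near_m, ipd_m) - vergence_angle_deg(far_m, ipd_m)

# Disparity shrinks rapidly with distance, which is why stereopsis is
# most effective at near to mid range:
print(f"{relative_disparity_deg(0.5, 1.0):.2f} deg")    # ~3.60 deg
print(f"{relative_disparity_deg(10.0, 20.0):.2f} deg")  # ~0.18 deg
</syntaxhighlight>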
===[[Convergence]] (Vergence)===
===Physiological Monocular Cues===
====[[Accommodation]]====
This refers to the automatic adjustment of the eye's [[lens (anatomy)|lens]] focus to maintain a clear image (retinal focus) of an object as its distance changes. The [[ciliary muscle]] controls the lens shape; the muscular tension or effort involved provides the brain with a cue to the object's distance. <ref name="CuttingVishton1995">Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (Eds.), *Handbook of perception and cognition: Vol. 5. Perception of space and motion* (pp. 69-117). Academic Press.</ref> <ref name="FisherAccommodation1988">Fisher, Scott K., and Kenneth J. Ciuffreda. (1988). Accommodation and apparent distance. *Perception, 17*(5), 609-621.</ref> This cue is primarily effective for objects within approximately 2 meters and is relatively weak compared to other cues, often working in conjunction with them. <ref name="HITLCues2">(2014-06-20) Accommodation and Convergence - Human Interface Technology Laboratory. Retrieved April 25, 2025, from https://www.hitl.washington.edu/projects/knowledge-base/virtual-worlds/EVE/III.A.1.a.AccommodationConvergence.html</ref> <ref name="HITLCues1"/>
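Accommodative demand is conventionally expressed in diopters, the reciprocal of the viewing distance in meters, which makes the limited working range of this cue easy to see in a minimal sketch (the sample distances are illustrative):

<syntaxhighlight lang="python">
def accommodation_demand_d(distance_m: float) -> float:
    """Accommodative demand in diopters (1/m) for a target at distance_m."""
    return 1.0 / distance_m

# Demand changes steeply at near range and flattens beyond roughly 2 m,
# so the lens (and the effort signal it provides) carries little
# information about the distance of far targets:
for d in (0.25, 0.5, 1.0, 2.0, 10.0):
    print(f"{d:5.2f} m -> {accommodation_demand_d(d):.2f} D")
# 0.25 m -> 4.00 D | 0.50 m -> 2.00 D | 1.00 m -> 1.00 D
# 2.00 m -> 0.50 D | 10.00 m -> 0.10 D
</syntaxhighlight>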
===Pictorial (Static) Monocular Cues===
====[[Relative Height]] (Elevation in the Visual Field)====
For objects resting on the same ground plane, those that are higher in the visual field (closer to the horizon line) are typically perceived as being farther away. For objects above the horizon line (for example clouds), those lower in the visual field are perceived as farther. <ref name="CuttingVishton1995"/> <ref name="OoiHeight2001">Ooi, Teng Leng, Bing Wu, and Zijiang J. He. (2001). Distance determined by the angular declination below the horizon. *Nature, 414*(6860), 197-200.</ref>
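Ooi, Wu, and He's finding can be expressed as a simple ground-plane relationship: the distance to a point equals the observer's eye height divided by the tangent of its angular declination below the horizon. A minimal sketch (the 1.6 m eye height and the sample angles are illustrative):

<syntaxhighlight lang="python">
import math

def ground_distance_m(eye_height_m: float, declination_deg: float) -> float:
    """Distance to a point on a level ground plane, given eye height and the
    point's angular declination below the horizon: d = h / tan(declination)."""
    return eye_height_m / math.tan(math.radians(declination_deg))

# Smaller declination (the point sits higher in the visual field, nearer
# the horizon) means a farther point -- the relative-height cue in numbers:
print(f"{ground_distance_m(1.6, 10.0):.1f} m")  # ~9.1 m
print(f"{ground_distance_m(1.6, 2.0):.1f} m")   # ~45.8 m
</syntaxhighlight>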
====[[Linear Perspective]]====
====[[Motion Parallax]]====
As an observer moves their head or body, objects at different distances move at different apparent speeds across the visual field. Closer objects appear to move faster and in the opposite direction relative to the observer's movement compared to more distant objects, which appear to move slower and potentially in the same direction. <ref name="Gibson1950"/> <ref name="RogersMotionParallax1979">Rogers, Brian, and Maureen Graham. (1979). Motion parallax as an independent cue for depth perception. *Perception, 8*(2), 125-134.</ref> For example, when looking out the side window of a moving car, nearby posts zip by while distant trees move slowly. This is a powerful depth cue, effectively utilized in VR/AR systems through [[head tracking]]. <ref name="HITLCues1"/> <ref name="ScienceLearnParallax">Depth perception. Science Learning Hub - Pokapū Akoranga Pūtaiao. Retrieved April 25, 2025, from https://www.sciencelearn.org.nz/resources/107-depth-perception</ref>
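The magnitude of this cue follows a simple approximation: a stationary point at perpendicular distance d from an observer translating at speed v sweeps across the visual field at roughly v/d radians per second. A minimal sketch using the car example above (the speed and distances are illustrative):

<syntaxhighlight lang="python">
import math

def parallax_rate_deg_s(observer_speed_m_s: float, distance_m: float) -> float:
    """Approximate angular velocity (deg/s) of a stationary point at the given
    perpendicular distance, for a laterally translating observer: omega = v/d."""
    return math.degrees(observer_speed_m_s / distance_m)

# From a car moving at 20 m/s, a roadside post at 5 m sweeps past about
# 40x faster than a tree at 200 m; that speed ratio is the depth signal.
print(f"{parallax_rate_deg_s(20.0, 5.0):.0f} deg/s")    # ~229 deg/s
print(f"{parallax_rate_deg_s(20.0, 200.0):.1f} deg/s")  # ~5.7 deg/s
</syntaxhighlight>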
====[[Kinetic Depth Effect]]====
When a rigid, unfamiliar object rotates, the resulting changes in its two-dimensional projection onto the retina provide information about its three-dimensional structure. <ref name="WallachOConnell1953">Wallach, H., & O'Connell, D. N. (1953). The kinetic depth effect. *Journal of Experimental Psychology, 45*(4), 205-217. doi:10.1037/h0058000</ref>
====[[Ocular Parallax]]====
====The [[Vergence-Accommodation Conflict]] (VAC)====
A major limitation in most current VR/AR displays is the mismatch between vergence and accommodation cues. Most headsets use [[fixed-focus display]]s, meaning the optics present the virtual image at a fixed focal distance (often 1.5-2 meters or optical infinity), regardless of the simulated distance of the virtual object. <ref name="ARInsiderVAC">(2024-01-29) Understanding Vergence-Accommodation Conflict in AR/VR Headsets - AR Insider. Retrieved April 25, 2025, from https://arinsider.co/2024/01/29/understanding-vergence-accommodation-conflict-in-ar-vr-headsets/</ref> <ref name="WikiVAC">Vergence-accommodation conflict - Wikipedia. Retrieved April 25, 2025, from https://en.wikipedia.org/wiki/Vergence-accommodation_conflict</ref> <ref name="DeliverContactsFocus">(2024-07-18) Exploring the Focal Distance in VR Headsets - Deliver Contacts. Retrieved April 25, 2025, from https://delivercontacts.com/blog/exploring-the-focal-distance-in-vr-headsets</ref> While the user's eyes converge appropriately for the virtual object's simulated distance (for example 0.5 meters), their eyes must maintain focus (accommodate) at the fixed optical distance of the display itself to keep the image sharp. This mismatch between the distance signaled by vergence and the distance signaled by accommodation is known as the '''[[vergence-accommodation conflict]]''' (VAC). <ref name="HoffmanVAC2008">Hoffman, D. M., Girshick, A. R., Akeley, K., & Banks, M. S. (2008). Vergence-accommodation conflicts hinder visual performance and cause visual fatigue. *Journal of Vision, 8*(3), 33. doi:10.1167/8.3.33</ref> <ref name="FacebookVAC2019">Facebook Research. (2019, March 28). *Vergence-Accommodation Conflict: Facebook Research Explains Why Varifocal Matters For Future VR*. YouTube. [https://www.youtube.com/watch?v=YWA4gVibKJE]</ref> <ref name="KramidaVAC2016">Kramida, Gregory. (2016). Resolving the vergence-accommodation conflict in head-mounted displays. *IEEE transactions on visualization and computer graphics, 22*(7), 1912-1931.</ref>
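The severity of the conflict is commonly quantified in diopters, as the difference between the reciprocals of the vergence distance and the focal distance. A minimal sketch (the 1.5 m focal plane is an illustrative fixed-focus value):

<syntaxhighlight lang="python">
def vac_magnitude_d(vergence_distance_m: float, focal_distance_m: float) -> float:
    """Vergence-accommodation conflict in diopters: the gap between where the
    eyes converge and where the display optics force them to focus."""
    return abs(1.0 / vergence_distance_m - 1.0 / focal_distance_m)

# Headset with a fixed 1.5 m focal plane:
print(f"{vac_magnitude_d(0.5, 1.5):.2f} D")   # 1.33 D -- near UI, strong conflict
print(f"{vac_magnitude_d(1.5, 1.5):.2f} D")   # 0.00 D -- content at the focal plane
print(f"{vac_magnitude_d(10.0, 1.5):.2f} D")  # 0.57 D -- distant content, milder
</syntaxhighlight>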
The VAC forces the brain to deal with conflicting depth information, potentially leading to several issues:
*Reduced realism and immersion
The VAC is particularly problematic for interactions requiring sustained focus or high visual fidelity at close distances (for example virtual surgery simulation, detailed object inspection, reading text on near virtual objects). <ref name="HowardRogers2012"/>
====Other Limitations====
To mitigate or eliminate the VAC and provide more accurate depth cues, researchers and companies are actively developing advanced display technologies:
*'''[[Varifocal Displays]]''': These displays dynamically adjust the focal distance of the display optics (for example using physically moving lenses/screens, [[liquid lens]] technology, or [[deformable mirror]] devices) to match the simulated distance of the object the user is currently looking at. <ref name="KonradVAC2016">Konrad, R., Cooper, E. A., & Banks, M. S. (2016). Towards the next generation of virtual and augmented reality displays. *Optics Express, 24*(15), 16800-16809. doi:10.1364/OE.24.016800 https://www.computationalimaging.org/publications/accommodation-invariant-near-eye-displays-siggraph-2017/</ref> <ref name="DunnVarifocal2017">Dunn, David, et al. (2017). Wide field of view varifocal near-eye display using see-through deformable membrane mirrors. *IEEE transactions on visualization and computer graphics, 23*(4), 1322-1331.</ref> This typically requires fast and accurate [[eye tracking]] to determine the user's point of gaze and intended focus depth. Varifocal systems often simulate [[Depth of Field]] effects computationally, blurring parts of the scene not at the current focal distance. <ref name="ARInsiderVAC"/> Prototypes like Meta Reality Labs' "Half Dome" series have demonstrated this approach; a minimal control-loop sketch follows this list. <ref name="ARInsiderVAC"/>
*'''[[Multifocal Displays]] (Multi-Plane Displays)''': Instead of a single, continuously adjusting focus, these displays present content on multiple discrete focal planes simultaneously or in rapid succession. <ref name="AkeleyMultifocal2004">Akeley, Kurt, Watt, S. J., Girshick, A. R., & Banks, M. S. (2004). A stereo display prototype with multiple focal distances. *ACM transactions on graphics (TOG), 23*(3), 804-813.</ref> The visual system can then accommodate to the plane closest to the target object's depth. Examples include stacked display panels or systems using switchable lenses. Magic Leap 1 used a two-plane system. <ref name="ARInsiderVAC"/> While reducing VAC, they can still exhibit quantization effects if an object lies between planes, and complexity increases with the number of planes.
*'''[[Light Field Displays]]''': These displays aim to reconstruct the [[light field]] of a scene, the distribution of light rays in space, more completely. By emitting rays with the correct origin and direction, they allow the viewer's eye to naturally focus at different depths within the virtual scene, as if viewing a real 3D environment. <ref name="WetzsteinLightField2011">Wetzstein, Gordon, et al. (2011). Computational plenoptic imaging. *Computer Graphics Forum, 30*(8), 2397-2426.</ref> <ref name="Lanman2013">Lanman, D., & Luebke, D. (2013). Near-eye light field displays. *ACM Transactions on Graphics (TOG), 32*(6), 1-10. doi:10.1145/2508363.2508366</ref> This can potentially solve the VAC without requiring eye tracking. However, generating the necessary dense light fields poses significant computational and hardware challenges, often involving trade-offs between resolution, field of view, and form factor. <ref name="ARInsiderVAC"/> Companies like CREAL are developing light field modules for AR/VR. <ref name="WikiVAC"/>
*'''[[Holographic Displays]]''': True [[holography|holographic]] displays aim to reconstruct the wavefront of light from the virtual scene using diffraction, which would inherently provide all depth cues, including accommodation, correctly and continuously. <ref name="MaimoneHolo2017">Maimone, A., Georgiou, A., & Kollin, J. S. (2017). Holographic near-eye displays for virtual and augmented reality. *ACM Transactions on Graphics (TOG), 36*(4), 1-16. doi:10.1145/3072959.3073610</ref> This is often considered an ultimate goal for visual displays. However, current implementations suitable for near-eye displays face major challenges in computational load, achievable [[field of view]], image quality (for example [[speckle noise]]), and component size. <ref name="MaimoneHolo2017"/> <ref name="ARInsiderVAC"/>
*'''[[Retinal Projection]] (Retinal Scan Displays)''': These systems bypass intermediate screens and project images directly onto the viewer's retina, often using low-power lasers or micro-LED arrays. <ref name="ARInsiderVAC"/> Because the image is formed on the retina, it can appear in focus regardless of the eye's accommodation state, potentially eliminating VAC. This approach could enable very compact form factors. Challenges include achieving a sufficiently large [[eye-box]] (the area where the eye can see the image), potential sensitivity to eye floaters or optical path debris, and safety considerations. <ref name="ARInsiderVAC"/> Examples include the discontinued North Focals smart glasses.
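To make the varifocal approach above concrete, the sketch below shows one plausible shape for a gaze-driven focus loop. It is a hypothetical illustration, not any vendor's API: the GazeSample type, confidence threshold, smoothing constant, and focus limits are all assumed names and values.

<syntaxhighlight lang="python">
from dataclasses import dataclass

@dataclass
class GazeSample:
    vergence_distance_m: float  # where the two gaze rays intersect (from eye tracking)
    confidence: float           # tracker-reported quality, 0..1 (illustrative)

def update_varifocal(gaze: GazeSample, current_focus_m: float,
                     smoothing: float = 0.2,
                     min_m: float = 0.25, max_m: float = 10.0) -> float:
    """One step of a gaze-driven varifocal loop: nudge the focal distance of
    the optics toward the user's vergence distance. Clamping and exponential
    smoothing stand in for real actuator limits and saccade filtering."""
    if gaze.confidence < 0.5:
        return current_focus_m  # hold focus while tracking is unreliable
    target = min(max(gaze.vergence_distance_m, min_m), max_m)
    return current_focus_m + smoothing * (target - current_focus_m)

# Per-frame usage: start at a typical fixed-focus distance, then track gaze.
focus_m = 1.5
focus_m = update_varifocal(GazeSample(vergence_distance_m=0.5, confidence=0.9), focus_m)
</syntaxhighlight>

In a real system this update would run every frame and be paired with the computational depth-of-field blur described above for content away from the current focal distance.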
*'''Visual Fatigue and Discomfort:''' The [[vergence-accommodation conflict]] is a primary contributor to eye strain, headaches, blurred vision, and general visual discomfort, especially during prolonged use. <ref name="HoffmanVAC2008"/> <ref name="ARInsiderVAC"/>
*'''Spatial Perception Errors:''' Inaccurate or conflicting depth cues can lead to misjudgments of distance, size, and the spatial relationships between objects, potentially affecting user performance in tasks requiring precise spatial awareness or interaction. <ref name="JonesVAC2008"/> <ref name="WillemsenHMD2009">Willemsen, Peter, Colton, M. B., Creem-Regehr, S. H., & Thompson, W. B. (2009). The effects of head-mounted display mechanical properties and field of view on distance judgments in virtual environments. ''ACM Transactions on Applied Perception (TAP), 6''(2), 1-14.</ref>
*'''[[Simulator Sickness]]:''' Inconsistencies between visual depth cues and other sensory information (for example vestibular signals from the inner ear) can contribute to symptoms like nausea, disorientation, and dizziness. <ref name="VosVAC2005"/> <ref name="WannAdaptation1995">Wann, John P., Simon Rushton, and Mark Mon-Williams. (1995). Natural problems for stereoscopic depth perception in virtual environments. *Vision research, 35*(19), 2731-2736.</ref>
==Design Considerations for VR/AR Developers==
*'''Perceptual Adaptation:''' Studying how users adapt to inconsistent or unnatural depth cues over time, potentially leading to training paradigms or design strategies that improve comfort on current hardware. <ref name="WannAdaptation1995"/>
*'''Personalized Depth Rendering:''' Calibrating depth cue presentation based on individual user characteristics (for example IPD, visual acuity, refractive error, sensitivity to VAC) for optimized comfort and performance. <ref name="WillemsenHMD2009"/>
*'''[[Cross-modal interaction|Cross-Modal Integration]]:''' Investigating how integrating depth information from other senses (for example [[spatial audio]], [[haptic feedback]]) can enhance or reinforce visual depth perception; a worked sketch follows this list. <ref name="ErnstCrossModal2002">Ernst, Marc O., and Martin S. Banks. (2002). Humans integrate visual and haptic information in a statistically optimal fashion. ''Nature, 415''(6870), 429-433.</ref>
*'''[[Neural rendering|Neural Rendering]] and AI:''' Utilizing machine learning techniques (for example [[Neural Radiance Fields]] (NeRF)) to potentially render complex scenes with perceptually accurate depth cues more efficiently by learning implicit scene representations. <ref name="MildenhallNeRF2020">Mildenhall, Ben, et al. (2020). NeRF: Representing scenes as neural radiance fields for view synthesis. *European conference on computer vision*. Springer, Cham.</ref>
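As a worked illustration of statistically optimal cue integration in the style of Ernst and Banks (2002), the sketch below fuses independent depth estimates by weighting each by the reciprocal of its variance, so more reliable cues dominate; the specific numbers are illustrative:

<syntaxhighlight lang="python">
def fuse_depth_estimates(estimates_m, variances):
    """Minimum-variance (maximum-likelihood) fusion of independent depth cues:
    each estimate is weighted by 1/variance, and the fused estimate ends up
    more reliable than any single cue."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * d for w, d in zip(weights, estimates_m)) / total
    return fused, 1.0 / total  # (fused depth, fused variance)

# Vision says 1.0 m (low variance); haptics says 1.2 m (higher variance):
depth, var = fuse_depth_estimates([1.0, 1.2], [0.01, 0.04])
print(f"{depth:.2f} m, variance {var:.3f}")  # 1.04 m, variance 0.008
</syntaxhighlight>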
==References==
<ref name="HowardRogers1995">Howard, Ian P., and Brian J. Rogers. (1995). *Binocular vision and stereopsis*. Oxford University Press.</ref> | <ref name="HowardRogers1995">Howard, Ian P., and Brian J. Rogers. (1995). *Binocular vision and stereopsis*. Oxford University Press.</ref> | ||
<ref name="HITLCues1">(2014-06-20) Visual Depth Cues - Human Interface Technology Laboratory. Retrieved April 25, 2025, from https://www.hitl.washington.edu/projects/knowledge-base/virtual-worlds/EVE/III.A.1.b.VisualDepthCues.html</ref> | <ref name="HITLCues1">(2014-06-20) Visual Depth Cues - Human Interface Technology Laboratory. Retrieved April 25, 2025, from https://www.hitl.washington.edu/projects/knowledge-base/virtual-worlds/EVE/III.A.1.b.VisualDepthCues.html</ref> | ||
<ref name="BlakeWilson2011">Blake, R., & Wilson, H. R. (2011). Binocular vision. *Vision Research, 51*(7), | <ref name="BlakeWilson2011">Blake, R., & Wilson, H. R. (2011). Binocular vision. *Vision Research, 51*(7), 754-770. doi:10.1016/j.visres.2010.10.009</ref> | ||
<ref name="ParkerStereo2007">Parker, Andrew J. (2007). Binocular depth perception and the cerebral cortex. *Nature Reviews Neuroscience, 8*(5), 379-391.</ref> | <ref name="ParkerStereo2007">Parker, Andrew J. (2007). Binocular depth perception and the cerebral cortex. *Nature Reviews Neuroscience, 8*(5), 379-391.</ref> | ||
<ref name="WattFocusCues2005">Watt, Simon J., Auld, W. S., & Binnie, R. G. (2005). Focus cues affect perceived depth. *Journal of vision, 5*(10), 834-862.</ref> | <ref name="WattFocusCues2005">Watt, Simon J., Auld, W. S., & Binnie, R. G. (2005). Focus cues affect perceived depth. *Journal of vision, 5*(10), 834-862.</ref> | ||
<ref name="CuttingVishton1995">Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (Eds.), *Handbook of perception and cognition: Vol. 5. Perception of space and motion* (pp. | <ref name="CuttingVishton1995">Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (Eds.), *Handbook of perception and cognition: Vol. 5. Perception of space and motion* (pp. 69-117). Academic Press.</ref> | ||
<ref name="FisherAccommodation1988">Fisher, Scott K., and Kenneth J. Ciuffreda. (1988). Accommodation and apparent distance. *Perception, 17*(5), 609-621.</ref> | <ref name="FisherAccommodation1988">Fisher, Scott K., and Kenneth J. Ciuffreda. (1988). Accommodation and apparent distance. *Perception, 17*(5), 609-621.</ref> | ||
<ref name="HITLCues2">(2014-06-20) Accommodation and Convergence - Human Interface Technology Laboratory. Retrieved April 25, 2025, from https://www.hitl.washington.edu/projects/knowledge-base/virtual-worlds/EVE/III.A.1.a.AccommodationConvergence.html</ref> | <ref name="HITLCues2">(2014-06-20) Accommodation and Convergence - Human Interface Technology Laboratory. Retrieved April 25, 2025, from https://www.hitl.washington.edu/projects/knowledge-base/virtual-worlds/EVE/III.A.1.a.AccommodationConvergence.html</ref> | ||
Line 167: | Line 167: | ||
<ref name="FryFog1976">Fry, Glenn A., Kerr, K. E., Trezona, P. W., & Westerberg, C. F. (1976). The effect of fog on the perception of distance. *Human Factors, 18*(4), 342-346.</ref> | <ref name="FryFog1976">Fry, Glenn A., Kerr, K. E., Trezona, P. W., & Westerberg, C. F. (1976). The effect of fog on the perception of distance. *Human Factors, 18*(4), 342-346.</ref> | ||
<ref name="RogersMotionParallax1979">Rogers, Brian, and Maureen Graham. (1979). Motion parallax as an independent cue for depth perception. *Perception, 8*(2), 125-134.</ref> | <ref name="RogersMotionParallax1979">Rogers, Brian, and Maureen Graham. (1979). Motion parallax as an independent cue for depth perception. *Perception, 8*(2), 125-134.</ref> | ||
<ref name="ScienceLearnParallax">Depth perception. Science Learning Hub | <ref name="ScienceLearnParallax">Depth perception. Science Learning Hub - Pokapū Akoranga Pūtaiao. Retrieved April 25, 2025, from https://www.sciencelearn.org.nz/resources/107-depth-perception</ref> | ||
<ref name="WallachOConnell1953">Wallach, H., & O'Connell, D. N. (1953). The kinetic depth effect. *Journal of Experimental Psychology, 45*(4), | <ref name="WallachOConnell1953">Wallach, H., & O'Connell, D. N. (1953). The kinetic depth effect. *Journal of Experimental Psychology, 45*(4), 205-217. doi:10.1037/h0058000</ref> | ||
<ref name="KudoOcularParallax1988">Kudo, Hiromi, and Hirohiko Ono. (1988). Depth perception, ocular parallax, and stereopsis. *Perception, 17*(4), 473-480.</ref> | <ref name="KudoOcularParallax1988">Kudo, Hiromi, and Hirohiko Ono. (1988). Depth perception, ocular parallax, and stereopsis. *Perception, 17*(4), 473-480.</ref> | ||
<ref name="ARInsiderVAC">(2024-01-29) Understanding Vergence-Accommodation Conflict in AR/VR Headsets - AR Insider. Retrieved April 25, 2025, from https://arinsider.co/2024/01/29/understanding-vergence-accommodation-conflict-in-ar-vr-headsets/</ref> | <ref name="ARInsiderVAC">(2024-01-29) Understanding Vergence-Accommodation Conflict in AR/VR Headsets - AR Insider. Retrieved April 25, 2025, from https://arinsider.co/2024/01/29/understanding-vergence-accommodation-conflict-in-ar-vr-headsets/</ref> | ||
<ref name="WikiVAC">Vergence-accommodation conflict - Wikipedia. Retrieved April 25, 2025, from https://en.wikipedia.org/wiki/Vergence-accommodation_conflict</ref> | <ref name="WikiVAC">Vergence-accommodation conflict - Wikipedia. Retrieved April 25, 2025, from https://en.wikipedia.org/wiki/Vergence-accommodation_conflict</ref> | ||
<ref name="DeliverContactsFocus">(2024-07-18) Exploring the Focal Distance in VR Headsets - Deliver Contacts. Retrieved April 25, 2025, from https://delivercontacts.com/blog/exploring-the-focal-distance-in-vr-headsets</ref> | <ref name="DeliverContactsFocus">(2024-07-18) Exploring the Focal Distance in VR Headsets - Deliver Contacts. Retrieved April 25, 2025, from https://delivercontacts.com/blog/exploring-the-focal-distance-in-vr-headsets</ref> | ||
<ref name="HoffmanVAC2008">Hoffman, D. M., Girshick, A. R., Akeley, K., & Banks, M. S. (2008). | <ref name="HoffmanVAC2008">Hoffman, D. M., Girshick, A. R., Akeley, K., & Banks, M. S. (2008). Vergence-accommodation conflicts hinder visual performance and cause visual fatigue. *Journal of Vision, 8*(3), 33. doi:10.1167/8.3.33</ref> | ||
<ref name="FacebookVAC2019">Facebook Research. (2019, March 28). *Vergence-Accommodation Conflict: Facebook Research Explains Why Varifocal Matters For Future VR*. YouTube. [https://www.youtube.com/watch?v=YWA4gVibKJE]</ref> | <ref name="FacebookVAC2019">Facebook Research. (2019, March 28). *Vergence-Accommodation Conflict: Facebook Research Explains Why Varifocal Matters For Future VR*. YouTube. [https://www.youtube.com/watch?v=YWA4gVibKJE]</ref> | ||
<ref name="KramidaVAC2016">Kramida, Gregory. (2016). Resolving the vergence-accommodation conflict in head-mounted displays. *IEEE transactions on visualization and computer graphics, 22*(7), 1912-1931.</ref> | <ref name="KramidaVAC2016">Kramida, Gregory. (2016). Resolving the vergence-accommodation conflict in head-mounted displays. *IEEE transactions on visualization and computer graphics, 22*(7), 1912-1931.</ref> |