= Near-eye lightfield display =
A '''Near-eye lightfield display''' (NELFD) is a type of [[Near-eye display]] (NED), often implemented in a [[Head-mounted display]] (HMD), designed to reproduce a [[lightfield]]—the complete set of light rays filling a region of space—rather than just a single flat [[image]] for the viewer. By emitting light rays with potentially correct spatial ''and'' angular distribution, a NELFD allows the viewer’s [[eye]]s to engage natural [[Vergence|vergence]] ''and'' [[Accommodation (visual)|accommodation]] (focusing) responses simultaneously. This capability aims to resolve the [[vergence-accommodation conflict]] (VAC), a common source of visual discomfort and fatigue in conventional [[stereoscopic]] displays used in [[virtual reality]] (VR) and [[augmented reality]] (AR)<ref name="Hoffman2008">Hoffman, D. M., Girshick, A. R., Akeley, K., & Banks, M. S. (2008). Vergence–accommodation conflicts hinder visual performance and cause visual fatigue. ''Journal of Vision'', 8(3), 33. doi:10.1167/8.3.33</ref>, leading to potentially sharper, more comfortable, and more realistic three-dimensional vision.
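For intuition, the lightfield is commonly described using the two-plane parameterization from the computational imaging literature: each ray is indexed by its intersections with two parallel reference planes, giving a 4D function <math>L(u, v, s, t)</math>. In a near-eye setting, <math>(s, t)</math> can be read as a position on the virtual image plane and <math>(u, v)</math> as a position on the pupil plane. A conventional NED controls only the <math>(s, t)</math> dependence; a NELFD additionally shapes the variation across <math>(u, v)</math>, which is what drives the eye's accommodation response.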
== Principle of Operation ==
Common technical approaches to generating the lightfield include:
* '''[[Microlens Array]] (MLA) based:''' An array of tiny lenses is placed over a high-resolution [[display panel]] (like an [[OLED]] or [[LCD]]). Each microlens samples a portion of the underlying pixels and projects them in specific directions, creating different views for different parts of the eye's pupil. This technique is related to [[integral imaging]] or [[plenoptic]] camera principles<ref name="Lanman2013">Lanman, D., & Luebke, D. (2013). Near-eye light field displays. ''ACM Transactions on Graphics (TOG)'', 32(4), Article 138. Presented at SIGGRAPH 2013.</ref>, but inherently trades [[spatial resolution]] for [[angular resolution]] (a back-of-the-envelope sketch of this trade-off follows this list).
* '''Multi-layer Displays:''' Using multiple stacked, typically transparent, display layers (e.g., LCDs) that multiplicatively modulate light passing through them. By controlling the patterns on each layer, often using [[computational display]] techniques, the directional light distribution can be approximated, potentially offering more continuous focus cues<ref name="Huang2015">Huang, F. C., Wetzstein, G., Barsky, B. A., & Heide, F. (2015). The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues. ''ACM Transactions on Graphics (TOG)'', 34(4), Article 60. Presented at SIGGRAPH 2015.</ref>.
* '''Varifocal / Multifocal Displays:''' Using optics whose focal length can be changed rapidly, such as [[Tunable lens|tunable lenses]] or mechanically actuated lenses/displays. These systems present images at different focal distances sequentially (time-multiplexed) or simultaneously. The visual system integrates these into a perception of depth, approximating a lightfield effect, particularly addressing accommodation<ref name="Akşit2019">Akşit, K., Lopes, W., Kim, J., Shirley, P., & Luebke, D. (2019). Manufacturing application-driven near-eye displays by combining 3D printing and thermoforming. ''ACM Transactions on Graphics (TOG)'', 38(6), Article 183. Presented at SIGGRAPH Asia 2019. (Discusses varifocal elements)</ref>.
* '''Scanning / Projection:''' Using light sources like [[laser]]s combined with scanning [[mirror]]s (such as [[MEMS]]) or projection [[optics]] to directly synthesize the lightfield point-by-point or line-by-line towards the eye's pupil<ref name="Schowengerdt2015">Schowengerdt, B. T., & Seibel, E. J. (2015). True 3D scanned voxel displays using single or multiple light sources. US Patent 9,025,213 B2.</ref>.
* '''[[Holographic display|Holographic]] Approaches:''' Using a [[Spatial light modulator]] (SLM), typically phase-only, to diffract light (usually from a laser) and reconstruct the [[wavefront]] of the desired 3D scene directly. This can potentially provide highly accurate focus cues but faces challenges like [[Speckle pattern|speckle]] and high computational requirements<ref name="Maimone2017">Maimone, A., Lanman, D., Rathinavel, K., Keller, K., Luebke, D., & Fuchs, H. (2017). Holographic near-eye displays for virtual and augmented reality. ''ACM Transactions on Graphics (TOG)'', 36(4), Article 85. Presented at SIGGRAPH 2017.</ref>. [[Holographic optical element]]s (HOEs) or [[Metasurface]]s may also be used to manipulate light directionally, often in combination with a microdisplay.
* '''Computational Approaches:''' Combining specialized optics with sophisticated [[rendering]] algorithms to generate the lightfield effect, sometimes using techniques like compressive light field displays or optimized light patterns projected onto diffusers or specialized optical elements<ref name="Wetzstein2012">Wetzstein, G., Luebke, D., & Heidrich, W. (2012). Hand-held Computational Light Field Photography and Display. ''IEEE Computer Graphics and Applications'', 32(1), 8-13.</ref>.
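As a rough illustration of the spatio-angular trade-off noted in the MLA approach above, the following Python sketch divides a panel's pixels between spatial samples (one per lenslet) and angular views (pixels behind each lenslet). All parameter values are hypothetical assumptions, not the specifications of any particular product.

<syntaxhighlight lang="python">
# Minimal sketch of the spatio-angular resolution trade-off in a
# microlens-array (integral-imaging) near-eye display.
# All numbers below are illustrative assumptions.

def mla_tradeoff(panel_px: int, pixel_pitch_um: float, lenslet_pitch_um: float):
    """Per-axis (spatial samples, angular views) for an MLA over a panel."""
    views_per_lenslet = lenslet_pitch_um / pixel_pitch_um           # pixels behind each lenslet
    spatial_samples = panel_px * pixel_pitch_um / lenslet_pitch_um  # one spatial sample per lenslet
    return spatial_samples, views_per_lenslet

# Hypothetical 4000-pixel-wide panel, 5 um pixels, 50 um lenslets:
spatial, views = mla_tradeoff(panel_px=4000, pixel_pitch_um=5.0, lenslet_pitch_um=50.0)
print(f"{spatial:.0f} spatial samples x {views:.0f} views per axis")
# -> 400 spatial samples x 10 views: every factor gained in angular
#    resolution is a factor lost from spatial resolution.
</syntaxhighlight>

In this toy configuration, a tenfold gain in angular views costs a tenfold loss in spatial samples per axis, the exchange that integral-imaging designs must manage.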
== Advantages ==
* '''Correct Focus Cues:''' The primary intended advantage. By reproducing directional light rays, NELFDs allow the eye's accommodation mechanism to operate naturally, mitigating or resolving the vergence-accommodation conflict (VAC).
* '''Improved [[Depth Perception]]:''' Providing multiple [[Depth cue|depth cues]] (binocular disparity, vergence, accommodation, [[defocus blur]]) leads to more realistic and accurate perception of 3D space.
* '''Reduced [[Visual Fatigue]]:''' By reducing the VAC, NELFDs can potentially decrease eye strain, headaches, and [[simulator sickness]] associated with prolonged use of conventional stereoscopic displays<ref name="Hoffman2008" />.
* '''Smoother [[Parallax]]:''' Can provide more continuous motion parallax as the viewer moves their eye slightly within the eyebox.
* '''Wider [[Eye Box]] (Potentially):''' Some lightfield display designs can offer a larger eyebox compared to conventional NED designs with small exit pupils, although this often involves trade-offs.
* '''Potential for [[Prescription]] Correction:''' Some lightfield approaches might computationally correct for the viewer's refractive errors (like myopia or hyperopia), although this is an active area of research<ref name="Pamplona2012">Pamplona, V. F., Oliveira, M. M., Aliaga, D. G., & Raskar, R. (2012). Tailored displays to compensate for visual aberrations. ''ACM Transactions on Graphics (TOG)'', 31(4), Article 99. Presented at SIGGRAPH 2012.</ref> (a minimal illustration follows this list).
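As a minimal illustration of the idea behind computational prescription correction, the sketch below shifts the vergence at which virtual content is presented so that it falls within the viewer's unaided focusing range. It is simplified to spherical error only (real systems must also address astigmatism and higher-order aberrations) and is a hypothetical helper, not the method of the cited work.

<syntaxhighlight lang="python">
# Hedged sketch: compensating a viewer's spherical refractive error by
# re-placing the virtual image plane. Spherical error only; illustrative.

def corrected_vergence(target_m: float, refractive_error_D: float) -> float:
    """Vergence (diopters) at which to present content intended for
    distance `target_m` (meters; use float('inf') for optical infinity)
    so a viewer with the given spherical error sees it sharp unaided."""
    target_D = 0.0 if target_m == float("inf") else 1.0 / target_m
    return target_D - refractive_error_D

# A -2 D myope viewing content intended at optical infinity:
v = corrected_vergence(float("inf"), refractive_error_D=-2.0)
print(f"present at {v:.1f} D, i.e. {1.0 / v:.2f} m")
# -> 2.0 D, i.e. 0.50 m: exactly the myope's unaided far point.
</syntaxhighlight>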
== Challenges ==
* '''Resolution Trade-off:''' There is often a fundamental trade-off between spatial resolution (perceived sharpness) and angular resolution (number of distinct directions/depths). Increasing angular resolution often reduces spatial resolution for a given display panel and optical system (spatio-angular resolution trade-off)<ref name="Lanman2013" />.
* '''Computational Complexity:''' Generating the complex image data required for a lightfield display (lightfield rendering or hologram computation) is computationally intensive, requiring significant [[GPU]] power, high [[bandwidth]], and sophisticated algorithms. Real-time rendering for interactive applications is a major hurdle<ref name="Maimone2017" /> (see the data-rate sketch after this list).
* '''[[Form Factor]] and [[Weight]]:''' Implementing the necessary optics (MLAs, multiple layers, SLMs, scanning systems, varifocal mechanisms) within the strict size and weight constraints of a wearable device is difficult, often leading to bulkier designs.
* '''[[Field of View (FoV)]]:''' Achieving a wide field of view simultaneously with high spatial resolution, high angular resolution, a large eyebox, and compact form factor is extremely challenging.
* '''[[Brightness]] and [[Contrast ratio|Contrast Ratio]]:''' Some approaches, particularly those involving multiple layers, masks, MLAs, or diffractive elements, can suffer from reduced light efficiency (lower brightness) and potentially lower contrast compared to direct-view displays.
* '''[[Artifacts]]:''' Specific implementations can suffer from unique visual artifacts, such as [[Speckle pattern|speckle]] in holographic systems<ref name="Maimone2017" />, latency or visible plane-switching in varifocal systems, diffraction effects, or image discontinuities at the edge of the eyebox.
* '''Calibration:''' Precise manufacturing, alignment, and calibration of the optical components and display panels are critical and often complex.
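To make the computational and bandwidth burden concrete, the following back-of-the-envelope sketch estimates the raw sample rate of uncompressed lightfield video. All figures are illustrative assumptions rather than the specifications of any real headset.

<syntaxhighlight lang="python">
# Order-of-magnitude estimate of raw lightfield video bandwidth.
# Every number here is an illustrative assumption.

def lightfield_rate(spatial_w, spatial_h, views_x, views_y, fps,
                    eyes=2, bytes_per_sample=3):
    """Raw bytes/second of uncompressed RGB lightfield video."""
    samples_per_second = spatial_w * spatial_h * views_x * views_y * eyes * fps
    return samples_per_second * bytes_per_sample

# Hypothetical: 1000x1000 spatial samples per eye, 10x10 views, 90 Hz:
rate = lightfield_rate(1000, 1000, 10, 10, fps=90)
print(f"{rate / 1e9:.0f} GB/s uncompressed")  # -> 54 GB/s
</syntaxhighlight>

Even this modest hypothetical configuration far exceeds the throughput of typical display links, which is one motivation for the compressive and factored approaches mentioned above.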
== Historical Development and Notable Examples ==
While the underlying concept of capturing and displaying light fields dates back to Gabriel Lippmann's [[Integral imaging|integral photography]] (1908)<ref name="Lippmann1908">Lippmann, G. (1908). Épreuves réversibles donnant la sensation du relief [Reversible proofs giving the sensation of relief]. ''Journal de Physique Théorique et Appliquée'', 7(1), 821–825.</ref>, focused development on near-eye versions intensified with the rise of modern VR/AR and the identification of the VAC problem (around 2008)<ref name="Hoffman2008" />.
Key milestones and prototypes include:
* '''Stanford / NVIDIA Light Field Stereoscope (2015):''' An early HMD demonstration using two stacked LCD layers to provide accommodation cues over a continuous range (0.2m to infinity) within a ~30° FoV<ref name="Huang2015" />.
* '''NVIDIA / UNC Holographic HMD (2017):''' Showcased a prototype using a 2k x 2k phase SLM and GPU computation to generate real-time holograms at 90 Hz with an 80° FoV<ref name="Maimone2017" />.
* '''Avegant Light Field Technology (2017 onwards):''' Demonstrated mixed reality prototypes using multiple simultaneous focal planes (~2-3 planes, ~40° FoV)<ref name="AvegantBlog2017">Avegant (2017, January 4). Avegant Demonstrates Light Field Technology For Mixed Reality Experiences. [Blog post].</ref>.
* '''[[Magic Leap]] One (2018):''' The first widely available commercial HMD marketed with lightfield concepts ("photonic lightfield chip"), implemented using waveguides providing two fixed focal planes (~0.5m and infinity) over a ~50° diagonal FoV<ref name="MagicLeapSpecs">Magic Leap (n.d.). Magic Leap 1: Technical Specs.</ref>.
* '''[[Meta Reality Labs Research]] (formerly Facebook Reality Labs) Half-Dome Series (2018-2020):''' A series of research prototypes exploring varifocal displays. Half-Dome 1 used mechanical actuation; later versions like Half-Dome 3 used a stack of liquid crystal lenses to achieve 64 discrete focal planes electronically, combined with [[eye tracking]] and a wide FoV (~140°)<ref name="AbrashBlog2019">Abrash, M. (2019, September 25). Oculus Connect 6: The Future of VR. [Blog post]. Meta Quest Blog.</ref>.
* '''CREAL (2020 onwards):''' A company developing compact lightfield display engines for AR, often using time-multiplexed micro-LED projection or scanning combined with holographic combiners. Prototypes aim for continuous focus (e.g., 0.15m to infinity) within a ~50-60° FoV in a glasses-like form factor<ref name="CrealWebsite">CREAL (n.d.). Technology. Company website.</ref>.
== Applications ==
* [[Metasurface]]
* [[Emerging Technologies]]
== References ==
<references />
[[Category:Display Technology]]