Near-eye lightfield display
Common technical approaches to generating the lightfield include:
* '''[[Microlens Array]] (MLA) based:''' An array of tiny lenses is placed over a high-resolution [[display panel]] (like an [[OLED]] or [[LCD]]). Each microlens samples a portion of the underlying pixels and projects them in specific directions, creating different views for different parts of the eye's pupil (a minimal geometric sketch of this mapping follows this list). This technique is related to [[integral imaging]] or [[plenoptic]] camera principles<ref name="Lanman2013">Lanman, D., & Luebke, D. (2013). Near-eye light field displays. ''ACM Transactions on Graphics (TOG)'', 32(4), Article 138. Presented at SIGGRAPH 2013. [https://research.nvidia.com/sites/default/files/pubs/2013-11_Near-Eye-Light-Field/NVIDIA-NELD.pdf PDF Link]</ref>, but inherently trades [[spatial resolution]] for [[angular resolution]] (i.e., the number of views or depth cues).
* '''Multi-layer Displays:''' Using multiple stacked, typically transparent, display layers (e.g., LCDs) that multiplicatively modulate light passing through them. By controlling the patterns on each layer, often using [[computational display]] techniques, the directional light distribution can be approximated, potentially offering more continuous focus cues<ref name="Huang2015">Huang, F. C., Wetzstein, G., Barsky, B. A., & Heide, F. (2015). The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues. ''ACM Transactions on Graphics (TOG)'', 34(4), Article 60. Presented at SIGGRAPH 2015.</ref>.
* '''Varifocal / Multifocal Displays:''' Using optics whose focal length can be changed rapidly, such as [[Tunable lens|tunable lenses]] or mechanically actuated lenses/displays. These systems present images at different focal distances sequentially (time-multiplexed) or simultaneously. The visual system integrates these into a perception of depth, approximating a lightfield effect, particularly addressing accommodation<ref name="Akşit2019">Akşit, K., Lopes, W., Kim, J., Shirley, P., & Luebke, D. (2019). Manufacturing application-driven near-eye displays by combining 3D printing and thermoforming. ''ACM Transactions on Graphics (TOG)'', 38(6), Article 183. Presented at SIGGRAPH Asia 2019. (Discusses varifocal elements)</ref>.
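The MLA approach above boils down to simple paraxial geometry: each pixel sits roughly one focal length behind its lenslet, so the lenslet turns the pixel's light into a narrow beam whose direction is set by the pixel's offset from the lenslet axis. The minimal sketch below illustrates this mapping; the pixel pitch, lenslet pitch, and focal length are illustrative assumptions, not the parameters of any particular device.

<syntaxhighlight lang="python">
import numpy as np

# Paraxial sketch of an MLA-based near-eye lightfield display (1D, one lenslet).
# Illustrative values only -- not the parameters of any specific device.
PIXEL_PITCH_MM = 0.012    # assumed 12 um pixel pitch
LENSLET_PITCH_MM = 1.0    # assumed lenslet pitch
FOCAL_LENGTH_MM = 3.3     # assumed lenslet focal length

pixels_per_lenslet = int(LENSLET_PITCH_MM / PIXEL_PITCH_MM)

def ray_direction_rad(pixel_index: int) -> float:
    """Angle of the collimated beam emitted by one pixel under a lenslet.

    The pixel sits a focal length behind the lenslet, so its light leaves as a
    beam tilted by roughly -offset / focal_length (small-angle approximation).
    """
    offset_mm = (pixel_index - (pixels_per_lenslet - 1) / 2) * PIXEL_PITCH_MM
    return -offset_mm / FOCAL_LENGTH_MM

angles = np.array([ray_direction_rad(i) for i in range(pixels_per_lenslet)])
print(f"{pixels_per_lenslet} angular samples per lenslet, "
      f"spanning ~{np.degrees(angles.max() - angles.min()):.1f} degrees")
</syntaxhighlight>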
* '''Reduced [[Visual Fatigue]]:''' By reducing the VAC, NELFDs can potentially decrease eye strain, headaches, and [[simulator sickness]] associated with prolonged use of conventional stereoscopic displays<ref name="Hoffman2008" />.
* '''Smoother [[Parallax]]:''' Can provide more continuous motion parallax as the viewer moves their eye slightly within the eyebox.
* '''Thinner/Lighter Form Factor (Potentially):''' MLA-based designs, for example, can replace bulky magnifying optics with compact microlens arrays, potentially leading to thinner and lighter HMDs<ref name="Lanman2013" />.
* '''Wider [[Eye Box]] (Potentially):''' Some lightfield display designs can offer a larger eyebox compared to conventional NED designs with small exit pupils, although this often involves trade-offs.
* '''Potential for [[Prescription]] Correction:''' Some lightfield approaches might computationally correct for the viewer's refractive errors (like myopia or hyperopia), a potential benefit demonstrated in early prototypes and an active area of research (see the sketch after this list)<ref name="Pamplona2012">Pamplona, V. F., Oliveira, M. M., Aliaga, D. G., & Raskar, R. (2012). Tailored displays to compensate for visual aberrations. ''ACM Transactions on Graphics (TOG)'', 31(4), Article 99. Presented at SIGGRAPH 2012.</ref><ref name="Lanman2013" />.
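As a rough illustration of why prescription correction is plausible, the sketch below uses the standard relation that an uncorrected myope with a spherical error of −D diopters can focus no farther than about 1/D metres (the far point); a lightfield or varifocal display that can place its virtual imagery inside that range could, in principle, remain sharp without eyeglasses. This is a simplified model for illustration, not the correction algorithm of any cited system.

<syntaxhighlight lang="python">
# Far point of an uncorrected myopic eye: a -D diopter myope can focus no
# farther than ~1/D metres. A display able to place its virtual image inside
# that range could, in principle, stay sharp without corrective lenses.
# Simplified model for illustration only.

def far_point_m(spherical_error_diopters: float) -> float:
    """Farthest distance (metres) an uncorrected eye can bring into focus."""
    if spherical_error_diopters >= 0:
        return float("inf")  # emmetropia/hyperopia: no near-side limit in this simple model
    return 1.0 / abs(spherical_error_diopters)

for rx in (-1.0, -2.0, -4.0):
    print(f"Rx {rx:+.1f} D -> keep the virtual image within ~{far_point_m(rx):.2f} m")
</syntaxhighlight>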
== Challenges ==
* '''Resolution Trade-off:''' Often a fundamental trade-off between spatial resolution (perceived sharpness) and angular resolution (number of distinct directions/depths). For MLA-based systems, the perceived resolution is reduced relative to the native microdisplay resolution, often proportional to the ratio of the lens focal length to the eye relief (a back-of-the-envelope sketch follows this list)<ref name="Lanman2013" />.
* '''Computational Complexity:''' Generating the complex image data required for a lightfield display (lightfield rendering or hologram computation) is computationally intensive, requiring significant [[GPU]] power, high [[bandwidth]], and sophisticated algorithms. Real-time rendering for interactive applications is a major hurdle<ref name="Maimone2017" />. Techniques like GPU-accelerated ray tracing or specialized rasterization are often employed<ref name="Lanman2013" />.
* '''[[Form Factor]] and [[Weight]]:''' While some approaches promise thinner designs, implementing the necessary optics (MLAs, multiple layers, SLMs, scanning systems, varifocal mechanisms) within strict wearable constraints remains difficult.
* '''[[Field of View (FoV)]]:''' Achieving a wide field of view simultaneously with high spatial resolution, high angular resolution, a large eyebox, and compact form factor is extremely challenging.
* '''[[Brightness]] and [[Contrast ratio|Contrast Ratio]]:''' Some approaches, particularly those involving multiple layers, masks, MLAs, or diffractive elements, can suffer from reduced light efficiency (lower brightness) and potentially lower contrast compared to direct-view displays.
* '''[[Artifacts]]:''' Specific implementations can suffer from unique visual artifacts, such as [[Speckle pattern|speckle]] in holographic systems<ref name="Maimone2017" />, latency or visible plane-switching in varifocal systems, diffraction effects from small features, MLA boundary effects, or image discontinuities at the edge of the eyebox.
* '''Calibration:''' Precise manufacturing, alignment (rotational and lateral), and calibration of the optical components and display panels are critical and often complex, potentially requiring software correction<ref name="Lanman2013" />.
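The back-of-the-envelope sketch below illustrates the first two challenges for an MLA-style design, using the simplified scaling rule stated above (perceived resolution roughly proportional to native resolution times focal length over eye relief). The panel resolution matches the microdisplay mentioned in the history section below, but the focal length, eye relief, and refresh rate are assumed illustrative values, not measured specifications.

<syntaxhighlight lang="python">
# Back-of-the-envelope numbers for an MLA-style design, using the simplified
# rule above: perceived resolution ~ native resolution * (focal length / eye relief).
# All values are illustrative assumptions, not measured specifications.

NATIVE_RES = (1280, 720)   # microdisplay pixels per eye
FOCAL_LENGTH_MM = 3.3      # assumed lenslet focal length
EYE_RELIEF_MM = 29.0       # assumed lenslet-to-pupil distance
REFRESH_HZ = 90            # assumed target refresh rate
EYES = 2

scale = FOCAL_LENGTH_MM / EYE_RELIEF_MM
perceived = tuple(round(n * scale) for n in NATIVE_RES)
print(f"Perceived resolution ~ {perceived[0]} x {perceived[1]} "
      f"(~{1 / scale:.1f}x reduction per axis)")

# Every native pixel is an independent ray that must be shaded each frame, so
# the rendering load is set by the panel resolution, not the perceived image.
rays_per_second = NATIVE_RES[0] * NATIVE_RES[1] * REFRESH_HZ * EYES
print(f"Rays to shade: ~{rays_per_second / 1e6:.0f} million per second")
</syntaxhighlight>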
Recent reviews discuss ongoing research to overcome these challenges through advancements in display technology and computational techniques<ref name="Nature2024">Naked-eye light field display technology based on mini/micro light emitting diode panels: a systematic review and meta-analysis. ''Scientific Reports'' (2024). [https://www.nature.com/articles/s41598-024-75172-z Link]</ref><ref name="Frontiers2022">Challenges and Advancements for AR Optical See-Through Near-Eye Displays: A Review. ''Frontiers in Virtual Reality'' (2022). [https://www.frontiersin.org/journals/virtual-reality/articles/10.3389/frvir.2022.838237/full Link]</ref>.
Key milestones and prototypes include:
* '''NVIDIA Near-Eye Light Field Display (2013):''' Presented by Lanman and Luebke (NVIDIA Research) at SIGGRAPH 2013, this influential prototype used [[Microlens Array|microlens arrays]] (Fresnel #630) over high-resolution [[OLED]] microdisplays (Sony ECX332A, 1280x720 per eye, 12µm pixel pitch) to demonstrate accurate focus cues resolving the VAC in a thin form factor (1cm eyepiece thickness). It also showed the potential for software-based [[Prescription|prescription correction]] and analyzed the spatio-angular trade-offs (achieving ~146x78 pixel resolution at ~29°x16° FoV in the demo)<ref name="Lanman2013" />.
* '''Stanford / NVIDIA Light Field Stereoscope (2015):''' An HMD demonstration using two stacked LCD layers to provide accommodation cues over a continuous range (0.2m to infinity) within a ~30° FoV<ref name="Huang2015" />.
* '''NVIDIA / UNC Holographic HMD (2017):''' Showcased a prototype using a 2k x 2k phase SLM and GPU computation to generate real-time holograms at 90 Hz with an 80° FoV<ref name="Maimone2017" />.
* '''Avegant Light Field Technology (2017 onwards):''' Demonstrated mixed reality prototypes using multiple simultaneous focal planes (~2-3 planes, ~40° FoV)<ref name="AvegantBlog2017">Avegant (2017, January 4). Avegant Demonstrates Light Field Technology For Mixed Reality Experiences. PR Newswire. https://www.prnewswire.com/news-releases/avegant-introduces-light-field-technology-for-mixed-reality-300423855.html</ref>.
* '''[[Magic Leap]] One (2018):''' The first widely available commercial HMD marketed with lightfield concepts ("photonic lightfield chip"), implemented using waveguides providing two fixed focal planes (~0.5m and infinity) over a ~50° diagonal FoV<ref name="MagicLeapSpecs">Based on technical specifications and reviews published circa 2018-2019. Original spec links may be defunct. Example review: UploadVR (2018). Magic Leap One Creator Edition In-Depth Review.</ref>.
* '''[[Meta Reality Labs Research]] (formerly Facebook Reality Labs) Half-Dome Series (2018-2020):''' A series of research prototypes exploring varifocal displays. Half-Dome 1 used mechanical actuation; later versions like Half-Dome 3 used a stack of liquid crystal lenses to achieve 64 discrete focal planes electronically, combined with [[eye tracking]] and a wide FoV (~140°); a hypothetical sketch of gaze-driven focal-plane selection follows this list<ref name="AbrashBlog2019">Abrash, M. (2019, September 25). Oculus Connect 6 Keynote [Video]. YouTube. Retrieved from https://www.youtube.com/watch?v=7YIGT13bdXw (Relevant discussion on Half-Dome prototypes)</ref>.
* '''CREAL (2020 onwards):''' A company developing compact lightfield display engines for AR, often using time-multiplexed micro-LED projection or scanning combined with holographic combiners. Prototypes aim for continuous focus (e.g., 0.15m to infinity) within a ~50-60° FoV in a glasses-like form factor<ref name="CrealWebsite">CREAL (n.d.). Technology. Retrieved from https://creal.com/technology/</ref>.
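Several of the prototypes above pair a small set of focal planes with eye tracking. The hypothetical sketch below shows one simple way such a system could work: estimate the fixation distance from the vergence of the two eyes, convert it to diopters, and drive the optics to the nearest supported focal plane. The interpupillary distance, plane spacing, and vergence angle are assumptions for illustration; this is not the control loop of any specific product.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical gaze-driven focal-plane selection for a varifocal/multifocal HMD:
# estimate fixation distance from the vergence of the two gaze rays, convert to
# diopters, and pick the nearest supported focal plane. Illustrative only.

IPD_M = 0.063  # assumed interpupillary distance

def vergence_distance_m(vergence_angle_rad: float) -> float:
    """Fixation distance implied by the angle between the two gaze rays."""
    return (IPD_M / 2) / np.tan(vergence_angle_rad / 2)

def nearest_focal_plane_d(distance_m: float, planes_diopters: np.ndarray) -> float:
    """Supported focal plane (in diopters) closest to the gaze depth."""
    gaze_d = 0.0 if np.isinf(distance_m) else 1.0 / distance_m
    return float(planes_diopters[np.argmin(np.abs(planes_diopters - gaze_d))])

planes = np.linspace(0.0, 3.0, 7)            # e.g. 7 planes from infinity to 0.33 m
dist = vergence_distance_m(np.radians(4.5))  # ~0.8 m fixation, illustrative
print(f"Gaze depth ~{dist:.2f} m -> switch to the {nearest_focal_plane_d(dist, planes):.1f} D plane")
</syntaxhighlight>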
Ongoing research focuses on:
* Developing novel display panels (e.g., high-pixel-density [[MicroLED|microLEDs]] or OLEDs) and optics (HOEs, metasurfaces, advanced lens designs, potentially curved MLAs<ref name="Lanman2013"/>) to improve the spatio-angular resolution trade-off.
* Creating more efficient lightfield rendering algorithms, potentially using [[Artificial intelligence|AI]] / [[Machine learning|machine learning]] for reconstruction or up-sampling, and dedicated [[hardware acceleration]].
* Integrating high-speed, high-accuracy [[eye tracking]] to enable [[foveated rendering]] adapted for lightfields or dynamic optimization of the display based on gaze, potentially relaxing eyebox constraints, improving effective resolution, or reducing computational load.
* Exploiting redundancy in lightfields for error correction (e.g., compensating for dead pixels; a naive sketch follows this list)<ref name="Lanman2013"/>.
* Hybrid approaches combining several techniques (e.g., a few focal planes with some angular diversity per plane) to achieve a "good enough" lightfield effect with current technology.
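The redundancy idea mentioned above can be illustrated with a toy example: pixels at the same offset under adjacent lenslets emit approximately parallel rays, so for distant content they carry nearly the same radiance, and the sample lost at a dead display pixel is largely duplicated by its neighbours. The sketch below builds a synthetic "distant scene" lightfield and recovers one such sample; the array layout and numbers are assumptions for illustration, not the compensation scheme described in the cited work.

<syntaxhighlight lang="python">
import numpy as np

# Toy demonstration of lightfield redundancy. Elemental images are stored as
# L[ly, lx, py, px]: lenslet row/column, then pixel row/column under that
# lenslet. Pixels at the same (py, px) offset under adjacent lenslets emit
# approximately parallel rays, so for distant content they carry nearly the
# same radiance -- the sample lost at a dead pixel can be estimated from its
# neighbours. Illustrative only; not the scheme of any cited system.

def estimate_lost_sample(L: np.ndarray, ly: int, lx: int, py: int, px: int) -> float:
    """Estimate a dead pixel's intended value from the same angular sample in
    the 4-connected neighbouring elemental images."""
    samples = [L[ly + dy, lx + dx, py, px]
               for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
               if 0 <= ly + dy < L.shape[0] and 0 <= lx + dx < L.shape[1]]
    return float(np.mean(samples))

rng = np.random.default_rng(0)
# Synthetic "distant scene": radiance depends only on ray direction (py, px),
# so every elemental image is (nearly) identical.
direction_image = rng.random((16, 16))
L = np.broadcast_to(direction_image, (20, 20, 16, 16)).copy()
L += 0.01 * rng.standard_normal(L.shape)     # small per-lenslet variation

true_value = L[5, 7, 3, 3]                   # pretend this display pixel is dead
print(f"lost sample {true_value:.3f} ~ estimate {estimate_lost_sample(L, 5, 7, 3, 3):.3f}")
</syntaxhighlight>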
Longer-term advances in display panels, ultrafast SLMs, efficient computational methods, and compact diffractive or [[Metasurface|metasurface]] optics hold the potential for true continuous lightfield displays in lightweight, eyeglass-sized hardware, potentially making digital imagery optically much closer to viewing the real world.
== See Also ==