A '''near-eye light field display''' (NELFD) is a type of [[Near-eye display]] (NED), often implemented in a [[Head-mounted display]] (HMD), designed to reproduce a [[lightfield]] — the complete set of light rays filling a region of space — rather than just a single flat [[image]] for the viewer. The concept of the light field, representing the light rays passing through every point in every direction (often described as a 4D function), emerged in computer graphics and vision research in the 1990s.<ref name="LightFieldForum2013">[http://lightfield-forum.com/2013/07/refocus-your-eyes-nvidia-presents-near-eye-light-field-display-prototype/ Refocus your Eyes: Nvidia presents Near-Eye Light Field Display Prototype. LightField Forum]</ref> Unlike conventional displays, which typically emit light [[Isotropy|isotropically]] from each pixel location on a fixed plane, a light field display aims to "support the control of tightly-clustered bundles of light rays, modulating radiance as a function of position and direction across its surface."<ref name="Lanman2013">Lanman, D., & Luebke, D. (2013). Near-eye light field displays. ''ACM Transactions on Graphics'', 32(4), Article 138. Presented at SIGGRAPH 2013. [https://research.nvidia.com/sites/default/files/pubs/2013-11_Near-Eye-Light-Field/NVIDIA-NELD.pdf PDF]</ref>
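In the two-plane parameterization common in the computer graphics literature, each ray is indexed by its intersection points <math>(u, v)</math> and <math>(s, t)</math> with two parallel reference planes, giving the 4D radiance function

:<math>L = L(u, v, s, t).</math>

A conventional 2D display collapses this to a function of position alone, <math>I(s, t)</math>, which is why it cannot convey direction-dependent cues such as focus and parallax.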
By emitting light rays with approximately correct spatial and angular distribution, a NELFD allows the viewer's [[eye]]s to engage natural [[Vergence|vergence]] and [[Accommodation (visual)|accommodation]] (focusing) responses simultaneously. This capability aims to resolve the [[vergence-accommodation conflict]] (VAC), a common source of visual discomfort (including [[visual fatigue]], eye strain, and headaches) in conventional [[stereoscopic]] displays used in [[virtual reality]] (VR) and [[augmented reality]] (AR).<ref name="Hoffman2008">Hoffman, D. M., Girshick, A. R., Akeley, K., & Banks, M. S. (2008). Vergence–accommodation conflicts hinder visual performance and cause visual fatigue. ''Journal of Vision'', 8(3), 33. doi:10.1167/8.3.33</ref><ref name="StanfordVid2015">Stanford Computational Imaging Lab (2015). The Light Field Stereoscope - SIGGRAPH 2015 [Video]. Retrieved from https://www.youtube.com/watch?v=YJdMPUF8cDM</ref> Resolving the VAC can yield sharper, more comfortable, and more realistic three-dimensional visual experiences, especially during extended use. As Huang et al. (2015) noted, "correct or nearly correct focus cues significantly improve stereoscopic correspondence matching, 3D shape perception becomes more veridical, and people can discriminate different depths better."<ref name="Huang2015">Huang, F.-C., Wetzstein, G., Barsky, B. A., & Heide, F. (2015). The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues. ''ACM Transactions on Graphics'', 34(4), Article 60. doi:10.1145/2766922</ref>
Near-eye displays confront the fundamental problem that the unaided human eye cannot easily accommodate (focus) on display panels placed in very close proximity.<ref name="Lanman2013"/> Conventional NEDs typically use magnifying optics to create a virtual image that appears farther away, but these optics can add bulk and weight.<ref name="TI_NED_WP">Bhakta, V. R., Richuso, J., & Jain, A. (2014). DLP® Technology for Near Eye Display. Texas Instruments White Paper DLPA051. Retrieved from http://www.ti.com/lit/wp/dlpa051/dlpa051.pdf</ref> Light field approaches offer an alternative path to comfortable viewing, potentially in thinner and lighter form factors.
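The role of the magnifier can be seen from a simple thin-lens model; the numbers below are illustrative rather than taken from the cited sources. A panel placed at a distance <math>d_o</math> slightly inside the focal length <math>f</math> of an eyepiece produces a virtual image at distance <math>d_i</math> given by

:<math>\frac{1}{d_i} = \frac{1}{f} - \frac{1}{d_o}.</math>

For example, <math>f = 40</math> mm and <math>d_o = 38</math> mm give <math>d_i = -760</math> mm: a virtual image about 0.76 m in front of the eye, comfortably within the accommodation range, even though the panel itself sits only a few centimeters away.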
==Implementation Approaches==
Several optical architectures have been explored to generate a light field close to the eye, each with different trade-offs between resolution, form factor, and computation:
* '''[[Microlens Array]] (MLA) based:''' An array of tiny lenses is placed over a high-resolution [[display panel]] (such as an [[OLED]] or [[LCD]]). Each microlens covers multiple underlying pixels and projects their light in specific directions, creating slightly different views for different parts of the eye's pupil. This technique, related to [[integral imaging]] and [[plenoptic camera]] principles,<ref name="Lanman2013"/> effectively samples the light field but inherently trades [[spatial resolution]] for [[angular resolution]] (i.e., the number of distinct views or depth cues provided).
* '''Multi-layer Displays:''' Using multiple stacked, typically transparent, display layers (for example LCDs) that multiplicatively modulate light passing through them. By computing and displaying specific patterns on each layer, often using [[computational display]] techniques, the directional light distribution of a target light field can be approximated (a minimal factorization sketch follows this list). This approach can potentially offer more continuous focus cues over larger depth ranges compared to methods with discrete views.<ref name="Huang2015"/>
* '''Varifocal / Multifocal Displays:''' Using optical elements whose focal length can be changed rapidly, such as [[Tunable lens|tunable lenses]], [[Deformable mirror]]s, or mechanically actuated lenses and displays. These systems present images focused at different distances sequentially (time-multiplexed) or simultaneously (multifocal). The visual system integrates the rapidly presented focal planes into a perception of depth with corresponding accommodation cues, effectively approximating a light field.<ref name="Akşit2019">Akşit, K., Chakravarthula, P., Rathinavel, K., Jeong, Y., Albert, R., Fuchs, H., & Luebke, D. (2019). Manufacturing application-driven foveated near-eye displays. ''IEEE Transactions on Visualization and Computer Graphics'', 25(5), 1928-1939. doi:10.1109/TVCG.2019.2898781</ref>
* '''Scanning / Projection:''' Using highly collimated light sources such as [[laser]]s combined with fast scanning [[mirror]]s (such as [[MEMS]] mirrors) or specialized projection [[optics]] to directly synthesize the light field, drawing rays point-by-point or line-by-line toward the eye's pupil.<ref name="Schowengerdt2015">Schowengerdt, B. T., & Seibel, E. J. (2015). True 3-D scanned voxel displays using single or multiple light sources. U.S. Patent 9,025,213 B2. Retrieved from https://patents.google.com/patent/US9025213B2</ref>
* '''[[Holographic display|Holographic]] Approaches:''' Using a [[Spatial light modulator]] (SLM), typically modulating the phase of coherent laser light, to reconstruct the optical [[wavefront]] of the desired 3D scene via [[Diffraction (physics)|diffraction]]. This can provide highly accurate focus cues across a continuous depth range but faces significant challenges, including [[Speckle pattern|speckle]] artifacts and extremely high computational requirements for real-time hologram generation.<ref name="Maimone2017">Maimone, A., Georgiou, A., & Kollin, J. S. (2017). Holographic near-eye displays for virtual and augmented reality. ''ACM Transactions on Graphics'', 36(4), Article 85. doi:10.1145/3072959.3073624</ref> [[Holographic optical element]]s (HOEs) or [[Metasurface]]s can also be used to manipulate light directionally, often combined with a microdisplay.
* '''Computational Approaches:''' Combining specialized optics (which may not fit neatly into the above categories) with sophisticated [[rendering]] algorithms. This includes techniques such as compressive light field displays, which aim to represent the light field more efficiently, and optimized light patterns projected onto diffusers or other specialized optical elements.<ref name="Wetzstein2012">Wetzstein, G., Luebke, D., & Heidrich, W. (2012). Hand-held computational light field photography and display. ''IEEE Computer Graphics and Applications'', 32(1), 8-13. doi:10.1109/MCG.2012.15</ref>
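The multi-layer approach lends itself to a compact numerical illustration. The following is a minimal sketch of the rank-1 non-negative factorization idea, assuming a toy setup with two one-dimensional layers; published systems factor the full 4D light field, may time-multiplex several factor pairs per frame, and use more sophisticated solvers. All names here are illustrative.

<syntaxhighlight lang="python">
import numpy as np

def factor_two_layers(target, iters=200, eps=1e-9):
    """Rank-1 non-negative factorization for a toy two-layer display.

    target[i, j] is the desired radiance of the ray through pixel i of
    the front layer and pixel j of the rear layer. Stacked transmissive
    layers modulate light multiplicatively, so the displayed intensity
    of that ray is front[i] * rear[j]. Multiplicative (NMF-style)
    updates keep both layer patterns non-negative, as physical
    transmittances must be.
    """
    rng = np.random.default_rng(0)
    front = rng.random(target.shape[0])
    rear = rng.random(target.shape[1])
    for _ in range(iters):
        front *= (target @ rear) / (front * (rear @ rear) + eps)
        rear *= (target.T @ front) / (rear * (front @ front) + eps)
    return front, rear

# Toy usage: approximate a small non-negative target light field.
L = np.random.rand(16, 16)
front, rear = factor_two_layers(L)
approximation = np.outer(front, rear)  # what the stacked layers emit
</syntaxhighlight>

A single rank-1 product is a coarse approximation of an arbitrary light field; showing several different factor pairs within one perceptual frame (time multiplexing) raises the effective rank and improves fidelity, at the cost of display refresh rate.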
==Advantages==
* '''Smoother [[Parallax]]:''' Can provide more continuous motion parallax as the viewer's eye moves slightly within the eyebox, enhancing the sense of realism.
* '''Wider [[Eye Box]] (Potentially):''' Some light field display designs may offer a larger eyebox than conventional NEDs that rely on projecting into a small exit pupil, increasing user comfort and tolerance to HMD positioning. However, achieving a wide eyebox often involves trade-offs with other parameters such as resolution or brightness.
* '''Potential for [[Prescription]] Correction:''' The ability to precisely control light rays allows some light field approaches to computationally pre-distort the displayed image to correct for the viewer's refractive errors (such as [[myopia]] or [[hyperopia]]), potentially eliminating the need to wear prescription glasses with the HMD (a worked example follows this list).<ref name="Pamplona2012">Pamplona, V. F., Oliveira, M. M., Aliaga, D. G., & Raskar, R. (2012). Tailored displays to compensate for visual aberrations. ''ACM Transactions on Graphics'', 31(4), Article 99. doi:10.1145/2185520.2185577</ref><ref name="Lanman2013" />
* '''See-Through Capability:''' Like conventional NEDs, light field optics can be designed for [[Augmented reality|augmented]] or [[Mixed reality|mixed reality]] applications, allowing virtual light fields to be overlaid onto the real world.
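As a rough worked example of the prescription-correction idea (the numbers are illustrative, not from the cited papers): refractive error is measured in [[diopter]]s, the reciprocal of distance in meters. An uncorrected myope with a <math>-2.5</math> D error has a far point of

:<math>d_\text{far} = \frac{1}{|R|} = \frac{1}{2.5\,\text{D}} = 0.4\ \text{m},</math>

so only content appearing optically nearer than 0.4 m is in focus for them. A light field display with sufficient angular resolution can, in principle, shift its synthesized focal distances by <math>+2.5</math> D in software, moving the virtual scene into the viewer's focusable range without physical corrective lenses.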
==History==
Key research milestones and notable prototypes/products include:
* '''NVIDIA Near-Eye Light Field Display (2013):''' Presented by [[Doug Lanman|Lanman]] and Luebke (NVIDIA Research) at SIGGRAPH 2013, this influential prototype used [[Microlens Array|microlens arrays]] (Fresnel #630) placed over high-resolution [[OLED]] microdisplays (Sony ECX332A, 1280x720 per eye, 12 µm pixel pitch, ~2100 ppi). It demonstrated accurate focus cues that resolve the VAC within a thin form factor (1 cm eyepiece thickness). Users could refocus at different depths, although perceived spatial resolution was reduced (~146x78 pixels in the ~29°x16° FoV demo configuration; a back-of-the-envelope check appears at the end of this section). The prototype also showcased software-based [[Prescription|prescription correction]] and GPU-accelerated rendering.<ref name="Lanman2013" /><ref name="LightFieldForum2013"/><ref name="EngadgetNVIDIA2013">Steele, B. (2013, July 24). NVIDIA Research's near-eye light field display prototype eyes-on (video). ''Engadget''. Retrieved from https://www.engadget.com/2013-07-24-nvidia-research-near-eye-light-field-display-prototype.html</ref>
* '''Stanford / NVIDIA Light Field Stereoscope (2015):''' A follow-up collaboration presented at SIGGRAPH 2015, this HMD used two stacked [[LCD]] panels per eye, driven by non-negative light field factorization algorithms. Based on [[Charles Wheatstone|Wheatstone's]] stereoscope design, it aimed to provide more continuous focus cues over a larger depth range (0.2 m to infinity demonstrated) than the MLA approach, within a ~30° FoV. The goal was a practical, potentially inexpensive technology supporting focus cues in a wearable form factor, though the prototype's effective resolution per eye (approximately 640x800) was limited.<ref name="Huang2015" /><ref name="StanfordVid2015"/><ref name="WCCFTechNVIDIA2015">Pirzada, U. (2015, August 13). Nvidia working on next generation virtual reality device; the Near Eye Light Field Stereoscope, aims to bring it to the market by 2018. ''WCCFTech''. Retrieved from http://wccftech.com/nvidia-virtual-reality-device-light-field-2018/</ref> NVIDIA presented a revision at the VRLA Expo in 2016, emphasizing real-time CUDA-based light field computation and the potential to reduce motion sickness.<ref name="FudzillaNVIDIA2016">Worrel, J. (2016, January 30). Nvidia shows off its 'Light Field' VR headset at VRLA 2016. ''Fudzilla''. Retrieved from http://www.fudzilla.com/news/graphics/39762-nvidia-shows-off-its-light-field-vr-headset-at-vrla-2016</ref>
* '''Microsoft Research Holographic HMD (2017):''' Maimone, Georgiou, and Kollin demonstrated a holographic near-eye display using a high-resolution (2k x 2k) phase-only SLM. It achieved real-time hologram synthesis on a GPU at 90 Hz over an 80° FoV, showcasing the potential of holography for accurate wavefront reconstruction and focus cues, while also highlighting the associated computational challenges and speckle issues.<ref name="Maimone2017" />
* '''Avegant Light Field Technology (2017 onwards):''' Avegant demonstrated mixed reality display prototypes based on providing multiple simultaneous focal planes (reportedly 2–3 planes) within an approximately 40° FoV, aiming to address VAC in AR.<ref name="AvegantBlog2017">Avegant (2017, March 16). Avegant Introduces Light Field Technology For Mixed Reality Experiences. PR Newswire. https://www.prnewswire.com/news-releases/avegant-introduces-light-field-technology-for-mixed-reality-300423855.html</ref>
* '''[[Magic Leap]] One (2018):''' Launched as the "Creator Edition", this was the first widely marketed commercial AR HMD explicitly referencing light field concepts (using the term "photonic lightfield chip"). Its actual implementation relied on [[Waveguide (optics)|waveguides]] presenting imagery at two fixed focal planes (approximately 0.5 m and infinity), offering a limited form of multifocal display rather than a full light field, over a diagonal FoV of about 50°.<ref name="MagicLeapSpecs">Hamilton, I. (2018, August 15). Magic Leap One Creator Edition in-depth review. ''UploadVR''. Archived at https://web.archive.org/web/20180816062346/https://uploadvr.com/magic-leap-one-review/</ref>
* '''[[Meta Reality Labs Research]] Half-Dome Series (2018-2020):''' Meta (formerly Facebook) showcased a series of advanced varifocal VR research prototypes. Half-Dome 1 used mechanical actuation to move the display. Half-Dome 3 employed an electronic solution using a stack of liquid crystal lenses capable of rapidly switching between 64 discrete focal planes, combined with [[eye tracking]] to present the correct focus based on gaze, achieving a wide FoV (~140°).<ref name="AbrashBlog2019">Abrash, M. (2019, September 25). Oculus Connect 6 Keynote [Video]. YouTube. Retrieved from https://www.youtube.com/watch?v=7YIGT13bdXw (discussion of the Half-Dome prototypes)</ref>
* '''CREAL (2020 onwards):''' This Swiss startup focuses on developing compact light field display engines, primarily for AR glasses. Their approach often involves time-multiplexed projection (using sources such as micro-LEDs) or scanning combined with holographic combiners to generate many views, aiming for continuous focus cues (for example 0.15 m to infinity demonstrated) within a ~50-60° FoV in a form factor suitable for eyeglasses.<ref name="CrealWebsite">CREAL (n.d.). Technology. Retrieved from https://creal.com/technology/</ref>
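The spatial-versus-angular trade-off of the 2013 NVIDIA prototype can be sanity-checked with simple arithmetic. The figure of roughly nine panel pixels per lenslet per dimension below is inferred from the numbers reported above, not stated directly in the source, so treat it as an assumption:

<syntaxhighlight lang="python">
# Back-of-the-envelope check of the MLA resolution trade-off.
panel = (1280, 720)        # Sony ECX332A pixels per eye
pixels_per_lenslet = 9     # assumed ~9x9 angular samples per microlens
spatial = tuple(p // pixels_per_lenslet for p in panel)
print(spatial)             # (142, 80) -- near the reported ~146x78
</syntaxhighlight>

Each microlens spends an entire block of panel pixels on a single spatial sample, which is why the perceived image resolution drops by roughly the per-lenslet pixel count in each dimension.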