Light field display
'''Light Field Display''' (LFD) is an advanced visualization technology designed to reproduce a [[light field]], the distribution of light rays in [[3D space]], including their intensity and direction.<ref name="WetzsteinPlenoptic">G. Wetzstein, “Computational Displays: Achieving the Full Plenoptic Function,” ACM SIGGRAPH 2020 Courses, Association for Computing Machinery, 2020. Available: https://s2020.siggraph.org/program/?sess=computational-displays-achieving-the-full-plenoptic-function (accessed 30 Apr 2025).</ref> Unlike conventional 2D displays or [[stereoscopic display|stereoscopic 3D]] systems that present flat images or fixed viewpoints requiring glasses, light field displays aim to recreate how light naturally propagates from a real scene.<ref name="WetzsteinTensor">Wetzstein, G., Lanman, D., Hirsch, M., & Raskar, R. (2012). Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting. ACM Transactions on Graphics, 31(4), Article 80. doi:10.1145/2185520.2185576</ref> This allows viewers to perceive genuine [[depth]], [[parallax]] (both horizontal and vertical), and perspective changes without special eyewear (in many implementations).<ref name="LookingGlass27">Looking Glass Factory. Looking Glass 27″ Light Field Display. Retrieved from https://lookingglassfactory.com/looking-glass-27</ref><ref name="LeiaVerge">Hollister, S. (2024, January 19). Leia is building a 3D empire on the back of the worst phone we've ever reviewed. The Verge. Retrieved from https://www.theverge.com/24036574/leia-glasses-free-3d-ces-2024</ref>

This technology is considered crucial for the future of [[Virtual Reality]] (VR) and [[Augmented Reality]] (AR) because it can directly address the [[Vergence-accommodation conflict]] (VAC).<ref name="WiredVAC">Zhang, S. (2015, August 11). The Obscure Neuroscience Problem That's Plaguing VR. WIRED. Retrieved from https://www.wired.com/2015/08/obscure-neuroscience-problem-thats-plaguing-vr</ref><ref name="VACReview">Y. Zhou, J. Zhang, F. Fang, “Vergence-accommodation conflict in optical see-through display: Review and prospect,” ''Results in Optics'', vol. 5, p. 100160, 2021, doi:10.1016/j.rio.2021.100160.</ref> By providing correct [[focal cues]] that match the [[vergence]] information, LFDs promise more immersive, realistic, and visually comfortable experiences, reducing eye strain and [[Virtual Reality Sickness|simulator sickness]] often associated with current HMDs.<ref name="CrealWebsite">CREAL. Light-field: Seeing Virtual Worlds Naturally. Retrieved from https://creal.com/technology/</ref>
== Definition and Principles ==
A light field display aims to replicate the [[Plenoptic Function]], a theoretical function describing the complete set of light rays passing through every point in space, in every direction, potentially across time and wavelength.<ref name="WetzsteinPlenoptic"/> In practice, light field displays generate a discretized (sampled) approximation of the relevant 4D subset of this function (typically spatial position and angular direction).<ref name="WetzsteinPlenoptic"/><ref name="Huang2014EyeglassesFree">Huang, F. C., Wetzstein, G., Barsky, B. A., & Raskar, R. (2014). Eyeglasses-free display: Towards correcting visual aberrations with computational light field displays. ACM Transactions on Graphics, 33(4), Article 59. doi:10.1145/2601097.2601122</ref>
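The seven-parameter plenoptic function <math>P(x, y, z, \theta, \phi, \lambda, t)</math> gives the radiance at every position, in every direction, for every wavelength and time. At a fixed instant and wavelength, and in free space (where radiance is constant along a ray), it reduces to the four-dimensional light field that these displays sample, commonly parameterized by the intersections of each ray with two parallel reference planes:

:<math>L(u, v, s, t)</math>

where <math>(u, v)</math> and <math>(s, t)</math> are plane coordinates (here <math>t</math> denotes a plane coordinate rather than time, following the usual light-field convention). A display that can independently set the radiance of each sampled <math>(u, v, s, t)</math> ray therefore controls both the position and the direction of the light it emits.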
By controlling the direction as well as the color and intensity of emitted light rays, these displays allow the viewer's eyes to naturally focus ([[accommodation]]) at different depths within the displayed scene, matching the depth cues provided by binocular vision ([[vergence]]).<ref name="CrealWebsite"/> This recreation allows users to experience:
* Full motion [[parallax]] (horizontal and vertical look-around).<ref name="LeiaVerge"/>
* Accurate [[occlusion]] cues.
* Natural [[focal cues]], mitigating the [[Vergence-accommodation conflict]].<ref name="WiredVAC"/>
* [[Specular highlights]] and realistic reflections that change with viewpoint.
* Often, viewing without specialized eyewear (especially in non-headset formats).<ref name="LookingGlass27"/>
* '''Glasses-Free 3D:''' Many LFD formats (especially desktop and larger) offer autostereoscopic viewing for multiple users simultaneously, each seeing the correct perspective.<ref name="LookingGlass27"/><ref name="LeiaVerge"/>
* '''Full Parallax:''' True LFDs provide both horizontal and vertical parallax, unlike earlier [[autostereoscopic display|autostereoscopic]] technologies that often limited parallax to side-to-side movement.<ref name="LeiaVerge"/>
* '''Accommodation-Convergence Conflict Resolution:''' A primary driver for VR/AR, LFDs can render virtual objects at appropriate focal distances, aligning accommodation and vergence to significantly improve visual comfort and realism.<ref name="CrealWebsite"/><ref name="Lanman2020NearEyeCourse">D. Lanman and D. Luebke, “Near-Eye Light Field Displays for Virtual and Augmented Reality,” ACM SIGGRAPH 2020 Courses, Association for Computing Machinery, 2020. Available: https://s2020.siggraph.org/program/?sess=near-eye-light-field-displays (accessed 30 Apr 2025).</ref>
* '''Computational Requirements:''' Generating and processing the massive amount of data (multiple views or directional light information) needed for LFDs requires significant [[Graphics processing unit|GPU]] power and bandwidth.<ref name="LeiaVerge"/><ref name="Huang2014EyeglassesFree"/>
* '''Resolution Trade-offs:''' A fundamental challenge involves balancing spatial resolution (image sharpness), angular resolution (smoothness of parallax/number of views), [[Field of view|field of view (FoV)]], and depth of field.<ref name="Huang2014EyeglassesFree"/><ref name="Lanman2020NearEyeCourse"/> This is often referred to as the spatio-angular resolution trade-off; a simple numeric sketch of the trade-off follows this list.
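To make the spatio-angular trade-off concrete, the sketch below estimates the effective spatial and angular resolution of a hypothetical microlens-array display from its panel resolution and the number of pixels devoted to each lenslet. All figures are illustrative assumptions, not the specifications of any particular product.

<syntaxhighlight lang="python">
# Illustrative spatio-angular trade-off for a hypothetical microlens-array
# (MLA) light field display.  Every number below is an assumed example value.

panel_px_x, panel_px_y = 7680, 4320   # assumed native panel resolution (8K)
pixel_pitch_mm = 0.04                 # assumed pixel pitch
views_x, views_y = 8, 8               # panel pixels devoted to each lenslet

# Spatial resolution: each lenslet acts as one "directional pixel", so image
# sharpness drops by the number of views along each axis.
spatial_res_x = panel_px_x // views_x   # 960 lenslets horizontally
spatial_res_y = panel_px_y // views_y   # 540 lenslets vertically

# Angular resolution: number of distinct view directions per lenslet.
angular_samples = views_x * views_y     # 64 views

# Lenslet pitch follows directly from pixel pitch and views per lenslet.
lenslet_pitch_mm = pixel_pitch_mm * views_x  # 0.32 mm

print(f"Spatial resolution : {spatial_res_x} x {spatial_res_y} lenslets")
print(f"Angular resolution : {views_x} x {views_y} = {angular_samples} views")
print(f"Lenslet pitch      : {lenslet_pitch_mm:.2f} mm")

# Doubling views_x and views_y quadruples the number of view directions but
# halves the spatial resolution on each axis: the trade-off described above.
</syntaxhighlight>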
== Types of Light Field Displays ==
* '''Near-Eye Light Field Displays:''' Integrated into VR/AR [[Head-mounted display|HMDs]]. Primarily focused on solving the VAC for comfortable, realistic close-up interactions.<ref name="CrealWebsite"/><ref name="Lanman2020NearEyeCourse"/> Examples include research prototypes from NVIDIA<ref name="NvidiaNELD"/> and academic groups,<ref name="Huang2015Stereoscope">Huang, F. C., Chen, K., & Wetzstein, G. (2015). The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues. ACM Transactions on Graphics, 34(4), Article 60. doi:10.1145/2766943</ref> and commercial modules from companies like [[CREAL]].<ref name="CrealRoadToVR"/> Often utilize MLAs, stacked LCDs, or waveguide/diffractive approaches.<ref name="Lanman2020NearEyeCourse"/><ref name="CrealRoadToVR"/>
* '''Tabletop/Desktop Displays:''' Provide glasses-free 3D for individual or small group viewing. Used for professional visualization, gaming, communication, and content creation.<ref name="LookingGlass27"/><ref name="LeiaVerge"/> [[Looking Glass Factory]] is a key player here, offering various sizes like the Looking Glass Portrait and the larger Looking Glass 27".<ref name="LookingGlass27"/><ref name="LookingGlassSoftware">Looking Glass Factory. Software Overview. Retrieved from https://lookingglassfactory.com/software</ref> [[Leia Inc.]] also targets this market with monitor and mobile displays.<ref name="LeiaVerge"/> Typically use MLA or parallax-barrier technology (a minimal view-interleaving sketch follows this list).
* '''Large Format / Tiled Displays:''' Aimed at creating large-scale, immersive "holographic" experiences without glasses for public venues, command centers, or collaborative environments.<ref name="ForbesLightField"/><ref name="LightFieldLabSolidLightPR">Light Field Lab Press Release (2021, October 7). Light Field Lab Demonstrates SolidLight™, the Highest Resolution Holographic Display Platform Ever Designed. Retrieved from https://www.lightfieldlab.com/press/light-field-lab-demonstrates-solidlight</ref> [[Light Field Lab]]'s SolidLight™ platform uses modular panels designed to be tiled into large video walls.<ref name="ForbesLightField"/><ref name="LightFieldLabSolidLightPR"/> Sony's ELF-SR series (Spatial Reality Display) uses high-speed vision sensors and a micro-optical lens for a single user but demonstrates high-fidelity desktop light field effects.<ref name="SonyELFSR2">Sony Professional. Sony's Spatial Reality Display. Retrieved from https://pro.sony/ue_US/products/professional-displays/elf-sr2</ref>
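As an illustration of how MLA- and barrier-based displays drive their panels, the sketch below interleaves a set of pre-rendered views into a single native panel image, assigning one sub-pixel column per view direction under each lenslet. The straight column-wise mapping and the NumPy helper are simplifying assumptions for illustration; shipping displays use calibrated (often slanted) lenslet-to-pixel mappings and their own rendering pipelines.

<syntaxhighlight lang="python">
# Minimal sketch of interleaving pre-rendered views for a horizontal-parallax
# lenslet or parallax-barrier display.  The straight column-wise mapping used
# here is a simplifying assumption, not any vendor's calibrated mapping.

import numpy as np

def interleave_views(views: np.ndarray) -> np.ndarray:
    """Interleave an array of shape (n_views, H, W, 3) into a native panel
    image of shape (H, W * n_views, 3), one sub-pixel column per view
    direction under each lenslet."""
    n_views, height, width, channels = views.shape
    native = np.empty((height, width * n_views, channels), dtype=views.dtype)
    for v in range(n_views):
        # Columns v, v + n_views, v + 2*n_views, ... sit at the same offset
        # under successive lenslets, so they all display view v.
        native[:, v::n_views, :] = views[v]
    return native

# Example: eight synthetic 270x480 views interleaved into one 270x3840 frame.
demo_views = np.random.rand(8, 270, 480, 3).astype(np.float32)
panel_image = interleave_views(demo_views)
print(panel_image.shape)  # (270, 3840, 3)
</syntaxhighlight>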
==Applications==
===Applications in VR and AR===
* '''Enhanced Realism and Immersion:''' Correct depth cues make virtual objects appear more solid and stable, improving the sense of presence, especially for near-field interactions.<ref name="CrealRoadToVR"/>
* '''Improved Visual Comfort:''' Mitigating the VAC reduces eye strain, fatigue, and nausea, enabling longer and more comfortable VR/AR sessions.<ref name="WiredVAC"/><ref name="CrealWebsite"/>
* '''Natural Interaction:''' Accurate depth perception facilitates intuitive hand-eye coordination for manipulating virtual objects.<ref name="CrealRoadToVR"/>