Light field display
- See also: Terms and Technical Terms
Light Field Display (LFD) is an advanced visualization technology designed to reproduce a light field, the distribution of light rays in 3D space, including their intensity and direction.[1] Unlike conventional 2D displays or stereoscopic 3D systems that present flat images or fixed viewpoints requiring glasses, light field displays aim to recreate how light naturally propagates from a real scene.[2] This allows viewers to perceive genuine depth, parallax (both horizontal and vertical), and perspective changes without special eyewear (in many implementations).[3][4]
This technology is considered crucial for the future of Virtual Reality (VR) and Augmented Reality (AR) because it can directly address the Vergence-accommodation conflict (VAC).[5][6] By providing correct focal cues that match the vergence information, LFDs promise more immersive, realistic, and visually comfortable experiences, reducing eye strain and simulator sickness often associated with current HMDs.[7]
Definition and Principles
A light field display aims to replicate the Plenoptic Function, a theoretical function describing the complete set of light rays passing through every point in space, in every direction, potentially across time and wavelength.[1] In practice, light field displays generate a discretized (sampled) approximation of the relevant 4D subset of this function (typically spatial position and angular direction).[1][8]
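In the common two-plane ("light slab") parameterization used in light field rendering, each ray is indexed by its intersections with two parallel reference planes, which reduces the display's task to sampling a 4D function. The notation below follows the standard convention of the rendering literature rather than any single cited source:

```latex
% 4D light field in two-plane form: the ray through (u,v) on one
% reference plane and (s,t) on the other carries radiance L.
L = L(u, v, s, t)

% The full plenoptic function adds viewing position, wavelength and time:
P = P(x, y, z, \theta, \phi, \lambda, \tau)
```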
By controlling the direction as well as the color and intensity of emitted light rays, these displays allow the viewer's eyes to naturally focus (accommodation) at different depths within the displayed scene, matching the depth cues provided by binocular vision (vergence).[7] This recreation allows users to experience:
- Full motion parallax (horizontal and vertical look-around).[4]
- Accurate occlusion cues.
- Natural focal cues, mitigating the Vergence-accommodation conflict (quantified in the worked example after this list).[5]
- Specular highlights and realistic reflections that change with viewpoint.
- Often, viewing without specialized eyewear (especially in non-headset formats).[3]
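The mismatch is conveniently measured in diopters (inverse meters). The following is a standard illustrative calculation, with numbers chosen for clarity rather than taken from the cited sources: a stereoscopic HMD whose optics place the focal plane at 2 m, showing an object whose binocular disparity puts it at 0.5 m, forces accommodation and vergence apart by

```latex
\Delta = \left| \frac{1}{d_{\mathrm{vergence}}} - \frac{1}{d_{\mathrm{focal}}} \right|
       = \left| \frac{1}{0.5\,\mathrm{m}} - \frac{1}{2\,\mathrm{m}} \right|
       = 1.5\ \mathrm{D}
```

a conflict well outside the comfort zones (typically a few tenths of a diopter) discussed in the VAC literature. A light field display drives Δ toward zero by letting the eye focus at each object's rendered depth.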
Key Characteristics
- Glasses-Free 3D: Many LFD formats (especially desktop and larger) offer autostereoscopic viewing for multiple users simultaneously, each seeing the correct perspective.[3][4]
- Full Parallax: True LFDs provide both horizontal and vertical parallax, unlike earlier autostereoscopic technologies that often limited parallax to side-to-side movement.[4]
- Accommodation-Convergence Conflict Resolution: A primary driver for VR/AR, LFDs can render virtual objects at appropriate focal distances, aligning accommodation and vergence to significantly improve visual comfort and realism.[7][9]
- Computational Requirements: Generating and processing the massive amount of data (multiple views or directional light information) needed for LFDs requires significant GPU power and bandwidth.[4][8]
- Resolution Trade-offs: A fundamental challenge involves balancing spatial resolution (image sharpness), angular resolution (smoothness of parallax/number of views), field of view (FoV), and depth of field.[8][9] This is often referred to as the spatio-angular resolution trade-off (a back-of-envelope example follows this list).
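The pixel-budget arithmetic behind this trade-off is simple; the numbers below are illustrative, not the specification of any cited device:

```latex
N_{\mathrm{spatial}} \times N_{\mathrm{angular}} = N_{\mathrm{panel}}
\quad\Longrightarrow\quad
\frac{3840 \times 2160}{8 \times 8} = 480 \times 270 \ \text{effective spatial pixels}
```

Carving an 8×8 grid of views out of a 4K panel leaves roughly standard-definition sharpness per view, which is why ultra-high-density panels and compressive (multilayer, time-multiplexed) designs are active research directions.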
History and Development
Early Concepts and Foundations
The underlying concept can be traced back to Michael Faraday's 1846 suggestion of light as a field[10] and was mathematically formalized regarding radiance transfer by Andrey Gershun in 1936.[11] The practical groundwork for reproducing light fields was laid by Gabriel Lippmann's 1908 concept of Integral Photography ("photographie intégrale"), which used an array of small lenses to capture and reproduce light fields.[12] The modern computational understanding was significantly advanced by Adelson and Bergen's formalization of the Plenoptic Function in 1991.[13]
Key Development Milestones
- 1908: Gabriel Lippmann introduces integral photography.[12]
- 1936: Andrey Gershun formalizes the light field mathematically.[11]
- 1991: Adelson and Bergen formalize the plenoptic function.[13]
- 1996: Levoy and Hanrahan publish work on Light Field Rendering.[14]
- 2004-2008: Early real-time computational light field display systems demonstrated, such as the scalable end-to-end 3D TV system.[16]
- 2005: Stanford Multi-camera Array demonstrated for light field capture.[15]
- 2010-2013: Introduction of multilayer, compressive, and tensor light field display concepts.[17][2]
- 2013: NVIDIA demonstrates near-eye light field display prototype for VR.[18]
- 2015 onwards: Emergence of commercial LFD products (e.g., Looking Glass Factory, Leia Inc.) and advanced prototypes (e.g., Sony, CREAL, Light Field Lab).[3][4][19]
Technical Implementations (How They Work)
Light field displays use various techniques to generate the 4D light field:
- Microlens Arrays (MLAs): A high-resolution display panel (LCD or OLED) is overlaid with an array of tiny lenses. Each lenslet directs light from the pixels beneath it into a specific set of directions, creating different views for different observer positions.[8][9] This is a common approach derived from integral imaging.[8] The trade-off is explicit: spatial resolution is determined by the lenslet count, angular resolution by the pixels per lenslet (a geometric sketch of this mapping follows this list).[8]
- Multilayer Displays (Stacked LCDs): Several layers of transparent display panels (typically LCDs) are stacked with air gaps. By computationally optimizing the opacity patterns on each layer, the display acts as a multiplicative spatial light modulator, shaping light from a backlight into a complex light field (the multiplicative model is written out after this list).[2][17] These are often explored for near-eye displays.[9]
- Directional Backlighting: A standard display panel (e.g., LCD) is combined with a specialized backlight that emits light in controlled directions. The backlight might use another LCD panel coupled with optics like lenticular sheets to achieve directionality.[20]
- Projector Arrays: Multiple projectors illuminate a screen (often lenticular or diffusive). Each projector provides a different perspective view, and their combined output forms the light field.[4]
- Parallax Barriers: An opaque layer with precisely positioned slits or apertures is placed in front of or between display panels. The barrier blocks light selectively, allowing different pixels to be seen from different angles.[21] Often less light-efficient than MLAs.
- Waveguide Optics: Light is injected into thin optical waveguides (similar to those in some AR glasses) and then coupled out at specific points with controlled directionality, often using diffractive optical elements (DOEs) or gratings.[22][23] This is explored for compact AR/VR systems.
- Time-Multiplexed Displays: Different views or directional illumination patterns are presented rapidly in sequence; if cycled faster than human perception can follow, this creates the illusion of a continuous light field. Time multiplexing can be combined with other techniques such as directional backlighting.[24]
- Holographic and Diffractive Approaches: While holographic displays reconstruct wavefronts through diffraction, some LFDs utilize holographic optical elements (HOEs) or related diffractive principles to achieve high angular resolution and potentially overcome MLA limitations.[25] Some companies use "holographic" terminology for their high-density LFDs.[26]
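A minimal sketch of the microlens-array pixel-to-ray mapping referenced above, under the thin-lens/pinhole approximation in which a pixel's offset from its lenslet's optical axis becomes a collimated output direction. All parameters are assumed, illustrative values, not the specification of any real display:

```python
import numpy as np

# Illustrative MLA display geometry (assumed values, not a real product).
panel_res = (2160, 3840)   # panel pixels (rows, cols)
pixel_pitch = 0.05         # mm
lenslet_pitch = 0.5        # mm -> 10 x 10 pixels behind each lenslet
focal_length = 2.0         # mm, lenslet-to-panel gap ~ focal length

px_per_lens = int(lenslet_pitch / pixel_pitch)  # angular samples per axis

rows, cols = np.arange(panel_res[0]), np.arange(panel_res[1])
c, r = np.meshgrid(cols, rows)

# Spatial sample: which lenslet the pixel sits under.
lens_r, lens_c = r // px_per_lens, c // px_per_lens

# Angular sample: pixel offset (mm) from that lenslet's optical axis.
off_r = (r % px_per_lens - (px_per_lens - 1) / 2) * pixel_pitch
off_c = (c % px_per_lens - (px_per_lens - 1) / 2) * pixel_pitch

# A pixel displaced by `off` behind a lenslet of focal length f emits
# a collimated beam at angle theta = atan(off / f).
theta_y = np.degrees(np.arctan2(off_r, focal_length))
theta_x = np.degrees(np.arctan2(off_c, focal_length))

print(f"spatial grid: {lens_r.max() + 1} x {lens_c.max() + 1} lenslets")
print(f"angular samples per lenslet: {px_per_lens} x {px_per_lens}")
print(f"view cone half-angle: +/-{theta_x.max():.1f} degrees")
```

The printout makes the spatio-angular split tangible: the 3840×2160 panel collapses to a 216×384 grid of lenslets, with the remaining 10×10 pixels behind each lenslet spent on directions.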
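For the multilayer approach, the tensor-display literature models the emitted light field multiplicatively: each ray is attenuated by the transmittance of the pixel it crosses on every layer, and the layer patterns are recovered by numerical optimization. A simplified statement of that model, with our own notation:

```latex
\tilde{L}(u, s) \;=\; \prod_{k=1}^{K} t_k\big(x_k(u, s)\big),
\qquad
\{t_k\} = \operatorname*{arg\,min}_{t_k \in [0,1]} \big\lVert L - \tilde{L} \big\rVert^{2}
```

where ray (u, s) crosses layer k at position x_k(u, s) and t_k is that layer's transmittance pattern. Time-multiplexing several pattern sets, which the eye averages, raises the achievable rank of the approximation; this is the "tensor" in tensor displays.[2]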
Types of Light Field Displays
- Near-Eye Light Field Displays: Integrated into VR/AR HMDs. Primarily focused on solving the VAC for comfortable, realistic close-up interactions.[7][9] Examples include research prototypes from NVIDIA[18] and academic groups,[27] and commercial modules from companies like CREAL.[19] Often utilize MLAs, stacked LCDs, or waveguide/diffractive approaches.[9][19]
- Tabletop/Desktop Displays: Provide glasses-free 3D for individual or small group viewing. Used for professional visualization, gaming, communication, and content creation.[3][4] Looking Glass Factory is a key player here, offering various sizes like the Looking Glass Portrait and the larger Looking Glass 27".[3][28] Leia Inc. also targets this market with monitor and mobile displays.[4] Typically use MLA or barrier technology.
- Large Format / Tiled Displays: Aimed at creating large-scale, immersive "holographic" experiences without glasses for public venues, command centers, or collaborative environments.[26][29] Light Field Lab's SolidLight™ platform uses modular panels designed to be tiled into large video walls.[26][29] Sony's ELF-SR series (Spatial Reality Display) uses high-speed vision sensors and a micro-optical lens for a single user but demonstrates high-fidelity desktop light field effects.[30]
Comparison with Other 3D Display Technologies
| Technology | Glasses Required | Natural Focal Cues (Solves VAC) | Full Motion Parallax | Typical View Field | Key Trade-offs |
|---|---|---|---|---|---|
| Light Field Display | No (often) | Yes | Yes | Limited to wide | Spatio-angular resolution trade-off, computation needs |
| Stereoscopic Displays | Yes | No | No (requires head tracking) | Wide | VAC causes fatigue, requires glasses |
| Autostereoscopic (non-LFD) | No | No | Limited (often horizontal only) | Limited | Reduced resolution per view, fixed viewing zones |
| Volumetric Display | No | Yes | Yes | 360° potential | Limited resolution, transparency/opacity issues, bulk |
| Holographic Displays | No | Yes | Yes | Often limited | Extreme computational demands, speckle, typically small size |
LFDs offer a compelling balance, providing natural depth cues without glasses (in many formats) and resolving the VAC, but face challenges in achieving high resolution across both spatial and angular domains simultaneously.[8][9]
Content Creation
Creating content compatible with LFDs requires capturing or generating directional view information:
- Light Field Cameras / Plenoptic Cameras: Capture both intensity and direction of incoming light using specialized sensors (often with MLAs).[1] The captured data can be processed for LFD playback.
- Computer Graphics Rendering: Standard 3D scenes built in engines like Unity or Unreal Engine can be rendered from multiple viewpoints to generate the necessary data.[26][28] Specialized light field rendering techniques, potentially using ray tracing or neural methods like Neural Radiance Fields (NeRF), are employed (a minimal multi-view camera sweep is sketched after this list).[26][31]
- Photogrammetry and 3D Scanning: Real-world objects/scenes captured as 3D models can serve as input for rendering light field views.
- Existing 3D Content Conversion: Plugins and software tools (e.g., provided by Looking Glass Factory) allow conversion of existing 3D models, animations, or even stereoscopic content for LFD viewing.[28]
- Focal Stack Conversion: Research explores converting image stacks captured at different focal depths into light field representations, particularly for multi-layer displays.[2]
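As a minimal illustration of the multi-view sweep behind such renders, assuming a horizontal-parallax display: cameras are shifted laterally and their frusta sheared so every view converges on a shared zero-parallax plane. The function, its parameters, and the 45-view/40° example are hypothetical, not any engine's or vendor's actual API:

```python
import math
from dataclasses import dataclass

@dataclass
class ViewCamera:
    x_offset: float      # lateral camera shift (scene units)
    frustum_skew: float  # horizontal projection shear for convergence

def light_field_views(n_views: int, cone_deg: float, focal_dist: float):
    """Cameras for a horizontal-parallax light field render.

    Each camera is displaced along x and its frustum sheared so that
    all views converge on the plane at `focal_dist` (zero parallax).
    """
    half_cone = math.radians(cone_deg) / 2
    views = []
    for i in range(n_views):
        # t sweeps -1..1 across the viewing cone
        t = 2 * i / (n_views - 1) - 1 if n_views > 1 else 0.0
        x = math.tan(t * half_cone) * focal_dist
        views.append(ViewCamera(x_offset=x, frustum_skew=-x / focal_dist))
    return views

# e.g. 45 views over a 40-degree cone, converging 2 units in front
for cam in light_field_views(45, 40.0, 2.0)[:3]:
    print(cam)
```

Each returned camera renders one view; the views are then tiled (for example into the multi-view "quilt" layout used by some desktop displays) or handed to the display's own packing step.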
Applications
Applications in VR and AR
- Enhanced Realism and Immersion: Correct depth cues make virtual objects appear more solid and stable, improving the sense of presence, especially for near-field interactions.[19]
- Improved Visual Comfort: Mitigating the VAC reduces eye strain, fatigue, and nausea, enabling longer and more comfortable VR/AR sessions.[5][7]
- Natural Interaction: Accurate depth perception facilitates intuitive hand-eye coordination for manipulating virtual objects.[19]
- Seamless AR Integration: Allows virtual elements to appear more cohesively integrated with the real world at correct focal depths.
- Vision Correction: Near-eye LFDs can potentially pre-distort the displayed light field to correct for the user's refractive errors, eliminating the need for prescription glasses within the headset.[19][27]
Other Applications
- Medical Imaging and Visualization: Intuitive visualization of complex 3D scans (CT, MRI) for diagnostics, surgical planning, and education.[32]
- Scientific Visualization: Analyzing complex datasets in fields like fluid dynamics, molecular modeling, geology.[33]
- Digital Signage and Advertising: Eye-catching glasses-free 3D displays for retail and public spaces.[3]
- Product Design and Engineering (CAD/CAE): Collaborative visualization and review of 3D models.[32]
- Entertainment and Gaming: Immersive experiences in arcades, museums, theme parks, and potentially future home entertainment.[26]
- Automotive Displays: Heads-up displays (HUDs) or dashboards presenting information at appropriate depths.[21]
- Telepresence and Communication: Creating realistic, life-sized 3D representations of remote collaborators, like Google's Project Starline concept.[34]
- Microscopy: Viewing microscopic samples with natural depth perception.[1]
Challenges and Limitations
- Spatio-Angular Resolution Trade-off: Increasing the number of views (angular resolution) often decreases the perceived sharpness (spatial resolution) for a fixed display pixel count.[8][9]
- Computational Complexity & Bandwidth: Rendering, compressing, and transmitting the massive datasets for real-time LFDs is extremely demanding on GPUs and data infrastructure.[4][26]
- Manufacturing Complexity and Cost: Producing precise optical components like high-density MLAs, perfectly aligned multi-layer stacks, or large-area waveguide structures is challenging and costly.[26]
- Form Factor and Miniaturization: Integrating complex optics and electronics into thin, lightweight, and power-efficient near-eye devices remains difficult.[9][19]
- Limited Field of View (FoV): Achieving wide FoV comparable to traditional VR headsets while maintaining high angular resolution is challenging.[9]
- Brightness and Efficiency: Techniques like MLAs and parallax barriers inherently block or redirect light, reducing overall display brightness and power efficiency.
- Content Ecosystem: The workflow for creating, distributing, and viewing native light field content is still developing compared to standard 2D or stereoscopic 3D.[28]
- Visual Artifacts: Potential issues include moiré effects (from periodic structures like MLAs), ghosting/crosstalk between views, and latency.
Key Players and Commercial Landscape
Several companies and research groups are active in LFD development:
- Looking Glass Factory: Leader in desktop/tabletop glasses-free LFDs (Looking Glass Portrait, 27", 65") for creators and enterprises.[3][28]
- Leia Inc.: Develops LFD technology for mobile devices (e.g., Lume Pad), monitors, and automotive, often switchable between 2D and 3D LFD modes.[4] It acquired competitor Dimenco.
- CREAL: Swiss startup focused on compact near-eye LFD modules for AR/VR glasses aiming to solve VAC.[19]
- Light Field Lab: Developing large-scale, modular "holographic" LFD panels (SolidLight™) based on proprietary waveguide technology.[22][29]
- Sony: Produces the Spatial Reality Display (ELF-SR series), a high-fidelity desktop LFD using eye-tracking.[30]
- Avegant: Develops light field light engines, particularly for AR, focusing on VAC resolution.[35]
- Holografika: Offers glasses-free 3D LFD systems for professional applications.[36]
- Japan Display Inc. (JDI): Demonstrated prototype LFDs for various applications.[37]
- NVIDIA: Foundational research in near-eye LFDs and ongoing GPU development crucial for LFD rendering.[18][9]
- Google: Research in LFDs, demonstrated through concepts like Project Starline.[34]
- Academic Research: Institutions like MIT Media Lab, Stanford University, University of Arizona, and others continue to push theoretical and practical boundaries.[2][8][9]
Future Directions and Research
- Computational Display Optimization: Using AI and sophisticated algorithms to optimize patterns on multi-layer displays or directional backlights for better quality with fewer resources.[2] Using neural representations (like NeRF) for efficient light field synthesis and compression.[31]
- Varifocal and Multifocal Integration: Hybrid approaches combining LFD principles with dynamic focus elements (liquid lenses, deformable mirrors) to achieve focus cues potentially more efficiently than pure LFDs.[9][24]
- Miniaturization for Wearables: Developing ultra-thin, efficient components using metasurfaces, holographic optical elements (HOEs), advanced waveguides, and MicroLED displays for integration into consumer AR/VR glasses.[19][25]
- Improved Content Capture and Creation Tools: Advancements in plenoptic cameras, AI-driven view synthesis, and streamlined software workflows.[31]
- Higher Resolution and Efficiency: Addressing the spatio-angular trade-off and improving light efficiency through new materials, optical designs (e.g., polarization multiplexing[38]), and display technologies.
See Also
- Light Field
- Plenoptic Function
- Integral imaging
- Autostereoscopic display
- Stereoscopy
- Holographic display
- Volumetric Display
- Varifocal display
- Vergence-accommodation conflict
- Virtual Reality
- Augmented Reality
- Head-mounted display
- Microlens array
- Spatial Light Modulator
References
1. Wetzstein, G. (2020). Computational Displays: Achieving the Full Plenoptic Function. ACM SIGGRAPH 2020 Courses. Association for Computing Machinery. Retrieved from https://s2020.siggraph.org/program/?sess=computational-displays-achieving-the-full-plenoptic-function (accessed 30 Apr 2025).
2. Wetzstein, G., Lanman, D., Hirsch, M., & Raskar, R. (2012). Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting. ACM Transactions on Graphics, 31(4), Article 80. doi:10.1145/2185520.2185576
3. Looking Glass Factory. Looking Glass 27″ Light Field Display. Retrieved from https://lookingglassfactory.com/looking-glass-27
4. Hollister, S. (2024, January 19). Leia is building a 3D empire on the back of the worst phone we've ever reviewed. The Verge. Retrieved from https://www.theverge.com/24036574/leia-glasses-free-3d-ces-2024
5. Zhang, S. (2015, August 11). The Obscure Neuroscience Problem That's Plaguing VR. WIRED. Retrieved from https://www.wired.com/2015/08/obscure-neuroscience-problem-thats-plaguing-vr
6. Zhou, Y., Zhang, J., & Fang, F. (2021). Vergence-accommodation conflict in optical see-through display: Review and prospect. Results in Optics, 5, 100160. doi:10.1016/j.rio.2021.100160
7. CREAL. Light-field: Seeing Virtual Worlds Naturally. Retrieved from https://creal.com/technology/
8. Huang, F. C., Wetzstein, G., Barsky, B. A., & Raskar, R. (2014). Eyeglasses-free display: Towards correcting visual aberrations with computational light field displays. ACM Transactions on Graphics, 33(4), Article 59. doi:10.1145/2601097.2601122
9. Lanman, D., & Luebke, D. (2020). Near-Eye Light Field Displays for Virtual and Augmented Reality. ACM SIGGRAPH 2020 Courses. Association for Computing Machinery. Retrieved from https://s2020.siggraph.org/program/?sess=near-eye-light-field-displays (accessed 30 Apr 2025).
10. Princeton University Press. Faraday, Maxwell, and the Electromagnetic Field: How Two Men Revolutionized Physics. Retrieved from https://press.princeton.edu/books/hardcover/9780691161664/faraday-maxwell-and-the-electromagnetic-field
11. Gershun, A. (1936). The Light Field. Moscow. (Translated by P. Moon & G. Timoshenko, 1939, Journal of Mathematics and Physics, XVIII, 51–151).
12. Lippmann, G. (1908). Épreuves réversibles donnant la sensation du relief. Journal de Physique Théorique et Appliquée, 7(1), 821–825. doi:10.1051/jphystap:019080070082100
13. Adelson, E. H., & Bergen, J. R. (1991). The plenoptic function and the elements of early vision. In M. Landy & J. A. Movshon (Eds.), Computational Models of Visual Processing (pp. 3–20). MIT Press.
14. Levoy, M., & Hanrahan, P. (1996). Light field rendering. Proceedings of SIGGRAPH '96, 31–42. doi:10.1145/237170.237193
15. Wilburn, B., Joshi, N., Vaish, V., Talvala, E. V., Antunez, E., Barth, A., Adams, A., Horowitz, M., & Levoy, M. (2005). High performance imaging using large camera arrays. ACM SIGGRAPH 2005 Papers, 765–776. doi:10.1145/1186822.1073256
16. Matusik, W., & Pfister, H. (2004). 3D TV: A scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes. ACM SIGGRAPH 2004 Papers, 814–824. doi:10.1145/1186562.1015805
17. Lanman, D., Hirsch, M., Kim, Y., & Raskar, R. (2010). Content-adaptive parallax barriers: Optimizing dual-layer 3D displays using low-rank light field factorization. ACM SIGGRAPH Asia 2010 Papers, Article 163. doi:10.1145/1882261.1866191
18. Lanman, D., & Luebke, D. (2013). Near-Eye Light Field Displays (Technical Report NVR-2013-004). NVIDIA Research. Retrieved from https://research.nvidia.com/sites/default/files/pubs/2013-11_Near-Eye-Light-Field/NVIDIA-NELD.pdf
19. Lang, B. (2023, January 11). CREAL's Latest Light-field AR Demo Shows Continued Progress Toward Natural Depth & Focus. Road to VR. Retrieved from https://www.roadtovr.com/creal-light-field-ar-vr-headset-prototype/
20. Maimone, A., Wetzstein, G., Hirsch, M., Lanman, D., Raskar, R., & Fuchs, H. (2013). Focus 3D: Compressive accommodation display. ACM Transactions on Graphics, 32(5), Article 152. doi:10.1145/2516971.2516983
21. Japan Display Inc. (2016, December 5). Ultra-High Resolution Display with Integrated Parallax Barrier for Glasses-Free 3D. Retrieved from https://www.j-display.com/english/news/2016/20161205.html
22. Light Field Lab. SolidLight Platform. Retrieved from https://www.lightfieldlab.com/solidlight
23. Maimone, A., Georgiou, A., & Kollin, J. S. (2017). Holographic near-eye displays for virtual and augmented reality. ACM Transactions on Graphics, 36(4), Article 85. doi:10.1145/3072959.3073624
24. Liu, S., Cheng, D., & Hua, H. (2014). An optical see-through head mounted display with addressable focal planes. 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 33–42. doi:10.1109/ISMAR.2014.6948403
25. Martínez-Corral, M., Guan, Z., Li, Y., Xiong, Z., & Javidi, B. (2021). Review of light field technologies. Visual Computing for Industry, Biomedicine, and Art, 4(1), 29. doi:10.1186/s42492-021-00096-8
26. Fink, C. (2023, February 8). Light Field Lab Raises $50 Million to Bring SolidLight Holograms Into the Real World. Forbes. Retrieved from https://www.forbes.com/sites/charliefink/2023/02/08/light-field-lab-raises-50m-to-bring-solidlight-holograms-into-the-real-world/ (accessed 30 Apr 2025).
27. Huang, F. C., Chen, K., & Wetzstein, G. (2015). The light field stereoscope: Immersive computer graphics via factored near-eye light field displays with focus cues. ACM Transactions on Graphics, 34(4), Article 60. doi:10.1145/2766943
28. Looking Glass Factory. Software Overview. Retrieved from https://lookingglassfactory.com/software
29. Light Field Lab (2021, October 7). Light Field Lab Demonstrates SolidLight™, the Highest Resolution Holographic Display Platform Ever Designed [Press release]. Retrieved from https://www.lightfieldlab.com/press/light-field-lab-demonstrates-solidlight
30. Sony Professional. Sony's Spatial Reality Display. Retrieved from https://pro.sony/ue_US/products/professional-displays/elf-sr2
31. Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., & Ng, R. (2020). NeRF: Representing scenes as neural radiance fields for view synthesis. European Conference on Computer Vision (ECCV), 405–421. doi:10.1007/978-3-030-58452-8_24
32. Nam, J., McCormick, M., & Tate, A. J. (2019). Light field display systems for medical imaging applications. Journal of Display Technology, 15(3), 215–225. doi:10.1002/jsid.785
33. Halle, M. W., & Meng, J. (2017). LightPlanets: GPU-based rendering of transparent astronomical objects using light field methods. IEEE Transactions on Visualization and Computer Graphics, 23(5), 1479–1488. doi:10.1109/TVCG.2016.2535388
34. Google (2023, May 10). A first look at Project Starline's new, simpler prototype. Google Blog. Retrieved from https://blog.google/technology/research/project-starline-prototype/
35. PR Newswire (2017, March 15). Avegant Introduces Light Field Technology for Mixed Reality. Retrieved from https://www.prnewswire.com/news-releases/avegant-introduces-light-field-technology-for-mixed-reality-300423855.html
36. Holografika. Light Field Displays. Retrieved from https://holografika.com/light-field-displays/
37. Japan Display Inc. (2019, December 3). JDI Develops World's First 10.1-inch Light Field Display. Retrieved from https://www.j-display.com/english/news/2019/20191203_01.html
38. Tan, G., Zhan, T., Lee, Y.-H., Xiong, J., & Wu, S.-T. (2019). Near-eye light-field display with polarization multiplexing. Proceedings of SPIE 10942, Advances in Display Technologies IX, 1094206. doi:10.1117/12.2509121