'''Light field display''' ('''LFD''') is an advanced display technology designed to reproduce a [[light field]], the distribution of light rays in [[3D space]], including their intensity and direction.<ref name="WetzsteinPlenoptic">Wetzstein G. (2020). “Computational Displays: Achieving the Full Plenoptic Function.” ACM SIGGRAPH 2020 Courses. ACM Digital Library. doi:10.1145/3386569.3409414. Available: https://dl.acm.org/doi/10.1145/3386569.3409414 (accessed 3 May 2025).</ref> Unlike conventional 2D displays or [[stereoscopic display|stereoscopic 3D]] systems that present flat images or fixed viewpoints requiring glasses, light field displays aim to recreate how light naturally propagates from a real scene.<ref name="WetzsteinTensor">Wetzstein, G., Lanman, D., Hirsch, M., & Raskar, R. (2012). Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting. ACM Transactions on Graphics, 31(4), Article 80. doi:10.1145/2185520.2185576</ref> This allows viewers to perceive genuine [[depth]], [[parallax]] (both horizontal and vertical), and perspective changes without special eyewear in many implementations.<ref name="LeiaVerge">Hollister, S. (2024, January 19). Leia is building a 3D empire on the back of the worst phone we've ever reviewed. The Verge. Retrieved from https://www.theverge.com/24036574/leia-glasses-free-3d-ces-2024</ref>
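A light field is commonly parameterized by where each ray crosses two parallel planes, giving four coordinates L(u, v, s, t). The following is a minimal, self-contained sketch of that two-plane parameterization; the plane positions z_uv and z_st are arbitrary example choices, not values from any particular display:

```python
def ray_to_two_plane(origin, direction, z_uv=0.0, z_st=1.0):
    """Map a ray to the classic two-plane light field coordinates (u, v, s, t):
    (u, v) is where the ray crosses the plane z = z_uv, and (s, t) where it
    crosses z = z_st. A light field assigns a radiance value to each 4-tuple."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dz == 0:
        raise ValueError("ray parallel to the parameterization planes")
    k_uv = (z_uv - oz) / dz   # parametric distance to the first plane
    k_st = (z_st - oz) / dz   # parametric distance to the second plane
    return (ox + k_uv * dx, oy + k_uv * dy,
            ox + k_st * dx, oy + k_st * dy)

# A ray heading straight along the z-axis keeps the same (x, y) on both planes:
u, v, s, t = ray_to_two_plane((0.5, 0.25, -1.0), (0.0, 0.0, 1.0))
# (u, v) == (s, t) == (0.5, 0.25); an oblique ray yields (u, v) != (s, t)
```

A display that can control radiance independently per (u, v, s, t) sample, rather than only per pixel, is what distinguishes an LFD from a 2D panel.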
This method of display is crucial for the future of [[virtual reality]] (VR) and [[augmented reality]] (AR), because it can directly address the [[vergence-accommodation conflict]] (VAC).<ref name="WiredVAC">Zhang, S. (2015, August 11). The Obscure Neuroscience Problem That's Plaguing VR. WIRED. Retrieved from https://www.wired.com/2015/08/obscure-neuroscience-problem-thats-plaguing-vr</ref><ref name="VACReview">Y. Zhou, J. Zhang, F. Fang, “Vergence-accommodation conflict in optical see-through display: Review and prospect,” *Results in Optics*, vol. 5, p. 100160, 2021, doi:10.1016/j.rio.2021.100160.</ref> By providing correct [[focal cues]] that match the [[vergence]] information, LFDs promise more immersive, realistic, and visually comfortable experiences, reducing eye strain and [[Virtual Reality Sickness|simulator sickness]] often associated with current [[head-mounted display]]s (HMDs).<ref name="CrealWebsite">CREAL. Light-field: Seeing Virtual Worlds Naturally. Retrieved from https://creal.com/technology/</ref>
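The mismatch behind the VAC can be quantified in diopters (inverse meters): the eyes converge on the rendered depth, while the lens must focus on the display's fixed virtual image plane. A minimal illustration follows; the 2 m focal plane and 0.5 m object distance are hypothetical example values, not figures from a specific headset:

```python
def vac_mismatch_diopters(vergence_distance_m, focal_distance_m):
    """Vergence-accommodation mismatch in diopters (1 D = 1 / meter).
    On a conventional stereoscopic HMD the focal distance is fixed by the
    optics, while vergence follows the rendered object's depth."""
    return abs(1.0 / vergence_distance_m - 1.0 / focal_distance_m)

# Virtual object rendered 0.5 m away on a display focused at a fixed 2 m:
mismatch = vac_mismatch_diopters(0.5, 2.0)  # |2.0 - 0.5| = 1.5 D
# By presenting correct focal cues at each rendered depth, a light field
# display drives this mismatch toward zero.
```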
* Natural [[focal cues]], mitigating the [[Vergence-accommodation conflict]].<ref name="WiredVAC"/>
* [[Specular highlights]] and realistic reflections that change with viewpoint.
* Viewing without specialized eyewear (especially in non-headset formats).

== Characteristics ==
* '''Glasses-Free 3D:''' Many implementations present 3D imagery to the unaided eye, without glasses or headsets.
* '''Full Parallax:''' True LFDs provide both horizontal and vertical parallax, unlike earlier [[autostereoscopic display|autostereoscopic]] technologies that often limited parallax to side-to-side movement.
* '''Accommodation-Convergence Conflict Resolution:''' A primary driver for VR/AR, LFDs can render virtual objects at appropriate focal distances, aligning accommodation and vergence to significantly improve visual comfort and realism.<ref name="CrealWebsite"/><ref name="Lanman2020NearEyeCourse">Lanman D., & Luebke D. (2013). “Near‑Eye Light Field Displays.”</ref>
* '''2010-2013:''' Introduction of multilayer, compressive, and tensor light field display concepts.<ref name="Lanman2010ContentAdaptive">Lanman, D., Hirsch, M., Kim, Y., & Raskar, R. (2010). Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization. ACM SIGGRAPH Asia 2010 papers (SIGGRAPH ASIA '10), Article 163. doi:10.1145/1882261.1866191</ref><ref name="WetzsteinTensor"/>
* '''2013:''' NVIDIA demonstrates near-eye light field display prototype for VR.<ref name="NvidiaNELD">Lanman, D., & Luebke, D. (2013). Near-Eye Light Field Displays (Technical Report NVR-2013-004). NVIDIA Research. Retrieved from https://research.nvidia.com/sites/default/files/pubs/2013-11_Near-Eye-Light-Field/NVIDIA-NELD.pdf</ref>
* '''2015 onwards:''' Emergence of advanced prototypes (for example CREAL, Light Field Lab, PetaRay).<ref name="LeiaVerge"/><ref name="CrealRoadToVR">Lang, B. (2023, January 11). CREAL's Latest Light-field AR Demo Shows Continued Progress Toward Natural Depth & Focus. Road to VR. Retrieved from https://www.roadtovr.com/creal-light-field-ar-vr-headset-prototype/</ref>

== Technical Implementations (How They Work) ==
* '''[[Computer Graphics]] Rendering:''' Standard 3D scenes built in engines like [[Unity (game engine)|Unity]] or [[Unreal Engine]] can be rendered from multiple viewpoints to generate the necessary data.<ref name="ForbesLightField"/><ref name="LookingGlassSoftware"/> Specialized light field rendering techniques, potentially using [[Ray tracing (graphics)|ray tracing]] or neural methods like [[Neural Radiance Fields]] (NeRF), are employed.<ref name="ForbesLightField"/><ref name="Mildenhall2020NeRF">Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., & Ng, R. (2020). NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. European Conference on Computer Vision (ECCV), 405-421. doi:10.1007/978-3-030-58452-8_24</ref>
* '''[[Photogrammetry]] and 3D Scanning:''' Real-world objects/scenes captured as 3D models can serve as input for rendering light field views.
* '''[[Focal Stack]] Conversion:''' Research explores converting image stacks captured at different focal depths into light field representations, particularly for multi-layer displays.<ref name="WetzsteinTensor"/>
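The multi-viewpoint rendering described above typically places a grid of cameras across the display's view cone, one render per view. The sketch below computes only the camera offsets for such a grid; the grid size, view-cone angle, and focal-plane distance are hypothetical example parameters, and real light field SDKs additionally use off-axis (sheared) projection matrices so that all views share one focal plane:

```python
import math

def view_grid_offsets(cols, rows, viewcone_deg, focal_distance_m):
    """Camera-position offsets (x, y) for a cols x rows grid of light field
    views, spread evenly across a given view cone. Offset (0, 0) is the
    central view; extreme views sit at focal_distance * tan(viewcone / 2)."""
    max_off = focal_distance_m * math.tan(math.radians(viewcone_deg) / 2.0)
    offsets = []
    for row in range(rows):
        for col in range(cols):
            # Normalize grid indices to [-1, 1] on both axes
            nx = 2.0 * col / (cols - 1) - 1.0 if cols > 1 else 0.0
            ny = 2.0 * row / (rows - 1) - 1.0 if rows > 1 else 0.0
            offsets.append((nx * max_off, ny * max_off))
    return offsets

# 5 x 5 views across a 40-degree cone with the focal plane 1 m away:
grid = view_grid_offsets(5, 5, 40.0, 1.0)  # 25 (x, y) camera offsets
```

Each offset would then drive one render pass, and the resulting views are interleaved or tiled into the display's native input format.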
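The multi-layer direction above rests on factorizing a light field into a small number of stacked attenuation layers. As a toy, the rank-1 case of that multiplicative factorization can be written as alternating nonnegative least squares on a (views × pixels) matrix; published tensor displays use several layers, time multiplexing, and perceptual weighting, so every detail here is illustrative rather than the published algorithm:

```python
def factor_rank1(L, iters=25):
    """Toy multiplicative factorization: approximate a nonnegative
    (views x pixels) matrix L by an outer product a[i] * b[j], the rank-1
    analogue of splitting a light field across two attenuating layers."""
    n, m = len(L), len(L[0])
    a, b = [1.0] * n, [1.0] * m
    for _ in range(iters):
        for i in range(n):  # least-squares update of layer a, clamped >= 0
            den = sum(x * x for x in b) or 1.0
            a[i] = max(sum(L[i][j] * b[j] for j in range(m)) / den, 0.0)
        for j in range(m):  # least-squares update of layer b, clamped >= 0
            den = sum(x * x for x in a) or 1.0
            b[j] = max(sum(L[i][j] * a[i] for i in range(n)) / den, 0.0)
    return a, b

# A separable (rank-1) light field is reconstructed exactly:
L = [[ai * bj for bj in (0.2, 0.5, 1.0)] for ai in (1.0, 0.5)]
a, b = factor_rank1(L)  # a[i] * b[j] ~= L[i][j]
```

Real light fields are only approximately low-rank, which is why compressive displays trade some fidelity for a dramatic reduction in the number of physical layers needed.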
* '''Medical Imaging and Visualization:''' Intuitive visualization of complex 3D scans (CT, MRI) for diagnostics, surgical planning, and education.<ref name="Nam2019Medical">Nam, J., McCormick, M., & Tate, A. J. (2019). Light field display systems for medical imaging applications. Journal of Display Technology, 15(3), 215-225. doi:10.1002/jsid.785</ref>
* '''Scientific Visualization:''' Analyzing complex datasets in fields like fluid dynamics, molecular modeling, geology.<ref name="Halle2017SciVis">Halle, M. W., & Meng, J. (2017). LightPlanets: GPU-based rendering of transparent astronomical objects using light field methods. IEEE Transactions on Visualization and Computer Graphics, 23(5), 1479-1488. doi:10.1109/TVCG.2016.2535388</ref>
* '''Product Design and Engineering (CAD/CAE):''' Collaborative visualization and review of 3D models.<ref name="Nam2019Medical"/>
* '''Entertainment and Gaming:''' Immersive experiences in arcades, museums, theme parks, and potentially future home entertainment.<ref name="ForbesLightField"/>
* '''Telepresence and Communication:''' Creating realistic, life-sized 3D representations of remote collaborators, like Google's [[Project Starline]] concept.<ref name="Starline">Google Blog (2023, May 10). A first look at Project Starline’s new, simpler prototype. Retrieved from https://blog.google/technology/research/project-starline-prototype/</ref>
* '''[[Microscopy]]:''' Viewing microscopic samples with natural depth perception.<ref name="WetzsteinPlenoptic"/>