'''Light field display''' ('''LFD''') is an advanced display technology designed to reproduce a [[light field]], the distribution of light rays in [[3D space]], including their intensity and direction.<ref name="WetzsteinPlenoptic">Wetzstein, G. (2020). Computational Displays: Achieving the Full Plenoptic Function. ACM SIGGRAPH 2020 Courses. doi:10.1145/3386569.3409414. Available: https://dl.acm.org/doi/10.1145/3386569.3409414 (accessed 3 May 2025).</ref> Unlike conventional 2D displays or [[stereoscopic display|stereoscopic 3D]] systems that present flat images or fixed viewpoints requiring glasses, light field displays aim to recreate how light naturally propagates from a real scene.<ref name="WetzsteinTensor">Wetzstein, G., Lanman, D., Hirsch, M., & Raskar, R. (2012). Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting. ACM Transactions on Graphics, 31(4), Article 80. doi:10.1145/2185520.2185576</ref> This allows viewers to perceive genuine [[depth]], [[parallax]] (both horizontal and vertical), and perspective changes, in many implementations without special eyewear.<ref name="LeiaVerge">Hollister, S. (2024, January 19). Leia is building a 3D empire on the back of the worst phone we've ever reviewed. The Verge. Retrieved from https://www.theverge.com/24036574/leia-glasses-free-3d-ces-2024</ref>
This method of display is crucial for the future of [[virtual reality]] (VR) and [[augmented reality]] (AR) because it can resolve the [[vergence-accommodation conflict]] (VAC).<ref name="WiredVAC">Zhang, S. (2015, August 11). The Obscure Neuroscience Problem That's Plaguing VR. WIRED. Retrieved from https://www.wired.com/2015/08/obscure-neuroscience-problem-thats-plaguing-vr</ref><ref name="VACReview">Zhou, Y., Zhang, J., & Fang, F. (2021). Vergence-accommodation conflict in optical see-through display: Review and prospect. Results in Optics, 5, 100160. doi:10.1016/j.rio.2021.100160</ref> It provides correct [[focal cues]] that match [[vergence]] information, yielding a more realistic 3D image that is more visually comfortable and reducing the eye strain and [[Virtual Reality Sickness|simulator sickness]] often associated with current [[head-mounted display]]s (HMDs).<ref name="CrealWebsite">CREAL. Light-field: Seeing Virtual Worlds Naturally. Retrieved from https://creal.com/technology/</ref>
== Definition and Principles ==
* Natural [[focal cues]], mitigating the [[Vergence-accommodation conflict]].<ref name="WiredVAC"/>
* [[Specular highlights]] and realistic reflections that change with viewpoint.
* Viewing without specialized eyewear (especially in non-headset formats).
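Formally, a light field is often described with the two-plane parameterization introduced for light field rendering,<ref name="Levoy1996"/> in which each ray is indexed by its intersection points <math>(u, v)</math> and <math>(s, t)</math> with two parallel reference planes:

:<math>L = L(u, v, s, t)</math>

A display that could control the radiance of every <math>(u, v, s, t)</math> sample independently would reproduce the scene's appearance from any viewpoint in front of it; practical LFDs approximate this 4D function with a finite number of spatial and angular samples.<ref name="WetzsteinPlenoptic"/>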
== Characteristics ==
* '''Glasses-Free 3D:''' Many implementations, especially non-headset formats, require no special eyewear.
* '''Full Parallax:''' True LFDs provide both horizontal and vertical parallax, unlike earlier [[autostereoscopic display|autostereoscopic]] technologies that often limited parallax to side-to-side movement.
* '''Accommodation-Convergence Conflict Resolution:''' A primary driver for VR/AR, LFDs can render virtual objects at appropriate focal distances, aligning accommodation and vergence to significantly improve visual comfort and realism.<ref name="CrealWebsite"/><ref name="Lanman2020NearEyeCourse">Lanman, D., & Luebke, D. (2013). Near-Eye Light Field Displays. ACM Transactions on Graphics, 32(6), Article 220. doi:10.1145/2508363.2508366</ref>
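The magnitude of the conflict can be expressed in diopters (D), the reciprocal of viewing distance in meters. As an illustrative calculation (the distances are hypothetical, not drawn from any particular product): a headset whose optics fix the focal plane at 2 m demands constant accommodation of

:<math>\frac{1}{2\,\text{m}} = 0.5\,\text{D},</math>

while a virtual object rendered at 0.5 m drives vergence toward <math>1/0.5\,\text{m} = 2\,\text{D}</math>, a 1.5 D mismatch between the two cues. Because an LFD emits rays consistent with the rendered depth, the accommodation cue follows the content and the mismatch can be largely eliminated.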
== History ==
* '''1996:''' Levoy and Hanrahan publish work on Light Field Rendering.<ref name="Levoy1996">Levoy, M., & Hanrahan, P. (1996). Light field rendering. Proceedings of the 23rd annual conference on Computer graphics and interactive techniques (SIGGRAPH '96), 31-42. doi:10.1145/237170.237193</ref>
* '''2005:''' Stanford Multi-camera Array demonstrated for light field capture.<ref name="Wilburn2005">Wilburn, B., Joshi, N., Vaish, V., Talvala, E. V., Antunez, E., Barth, A., Adams, A., Horowitz, M., & Levoy, M. (2005). High performance imaging using large camera arrays. ACM SIGGRAPH 2005 Papers (SIGGRAPH '05), 765-776. doi:10.1145/1186822.1073256</ref>
* '''2004-2008:''' Early computational light field displays developed (for example at the MIT Media Lab).<ref name="Matusik2004">Matusik, W., & Pfister, H. (2004). 3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes. ACM SIGGRAPH 2004 Papers (SIGGRAPH '04), 814–824. doi:10.1145/1186562.1015805</ref>
* '''2010-2013:''' Introduction of multilayer, compressive, and tensor light field display concepts.<ref name="Lanman2010ContentAdaptive">Lanman, D., Hirsch, M., Kim, Y., & Raskar, R. (2010). Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization. ACM SIGGRAPH Asia 2010 papers (SIGGRAPH ASIA '10), Article 163. doi:10.1145/1882261.1866191</ref><ref name="WetzsteinTensor"/>
* '''2013:''' NVIDIA demonstrates near-eye light field display prototype for VR.<ref name="NvidiaNELD">Lanman, D., & Luebke, D. (2013). Near-Eye Light Field Displays (Technical Report NVR-2013-004). NVIDIA Research. Retrieved from https://research.nvidia.com/sites/default/files/pubs/2013-11_Near-Eye-Light-Field/NVIDIA-NELD.pdf</ref>
* '''2015 onwards:''' Emergence of advanced prototypes (for example CREAL, Light Field Lab, PetaRay).<ref name="LeiaVerge"/><ref name="CrealRoadToVR">Lang, B. (2023, January 11). CREAL's Latest Light-field AR Demo Shows Continued Progress Toward Natural Depth & Focus. Road to VR. Retrieved from https://www.roadtovr.com/creal-light-field-ar-vr-headset-prototype/</ref>
== Technical Implementations (How They Work) ==
* '''[[Microlens Arrays]] (MLAs):''' A high-resolution display panel (LCD or OLED) is overlaid with an array of tiny lenses. Each lenslet directs light from the pixels beneath it into a specific set of directions, creating different views for different observer positions.<ref name="Huang2014EyeglassesFree"/><ref name="Lanman2020NearEyeCourse"/> This is a common approach derived from integral imaging.<ref name="Huang2014EyeglassesFree"/> The trade-off is explicit: spatial resolution is determined by the lenslet count, angular resolution by the pixels per lenslet (see the worked example after this list).<ref name="Huang2014EyeglassesFree"/>
* '''Multilayer Displays (Stacked LCDs):''' Several layers of transparent display panels (typically [[Liquid crystal display|LCDs]]) are stacked with air gaps. By computationally optimizing the opacity patterns on each layer, the display acts as a multiplicative [[Spatial Light Modulator|spatial light modulator]], shaping light from a backlight into a complex light field.<ref name="WetzsteinTensor"/><ref name="Lanman2010ContentAdaptive"/> These are often explored for near-eye displays.<ref name="Lanman2020NearEyeCourse"/>
* '''Directional Backlighting:''' A standard display panel (for example an LCD) is combined with a specialized backlight that emits light in controlled directions. The backlight might use another LCD panel coupled with optics like lenticular sheets to achieve directionality.<ref name="Maimone2013Focus3D">Maimone, A., Wetzstein, G., Hirsch, M., Lanman, D., Raskar, R., & Fuchs, H. (2013). Focus 3D: compressive accommodation display. ACM Transactions on Graphics, 32(5), Article 152. doi:10.1145/2516971.2516983</ref>
* '''Projector Arrays:''' Multiple projectors illuminate a screen (often lenticular or diffusive). Each projector provides a different perspective view, and their combined output forms the light field.<ref name="LeiaVerge"/>
* '''[[Parallax Barrier]]s:''' An opaque layer with precisely positioned slits or apertures is placed in front of or between display panels. The barrier blocks light selectively, allowing different pixels to be seen from different angles.<ref name="JDI_Parallax"/>
* '''[[Waveguide (optics)|Waveguides]]:''' Light is routed through optical waveguides and extracted with controlled directionality, an approach used in large-format modular panels<ref name="LightFieldLabTech">Light Field Lab. *SolidLight™ Platform Overview.* https://www.lightfieldlab.com/ (accessed 3 May 2025).</ref> and in holographic near-eye optics.<ref name="Maimone2017HolographicNED">Maimone, A., Georgiou, A., & Kollin, J. S. (2017). Holographic near-eye displays for virtual and augmented reality. ACM Transactions on Graphics, 36(4), Article 85. doi:10.1145/3072959.3073624</ref> This is explored for compact AR/VR systems.
* '''Time-Multiplexed Displays:''' Different views or directional illumination patterns are presented rapidly in sequence. If cycled faster than human perception can resolve, this creates the illusion of a continuous light field; it can be combined with other techniques such as directional backlighting (the worked example after this list also estimates the required refresh rate).<ref name="Liu2014OSTHMD">Liu, S., Cheng, D., & Hua, H. (2014). An optical see-through head mounted display with addressable focal planes. 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 33-42. doi:10.1109/ISMAR.2014.6948403</ref>
* '''Holographic and Diffractive Approaches:''' While [[Holographic display|holographic displays]] reconstruct wavefronts through diffraction, some LFDs utilize holographic optical elements (HOEs) or related diffractive principles to achieve high angular resolution and potentially overcome MLA limitations.<ref name="SpringerReview2021">Martínez-Corral, M., Guan, Z., Li, Y., Xiong, Z., & Javidi, B. (2021). Review of light field technologies. Visual Computing for Industry, Biomedicine and Art, 4(1), 29. doi:10.1186/s42492-021-00096-8</ref> Some companies use "holographic" terminology for their high-density LFDs.<ref name="ForbesLightField">Fink, C. (2023, February 8). Light Field Lab Raises $50 Million to Bring SolidLight Holograms Into the Real World. Forbes. Available: https://www.forbes.com/sites/charliefink/2023/02/08/light-field-lab-raises-50m-to-bring-solidlight-holograms-into-the-real-world/ (accessed 30 Apr 2025).</ref>
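The spatio-angular trade-off and the refresh-rate demands of time multiplexing noted above can be made concrete with a short calculation. The following sketch (in Python; the panel size, lenslet pitch, and function names are hypothetical choices for illustration, not taken from any product) splits a panel's pixel budget between spatial samples and view directions, and estimates the refresh rate a time-multiplexed design would need:

<syntaxhighlight lang="python">
def mla_resolution_budget(panel_px, lenslet_px):
    """Split an MLA panel's pixel budget into spatial and angular parts.

    panel_px   -- (width, height) of the underlying panel, in pixels
    lenslet_px -- pixels covered by each (square) lenslet along one axis
    """
    spatial = (panel_px[0] // lenslet_px, panel_px[1] // lenslet_px)
    angular = lenslet_px * lenslet_px  # distinct view directions per point
    return spatial, angular


def time_multiplex_rate(num_views, flicker_free_hz=60):
    """Refresh rate a panel needs to cycle num_views views without flicker."""
    return num_views * flicker_free_hz


# Hypothetical example: a 3840x2160 panel with 8x8-pixel lenslets keeps
# only 480x270 spatial samples, but gains 64 view directions per sample.
print(mla_resolution_budget((3840, 2160), 8))  # ((480, 270), 64)

# Presenting those 64 views sequentially instead would need a very fast
# panel: 64 views x 60 Hz = 3840 Hz.
print(time_multiplex_rate(64))  # 3840
</syntaxhighlight>

The same budget can be spent differently: coarser lenslets buy more views at the cost of spatial detail, which is why hybrid designs combine several of the techniques above.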
== Types of Light Field Displays ==
* '''Near-Eye Light Field Displays:''' Integrated into VR/AR [[Head-mounted display|HMDs]]. Primarily focused on solving the VAC for comfortable, realistic close-up interactions.<ref name="CrealWebsite"/><ref name="Lanman2020NearEyeCourse"/> Examples include research prototypes from NVIDIA<ref name="NvidiaNELD"/> and academic groups,<ref name="Huang2015Stereoscope">Huang, F. C., Chen, K., & Wetzstein, G. (2015). The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues. ACM Transactions on Graphics, 34(4), Article 60. doi:10.1145/2766943</ref> and commercial modules from companies like [[CREAL]].<ref name="CrealRoadToVR"/> These often utilize MLAs, stacked LCDs, or waveguide/diffractive approaches.<ref name="Lanman2020NearEyeCourse"/><ref name="CrealRoadToVR"/>
* '''Large Format / Tiled Displays:''' Aimed at creating large-scale, immersive 3D experiences without glasses for public venues, command centers, or collaborative environments.<ref name="ForbesLightField"/><ref name="LightFieldLabSolidLightPR">Light Field Lab Press Release (2021, Oct 7). *Light Field Lab Unveils SolidLight™ – The Highest Resolution Holographic Display Platform Ever Designed.* https://www.lightfieldlab.com/press-release-oct-2021 (accessed 3 May 2025).</ref>
* '''Desktop Displays:''' Personal-scale units for individual use, such as Sony's Spatial Reality Display (ELF-SR series), which combines a high-fidelity panel with eye-tracking.<ref name="SonyELFSR2">Sony. *Spatial Reality Display ELF-SR2.* https://pro.sony/ue_US/products/spatial-reality-displays/elf-sr2 (accessed 3 May 2025).</ref>
== Comparison with Other 3D Display Technologies ==
{| class="wikitable"
|-
! Technology
! Glasses Required
! Full Parallax
! Correct Focal Cues
! Notes
! Key Trade-offs
|-
| '''Light Field Display'''
| No (often)
| Yes
| Yes
|
| Spatio-angular resolution trade-off, computation needs
|}

== Content Creation ==
Creating content compatible with LFDs requires capturing or generating directional view information:
* '''[[Light Field Camera|Light Field Cameras]] / [[Plenoptic Camera|Plenoptic Cameras]]:''' Capture both intensity and direction of incoming light using specialized sensors (often with MLAs).<ref name="WetzsteinPlenoptic"/> The captured data can be processed for LFD playback.
* '''[[Computer Graphics]] Rendering:''' Standard 3D scenes built in engines like [[Unity (game engine)|Unity]] or [[Unreal Engine]] can be rendered from multiple viewpoints to generate the necessary data (see the sketch after this list).<ref name="ForbesLightField"/> Specialized light field rendering techniques, potentially using [[Ray tracing (graphics)|ray tracing]] or neural methods like [[Neural Radiance Fields]] (NeRF), are employed.<ref name="ForbesLightField"/><ref name="Mildenhall2020NeRF">Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., & Ng, R. (2020). NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. European Conference on Computer Vision (ECCV), 405-421. doi:10.1007/978-3-030-58452-8_24</ref>
* '''[[Photogrammetry]] and 3D Scanning:''' Real-world objects/scenes captured as 3D models can serve as input for rendering light field views.
* '''[[Focal Stack]] Conversion:''' Research explores converting image stacks captured at different focal depths into light field representations, particularly for multi-layer displays.<ref name="WetzsteinTensor"/>
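As a minimal sketch of multi-viewpoint rendering for light field content (in Python with NumPy; the grid size, baseline, and function name are illustrative assumptions rather than any engine's API), the snippet below generates the camera offsets for an 8×8 grid of views on a capture plane, from which per-view images of a two-plane light field can be rendered:

<syntaxhighlight lang="python">
import numpy as np


def camera_grid(views_u=8, views_v=8, baseline=0.10):
    """Camera-center offsets (in meters) on the (u, v) capture plane.

    views_u, views_v -- number of views horizontally / vertically
    baseline         -- extent of the camera plane along each axis
    """
    us = np.linspace(-baseline / 2, baseline / 2, views_u)
    vs = np.linspace(-baseline / 2, baseline / 2, views_v)
    # Each camera is shifted laterally on the plane; rendering the scene
    # once per offset (all cameras aimed at a common target) yields the
    # view images of a two-plane (u, v, s, t) light field.
    return np.array([(u, v, 0.0) for v in vs for u in us])


offsets = camera_grid()
print(offsets.shape)  # (64, 3): one XYZ offset per rendered view
</syntaxhighlight>

Denser grids and wider baselines increase angular resolution and viewing freedom, at the cost of proportionally more rendering and storage.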
== Applications ==
* '''Medical Imaging and Visualization:''' Intuitive visualization of complex 3D scans (CT, MRI) for diagnostics, surgical planning, and education.<ref name="Nam2019Medical">Nam, J., McCormick, M., & Tate, A. J. (2019). Light field display systems for medical imaging applications. Journal of Display Technology, 15(3), 215-225. doi:10.1002/jsid.785</ref>
* '''Scientific Visualization:''' Analyzing complex datasets in fields like fluid dynamics, molecular modeling, and geology.<ref name="Halle2017SciVis">Halle, M. W., & Meng, J. (2017). LightPlanets: GPU-based rendering of transparent astronomical objects using light field methods. IEEE Transactions on Visualization and Computer Graphics, 23(5), 1479-1488. doi:10.1109/TVCG.2016.2535388</ref>
* '''Product Design and Engineering (CAD/CAE):''' Collaborative visualization and review of 3D models.<ref name="Nam2019Medical"/>
* '''Entertainment and Gaming:''' Immersive experiences in arcades, museums, theme parks, and potentially future home entertainment.<ref name="ForbesLightField"/>
* '''Telepresence and Communication:''' Creating realistic, life-sized 3D representations of remote collaborators, like Google's [[Project Starline]] concept.<ref name="Starline">Google Blog (2023, May 10). A first look at Project Starline's new, simpler prototype. Retrieved from https://blog.google/technology/research/project-starline-prototype/</ref>
* '''[[Microscopy]]:''' Viewing microscopic samples with natural depth perception.<ref name="WetzsteinPlenoptic"/>
== Challenges ==
* '''Form Factor and Miniaturization:''' Integrating complex optics and electronics into thin, lightweight, and power-efficient near-eye devices remains difficult.<ref name="Lanman2020NearEyeCourse"/><ref name="CrealRoadToVR"/>
* '''Limited Field of View (FoV):''' Achieving a wide FoV comparable to traditional VR headsets while maintaining high angular resolution is challenging.<ref name="Lanman2020NearEyeCourse"/>
* '''Content Ecosystem:''' The workflow for creating, distributing, and viewing native light field content is still developing compared to standard 2D or stereoscopic 3D, partly because no consumer light field hardware is widely available.
== Key Players and Commercial Landscape ==
Several companies and research groups are active in LFD development:
* '''[[CREAL]]:''' Swiss startup focused on compact near-eye LFD modules for AR/VR glasses aiming to solve VAC.<ref name="CrealRoadToVR"/>
* '''[[Light Field Lab]]:''' Developing large-scale, modular LFD panels (branded as SolidLight) based on [[Waveguide (optics)|waveguide]] technology.<ref name="LightFieldLabTech"/><ref name="LightFieldLabSolidLightPR"/>
* '''[[Sony]]:''' Produces the Spatial Reality Display (ELF-SR series), a high-fidelity desktop LFD using eye-tracking.<ref name="SonyELFSR2"/>
* '''[[Avegant]]:''' Develops light field light engines, particularly for AR, focusing on VAC resolution.<ref name="AvegantPR">PR Newswire (2017, March 15). Avegant Introduces Light Field Technology for Mixed Reality. Retrieved from https://www.prnewswire.com/news-releases/avegant-introduces-light-field-technology-for-mixed-reality-300423855.html</ref>
== Future Directions ==
* '''Miniaturization for Wearables:''' Developing ultra-thin, efficient components using [[Metasurface|metasurfaces]], [[Holographic optical element|holographic optical elements (HOEs)]], advanced waveguides, and [[MicroLED]] displays for integration into consumer AR/VR glasses.<ref name="CrealRoadToVR"/><ref name="SpringerReview2021"/>
* '''Improved Content Capture and Creation Tools:''' Advancements in [[Plenoptic camera|plenoptic cameras]], AI-driven view synthesis, and streamlined software workflows.<ref name="Mildenhall2020NeRF"/>
* '''Higher Resolution and Efficiency:''' Addressing the spatio-angular trade-off and improving light efficiency through new materials, optical designs (for example polarization multiplexing<ref name="Tan2019Polarization">Tan, G., Zhan, T., Lee, Y.-H., Xiong, J., & Wu, S.-T. (2019). Near-eye light-field display with polarization multiplexing. Proceedings of SPIE 10942, Advances in Display Technologies IX, 1094206. doi:10.1117/12.2509121</ref>), and display technologies.
== See Also ==