{{see also|Terms|Technical Terms}}
'''Light Field Display''' ('''LFD''') is an advanced visualization technology designed to reproduce a [[light field]], the distribution of light rays in [[3D space]], including their intensity and direction.<ref name="WetzsteinPlenoptic">Wetzstein, G. (2020). Computational Displays: Achieving the Full Plenoptic Function. ACM SIGGRAPH 2020 Courses. doi:10.1145/3386569.3409414. Available: https://dl.acm.org/doi/10.1145/3386569.3409414 (accessed 3 May 2025).</ref> Unlike conventional 2D displays or [[stereoscopic display|stereoscopic 3D]] systems that present flat images or fixed viewpoints requiring glasses, light field displays aim to recreate how light naturally propagates from a real scene.<ref name="WetzsteinTensor">Wetzstein, G., Lanman, D., Hirsch, M., & Raskar, R. (2012). Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting. ACM Transactions on Graphics, 31(4), Article 80. doi:10.1145/2185520.2185576</ref> In many implementations, this allows viewers to perceive genuine [[depth]], [[parallax]] (both horizontal and vertical), and perspective changes without special eyewear.<ref name="LookingGlass27">Looking Glass Factory. Looking Glass 27″ Light Field Display. Retrieved from https://lookingglassfactory.com/looking-glass-27</ref><ref name="LeiaVerge">Hollister, S. (2024, January 19). Leia is building a 3D empire on the back of the worst phone we've ever reviewed. The Verge. Retrieved from https://www.theverge.com/24036574/leia-glasses-free-3d-ces-2024</ref>
This technology is considered crucial for the future of [[Virtual Reality]] (VR) and [[Augmented Reality]] (AR) because it can directly address the [[Vergence-accommodation conflict]] (VAC).<ref name="WiredVAC">Zhang, S. (2015, August 11). The Obscure Neuroscience Problem That's Plaguing VR. WIRED. Retrieved from https://www.wired.com/2015/08/obscure-neuroscience-problem-thats-plaguing-vr</ref><ref name="VACReview">Zhou, Y., Zhang, J., & Fang, F. (2021). Vergence-accommodation conflict in optical see-through display: Review and prospect. Results in Optics, 5, 100160. doi:10.1016/j.rio.2021.100160</ref> By providing correct [[focal cues]] that match the [[vergence]] information, LFDs promise more immersive, realistic, and visually comfortable experiences, reducing the eye strain and [[Virtual Reality Sickness|simulator sickness]] often associated with current HMDs.<ref name="CrealWebsite">CREAL. Light-field: Seeing Virtual Worlds Naturally. Retrieved from https://creal.com/technology/</ref>
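The conflict arises because the eyes' convergence angle is set by a virtual object's rendered distance while the accommodation (focus) distance stays fixed at the headset's optical plane. A sketch of the mismatch, with interpupillary distance and the two distances chosen as illustrative values:

```python
import math

def vergence_angle_deg(ipd_m, distance_m):
    """Angle between the two eyes' lines of sight when fixating a target
    at distance_m, given interpupillary distance ipd_m."""
    return math.degrees(2.0 * math.atan2(ipd_m / 2.0, distance_m))

# An object rendered at 0.5 m vs. a conventional HMD focal plane at 2 m:
# the eyes converge for 0.5 m while accommodation is pulled toward 2 m.
print(round(vergence_angle_deg(0.063, 0.5), 2))  # ~7.21 degrees of convergence
print(round(vergence_angle_deg(0.063, 2.0), 2))  # ~1.8 degrees at the focal plane
```

An LFD removes this mismatch by emitting rays whose focus cue actually corresponds to the 0.5 m rendered distance.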
* '''Glasses-Free 3D:''' Many LFD formats (especially desktop and larger) offer autostereoscopic viewing for multiple users simultaneously, each seeing the correct perspective.<ref name="LookingGlass27"/><ref name="LeiaVerge"/>
* '''Full Parallax:''' True LFDs provide both horizontal and vertical parallax, unlike earlier [[autostereoscopic display|autostereoscopic]] technologies that often limited parallax to side-to-side movement.<ref name="LeiaVerge"/>
* '''Accommodation-Convergence Conflict Resolution:''' A primary driver for VR/AR, LFDs can render virtual objects at appropriate focal distances, aligning accommodation and vergence to significantly improve visual comfort and realism.<ref name="CrealWebsite"/><ref name="Lanman2020NearEyeCourse">Lanman, D., & Luebke, D. (2013). Near-Eye Light Field Displays. ACM Transactions on Graphics, 32(6), Article 220. doi:10.1145/2508363.2508366. Project page: https://research.nvidia.com/publication/near-eye-light-field-displays (accessed 3 May 2025).</ref>
* '''Computational Requirements:''' Generating and processing the massive amount of data (multiple views or directional light information) needed for LFDs requires significant [[Graphics processing unit|GPU]] power and bandwidth.<ref name="LeiaVerge"/><ref name="Huang2014EyeglassesFree"/>
* '''Resolution Trade-offs:''' A fundamental challenge involves balancing spatial resolution (image sharpness), angular resolution (smoothness of parallax/number of views), [[Field of view|field of view (FoV)]], and depth of field.<ref name="Huang2014EyeglassesFree"/><ref name="Lanman2020NearEyeCourse"/> This is often referred to as the spatio-angular resolution trade-off.
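The spatio-angular trade-off can be made concrete with a back-of-the-envelope calculation; the panel resolution and view counts below are hypothetical, not figures from any cited product:

```python
def per_view_resolution(panel_w, panel_h, views_x, views_y):
    """Divide a panel's native pixels across a grid of directional views:
    every additional view directly reduces the spatial resolution
    available to each individual view."""
    return panel_w // views_x, panel_h // views_y

# An 8K-class panel (7680 x 4320) driving a 9 x 5 grid of views leaves
# each view with under a thousand pixels of horizontal detail.
print(per_view_resolution(7680, 4320, 9, 5))  # (853, 864)
```

This is why LFDs need far higher raw pixel counts (or time multiplexing) than 2D displays of comparable perceived sharpness.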
* '''1996:''' Levoy and Hanrahan publish work on Light Field Rendering.<ref name="Levoy1996">Levoy, M., & Hanrahan, P. (1996). Light field rendering. Proceedings of the 23rd annual conference on Computer graphics and interactive techniques (SIGGRAPH '96), 31-42. doi:10.1145/237170.237193</ref>
* '''2005:''' Stanford Multi-camera Array demonstrated for light field capture.<ref name="Wilburn2005">Wilburn, B., Joshi, N., Vaish, V., Talvala, E. V., Antunez, E., Barth, A., Adams, A., Horowitz, M., & Levoy, M. (2005). High performance imaging using large camera arrays. ACM SIGGRAPH 2005 Papers (SIGGRAPH '05), 765-776. doi:10.1145/1186822.1073256</ref>
* '''2004-2008:''' Early computational light field displays developed (for example, at the MIT Media Lab).<ref name="Matusik2004">Matusik, W., & Pfister, H. (2004). 3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes. ACM SIGGRAPH 2004 Papers (SIGGRAPH '04), 814–824. doi:10.1145/1186562.1015805</ref>
* '''2010-2013:''' Introduction of multilayer, compressive, and tensor light field display concepts.<ref name="Lanman2010ContentAdaptive">Lanman, D., Hirsch, M., Kim, Y., & Raskar, R. (2010). Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization. ACM SIGGRAPH Asia 2010 papers (SIGGRAPH ASIA '10), Article 163. doi:10.1145/1882261.1866191</ref><ref name="WetzsteinTensor"/>
* '''2013:''' NVIDIA demonstrates near-eye light field display prototype for VR.<ref name="NvidiaNELD">Lanman, D., & Luebke, D. (2013). Near-Eye Light Field Displays (Technical Report NVR-2013-004). NVIDIA Research. Retrieved from https://research.nvidia.com/sites/default/files/pubs/2013-11_Near-Eye-Light-Field/NVIDIA-NELD.pdf</ref>
* '''2015 onwards:''' Emergence of commercial LFD products (for example, Looking Glass Factory and Leia Inc.) and advanced prototypes (for example, Sony, CREAL, and Light Field Lab).<ref name="LookingGlass27"/><ref name="LeiaVerge"/><ref name="CrealRoadToVR">Lang, B. (2023, January 11). CREAL's Latest Light-field AR Demo Shows Continued Progress Toward Natural Depth & Focus. Road to VR. Retrieved from https://www.roadtovr.com/creal-light-field-ar-vr-headset-prototype/</ref>
== Technical Implementations (How They Work) ==
* '''[[Microlens Arrays]] (MLAs):''' A high-resolution display panel (LCD or OLED) is overlaid with an array of tiny lenses. Each lenslet directs light from the pixels beneath it into a specific set of directions, creating different views for different observer positions.<ref name="Huang2014EyeglassesFree"/><ref name="Lanman2020NearEyeCourse"/> This is a common approach derived from integral imaging.<ref name="Huang2014EyeglassesFree"/> The trade-off is explicit: spatial resolution is determined by the lenslet count, angular resolution by the pixels per lenslet.<ref name="Huang2014EyeglassesFree"/>
* '''Multilayer Displays (Stacked LCDs):''' Several layers of transparent display panels (typically [[Liquid crystal display|LCDs]]) are stacked with air gaps. By computationally optimizing the opacity patterns on each layer, the display acts as a multiplicative [[Spatial Light Modulator|spatial light modulator]], shaping light from a backlight into a complex light field.<ref name="WetzsteinTensor"/><ref name="Lanman2010ContentAdaptive"/> These are often explored for near-eye displays.<ref name="Lanman2020NearEyeCourse"/>
* '''Directional Backlighting:''' A standard display panel (for example, an LCD) is combined with a specialized backlight that emits light in controlled directions. The backlight might use another LCD panel coupled with optics like lenticular sheets to achieve directionality.<ref name="Maimone2013Focus3D">Maimone, A., Wetzstein, G., Hirsch, M., Lanman, D., Raskar, R., & Fuchs, H. (2013). Focus 3D: compressive accommodation display. ACM Transactions on Graphics, 32(5), Article 152. doi:10.1145/2516971.2516983</ref>
* '''Projector Arrays:''' Multiple projectors illuminate a screen (often lenticular or diffusive). Each projector provides a different perspective view, and their combined output forms the light field.<ref name="LeiaVerge"/>
* '''[[Parallax Barrier]]s:''' An opaque layer with precisely positioned slits or apertures is placed in front of or between display panels. The barrier blocks light selectively, allowing different pixels to be seen from different angles.<ref name="JDI_Parallax">Japan Display Inc. (2016, December 5). Ultra-High Resolution Display with Integrated Parallax Barrier for Glasses-Free 3D [Press release]. Archived copy: https://web.archive.org/web/20161221045330/https://www.j-display.com/english/news/2016/20161205.html (accessed 3 May 2025).</ref> Parallax barriers are often less light-efficient than MLAs.
* '''[[Waveguide]] Optics:''' Light is injected into thin optical waveguides (similar to those in some AR glasses) and then coupled out at specific points with controlled directionality, often using diffractive optical elements (DOEs) or gratings.<ref name="LightFieldLabTech">Light Field Lab. SolidLight™ Platform Overview. https://www.lightfieldlab.com/ (accessed 3 May 2025).</ref><ref name="Maimone2017HolographicNED">Maimone, A., Georgiou, A., & Kollin, J. S. (2017). Holographic near-eye displays for virtual and augmented reality. ACM Transactions on Graphics, 36(4), Article 85. doi:10.1145/3072959.3073624</ref> This approach is explored for compact AR/VR systems.
* '''Time-Multiplexed Displays:''' Different views or directional illumination patterns are presented rapidly in sequence. If cycled faster than human perception, this creates the illusion of a continuous light field. Can be combined with other techniques like directional backlighting.<ref name="Liu2014OSTHMD">Liu, S., Cheng, D., & Hua, H. (2014). An optical see-through head mounted display with addressable focal planes. 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 33-42. doi:10.1109/ISMAR.2014.6948403</ref>
* '''Holographic and Diffractive Approaches:''' While [[Holographic display|holographic displays]] reconstruct wavefronts through diffraction, some LFDs utilize holographic optical elements (HOEs) or related diffractive principles to achieve high angular resolution and potentially overcome MLA limitations.<ref name="SpringerReview2021">Martínez-Corral, M., Guan, Z., Li, Y., Xiong, Z., & Javidi, B. (2021). Review of light field technologies. Visual Computing for Industry, Biomedicine and Art, 4(1), 29. doi:10.1186/s42492-021-00096-8</ref> Some companies use "holographic" terminology for their high-density LFDs.<ref name="ForbesLightField">Fink, C. (2023, February 8). Light Field Lab Raises $50 Million to Bring SolidLight Holograms Into the Real World. Forbes. Available: https://www.forbes.com/sites/charliefink/2023/02/08/light-field-lab-raises-50m-to-bring-solidlight-holograms-into-the-real-world/ (accessed 30 Apr 2025).</ref>
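For the microlens-array approach above, the direction a pixel emits into follows from simple paraxial geometry: the pixel's offset from its lenslet's optical center, divided by the lenslet focal length, sets the exit angle. A sketch with hypothetical pixel pitch and focal length:

```python
import math

def mla_ray_angle_deg(pixel_offset_mm, lenslet_focal_mm):
    """Paraxial exit angle (degrees) of the ray leaving a lenslet for a
    pixel displaced pixel_offset_mm from the lenslet's optical axis."""
    return math.degrees(math.atan2(pixel_offset_mm, lenslet_focal_mm))

# 10 pixels of 0.05 mm pitch under a 3 mm focal-length lenslet:
offset = 10 * 0.05
print(round(mla_ray_angle_deg(offset, 3.0), 2))  # ~9.46 degrees off-axis
```

The same relation makes the trade-off visible: more pixels per lenslet buys finer angular steps, while more lenslets per panel buys spatial resolution.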
* '''Near-Eye Light Field Displays:''' Integrated into VR/AR [[Head-mounted display|HMDs]]. Primarily focused on solving the VAC for comfortable, realistic close-up interactions.<ref name="CrealWebsite"/><ref name="Lanman2020NearEyeCourse"/> Examples include research prototypes from NVIDIA<ref name="NvidiaNELD"/> and academic groups,<ref name="Huang2015Stereoscope">Huang, F. C., Chen, K., & Wetzstein, G. (2015). The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues. ACM Transactions on Graphics, 34(4), Article 60. doi:10.1145/2766943</ref> and commercial modules from companies like [[CREAL]].<ref name="CrealRoadToVR"/> Often utilize MLAs, stacked LCDs, or waveguide/diffractive approaches.<ref name="Lanman2020NearEyeCourse"/><ref name="CrealRoadToVR"/>
* '''Tabletop/Desktop Displays:''' Provide glasses-free 3D for individual or small group viewing. Used for professional visualization, gaming, communication, and content creation.<ref name="LookingGlass27"/><ref name="LeiaVerge"/> [[Looking Glass Factory]] is a key player here, offering various sizes like the Looking Glass Portrait and the larger Looking Glass 27".<ref name="LookingGlass27"/><ref name="LookingGlassSoftware">Looking Glass Factory. Software Overview. Retrieved from https://lookingglassfactory.com/software</ref> [[Leia Inc.]] also targets this market with monitor and mobile displays.<ref name="LeiaVerge"/> Typically use MLA or barrier technology.
* '''Large Format / Tiled Displays:''' Aimed at creating large-scale, immersive "holographic" experiences without glasses for public venues, command centers, or collaborative environments.<ref name="ForbesLightField"/><ref name="LightFieldLabSolidLightPR">Light Field Lab (2021, October 7). Light Field Lab Unveils SolidLight™ – The Highest Resolution Holographic Display Platform Ever Designed [Press release]. https://www.lightfieldlab.com/press-release-oct-2021 (accessed 3 May 2025).</ref> [[Light Field Lab]]'s SolidLight™ platform uses modular panels designed to be tiled into large video walls.<ref name="ForbesLightField"/><ref name="LightFieldLabSolidLightPR"/> Sony's ELF-SR series (Spatial Reality Display) uses high-speed vision sensors and a micro-optical lens for a single user but demonstrates high-fidelity desktop light field effects.<ref name="SonyELFSR2">Sony Professional. ELF-SR2 Spatial Reality Display. https://pro.sony/ue_US/products/spatial-reality-displays/elf-sr2 (accessed 3 May 2025).</ref>
== Comparison with Other 3D Display Technologies ==
{| class="wikitable"
* '''[[Computer Graphics]] Rendering:''' Standard 3D scenes built in engines like [[Unity (game engine)|Unity]] or [[Unreal Engine]] can be rendered from multiple viewpoints to generate the necessary data.<ref name="ForbesLightField"/><ref name="LookingGlassSoftware"/> Specialized light field rendering techniques, potentially using [[Ray tracing (graphics)|ray tracing]] or neural methods like [[Neural Radiance Fields]] (NeRF), are employed.<ref name="ForbesLightField"/><ref name="Mildenhall2020NeRF">Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., & Ng, R. (2020). NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. European Conference on Computer Vision (ECCV), 405-421. doi:10.1007/978-3-030-58452-8_24</ref>
* '''[[Photogrammetry]] and 3D Scanning:''' Real-world objects/scenes captured as 3D models can serve as input for rendering light field views.
* '''Existing 3D Content Conversion:''' Plugins and software tools (for example, those provided by Looking Glass Factory) allow conversion of existing 3D models, animations, or even stereoscopic content for LFD viewing.<ref name="LookingGlassSoftware"/>
* '''[[Focal Stack]] Conversion:''' Research explores converting image stacks captured at different focal depths into light field representations, particularly for multi-layer displays.<ref name="WetzsteinTensor"/>
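Rendering for an LFD typically means drawing the same scene from many slightly offset viewpoints, one per directional view. A minimal sketch of such a camera rig: evenly spaced horizontal positions spanning a baseline, centered on the original camera (view count and baseline are arbitrary illustrative values, not a specific product's workflow):

```python
def view_camera_offsets(num_views, baseline_m):
    """Evenly spaced horizontal camera offsets spanning baseline_m,
    centered on the original camera; each offset renders one view."""
    if num_views == 1:
        return [0.0]
    step = baseline_m / (num_views - 1)
    return [round(-baseline_m / 2.0 + i * step, 6) for i in range(num_views)]

# Five views across an 8 cm baseline:
print(view_camera_offsets(5, 0.08))  # [-0.04, -0.02, 0.0, 0.02, 0.04]
```

In practice each camera also uses an off-axis (sheared) projection so all views converge on the same focal plane, but the placement logic is the same.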
Several companies and research groups are active in LFD development:
* '''[[Looking Glass Factory]]:''' Leader in desktop/tabletop glasses-free LFDs (Looking Glass Portrait, 27", 65") for creators and enterprises.<ref name="LookingGlass27"/><ref name="LookingGlassSoftware"/>
* '''[[Leia Inc.]]:''' Develops LFD technology for mobile devices (for example, the Lume Pad), monitors, and automotive displays, often switchable between 2D and 3D LFD modes.<ref name="LeiaVerge"/> Acquired competitor Dimenco.
* '''[[CREAL]]:''' Swiss startup focused on compact near-eye LFD modules for AR/VR glasses aiming to solve VAC.<ref name="CrealRoadToVR"/>
* '''[[Light Field Lab]]:''' Developing large-scale, modular "holographic" LFD panels (SolidLight™) based on proprietary [[Waveguide (optics)|waveguide]] technology.<ref name="LightFieldLabTech"/><ref name="LightFieldLabSolidLightPR"/>
* '''Miniaturization for Wearables:''' Developing ultra-thin, efficient components using [[Metasurface|metasurfaces]], [[Holographic optical element|holographic optical elements (HOEs)]], advanced waveguides, and [[MicroLED]] displays for integration into consumer AR/VR glasses.<ref name="CrealRoadToVR"/><ref name="SpringerReview2021"/>
* '''Improved Content Capture and Creation Tools:''' Advancements in [[Plenoptic camera|plenoptic cameras]], AI-driven view synthesis, and streamlined software workflows.<ref name="Mildenhall2020NeRF"/>
* '''Higher Resolution and Efficiency:''' Addressing the spatio-angular trade-off and improving light efficiency through new materials, optical designs (for example, polarization multiplexing<ref name="Tan2019Polarization">Tan, G., Zhan, T., Lee, Y.-H., Xiong, J., & Wu, S.-T. (2019). Near-eye light-field display with polarization multiplexing. Proceedings of SPIE 10942, Advances in Display Technologies IX, 1094206. doi:10.1117/12.2509121</ref>), and display technologies.
== See Also ==