Light field display
* '''1996:''' Levoy and Hanrahan publish work on Light Field Rendering.<ref name="Levoy1996">Levoy, M., & Hanrahan, P. (1996). Light field rendering. Proceedings of the 23rd annual conference on Computer graphics and interactive techniques (SIGGRAPH '96), 31-42. doi:10.1145/237170.237193</ref>
* '''2005:''' Stanford Multi-camera Array demonstrated for light field capture.<ref name="Wilburn2005">Wilburn, B., Joshi, N., Vaish, V., Talvala, E. V., Antunez, E., Barth, A., Adams, A., Horowitz, M., & Levoy, M. (2005). High performance imaging using large camera arrays. ACM SIGGRAPH 2005 Papers (SIGGRAPH '05), 765-776. doi:10.1145/1186822.1073256</ref>
* '''2004-2008:''' Early computational light field displays developed (for example, at the MIT Media Lab).<ref name="Matusik2004">Matusik, W., & Pfister, H. (2004). 3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes. ACM SIGGRAPH 2004 Papers (SIGGRAPH '04), 814–824. doi:10.1145/1186562.1015805</ref>
* '''2010-2013:''' Introduction of multilayer, compressive, and tensor light field display concepts.<ref name="Lanman2010ContentAdaptive">Lanman, D., Hirsch, M., Kim, Y., & Raskar, R. (2010). Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization. ACM SIGGRAPH Asia 2010 papers (SIGGRAPH ASIA '10), Article 163. doi:10.1145/1882261.1866191</ref><ref name="WetzsteinTensor"/>
* '''2013:''' NVIDIA demonstrates near-eye light field display prototype for VR.<ref name="NvidiaNELD">Lanman, D., & Luebke, D. (2013). Near-Eye Light Field Displays (Technical Report NVR-2013-004). NVIDIA Research. Retrieved from https://research.nvidia.com/sites/default/files/pubs/2013-11_Near-Eye-Light-Field/NVIDIA-NELD.pdf</ref>
* '''2015 onwards:''' Emergence of commercial LFD products (for example, Looking Glass Factory and Leia Inc.) and advanced prototypes (for example, Sony, CREAL, and Light Field Lab).<ref name="LookingGlass27"/><ref name="LeiaVerge"/><ref name="CrealRoadToVR">Lang, B. (2023, January 11). CREAL's Latest Light-field AR Demo Shows Continued Progress Toward Natural Depth & Focus. Road to VR. Retrieved from https://www.roadtovr.com/creal-light-field-ar-vr-headset-prototype/</ref>


== Technical Implementations (How They Work) ==
* '''[[Microlens Arrays]] (MLAs):''' A high-resolution display panel (LCD or OLED) is overlaid with an array of tiny lenses. Each lenslet directs light from the pixels beneath it into a specific set of directions, creating different views for different observer positions.<ref name="Huang2014EyeglassesFree"/><ref name="Lanman2020NearEyeCourse"/> This is a common approach derived from integral imaging.<ref name="Huang2014EyeglassesFree"/> The trade-off is explicit: spatial resolution is determined by the lenslet count, angular resolution by the pixels per lenslet.<ref name="Huang2014EyeglassesFree"/>
* '''Multilayer Displays (Stacked LCDs):''' Several layers of transparent display panels (typically [[Liquid crystal display|LCDs]]) are stacked with air gaps. By computationally optimizing the opacity patterns on each layer, the display acts as a multiplicative [[Spatial Light Modulator|spatial light modulator]], shaping light from a backlight into a complex light field.<ref name="WetzsteinTensor"/><ref name="Lanman2010ContentAdaptive"/> These are often explored for near-eye displays.<ref name="Lanman2020NearEyeCourse"/>
* '''Directional Backlighting:''' A standard display panel (for example, an LCD) is combined with a specialized backlight that emits light in controlled directions. The backlight might use another LCD panel coupled with optics like lenticular sheets to achieve directionality.<ref name="Maimone2013Focus3D">Maimone, A., Wetzstein, G., Hirsch, M., Lanman, D., Raskar, R., & Fuchs, H. (2013). Focus 3D: compressive accommodation display. ACM Transactions on Graphics, 32(5), Article 152. doi:10.1145/2516971.2516983</ref>
* '''Projector Arrays:''' Multiple projectors illuminate a screen (often lenticular or diffusive). Each projector provides a different perspective view, and their combined output forms the light field.<ref name="LeiaVerge"/>
* '''[[Parallax Barrier]]s:''' An opaque layer with precisely positioned slits or apertures is placed in front of or between display panels. The barrier blocks light selectively, allowing different pixels to be seen from different angles.<ref name="JDI_Parallax"/>
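The explicit spatio-angular trade-off noted for microlens arrays can be sketched numerically. This is an illustrative calculation only; the panel size and lenslet pitch below are hypothetical examples, not specifications of any display cited here:

```python
def mla_resolutions(panel_px_x, panel_px_y, lenslet_pitch_px):
    """Spatial resolution is set by the lenslet count; angular resolution
    (the number of directional views) by the pixels under each lenslet."""
    spatial = (panel_px_x // lenslet_pitch_px, panel_px_y // lenslet_pitch_px)
    views = lenslet_pitch_px ** 2  # one view per pixel under each lenslet
    return spatial, views

# A hypothetical 4K panel (3840x2160) with lenslets covering 8x8 pixels:
spatial, views = mla_resolutions(3840, 2160, 8)
print(spatial, views)  # (480, 270) spatial samples, 64 directional views
```

Dividing the same 4K panel more finely in angle (say, 16x16 pixels per lenslet) would quadruple the view count per axis but cut spatial resolution to 240x135, which is the trade-off the text describes.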
* '''[[Computer Graphics]] Rendering:''' Standard 3D scenes built in engines like [[Unity (game engine)|Unity]] or [[Unreal Engine]] can be rendered from multiple viewpoints to generate the necessary data.<ref name="ForbesLightField"/><ref name="LookingGlassSoftware"/> Specialized light field rendering techniques, potentially using [[Ray tracing (graphics)|ray tracing]] or neural methods like [[Neural Radiance Fields]] (NeRF), are employed.<ref name="ForbesLightField"/><ref name="Mildenhall2020NeRF">Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., & Ng, R. (2020). NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. European Conference on Computer Vision (ECCV), 405-421. doi:10.1007/978-3-030-58452-8_24</ref>
* '''[[Photogrammetry]] and 3D Scanning:''' Real-world objects/scenes captured as 3D models can serve as input for rendering light field views.
* '''Existing 3D Content Conversion:''' Plugins and software tools (for example, those provided by Looking Glass Factory) allow conversion of existing 3D models, animations, or even stereoscopic content for LFD viewing.<ref name="LookingGlassSoftware"/>
* '''[[Focal Stack]] Conversion:''' Research explores converting image stacks captured at different focal depths into light field representations, particularly for multi-layer displays.<ref name="WetzsteinTensor"/>
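Rendering a scene "from multiple viewpoints" in practice means placing a row of virtual cameras along a baseline and converging them on the scene. A minimal sketch of such a camera rig follows; the function name, view count, and parameters are hypothetical illustrations, not the API of any engine or plugin mentioned above:

```python
import math

def view_cameras(num_views, fov_deg, focus_dist):
    """Return (x_offset, yaw_deg) for each of num_views rendering cameras
    (num_views >= 2), spread across a horizontal baseline sized so the
    views span roughly one camera field of view at the focal plane, and
    each yawed to converge on the scene center at distance focus_dist."""
    half_span = focus_dist * math.tan(math.radians(fov_deg) / 2)
    cams = []
    for i in range(num_views):
        t = i / (num_views - 1) - 0.5           # -0.5 .. +0.5 across views
        x = 2 * half_span * t                   # offset along the baseline
        yaw = math.degrees(math.atan2(-x, focus_dist))  # aim at the center
        cams.append((x, yaw))
    return cams

# Hypothetical example: 45 horizontal views, 35-degree FOV, 2 m focus.
cams = view_cameras(num_views=45, fov_deg=35.0, focus_dist=2.0)
```

Each returned pose would then drive one render pass; the resulting images are packed into whatever multi-view format the target display's software expects.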


Several companies and research groups are active in LFD development:
* '''[[Looking Glass Factory]]:''' Leader in desktop/tabletop glasses-free LFDs (Looking Glass Portrait, 27", 65") for creators and enterprises.<ref name="LookingGlass27"/><ref name="LookingGlassSoftware"/>
* '''[[Leia Inc.]]:''' Develops LFD technology for mobile devices (for example, the Lume Pad), monitors, and automotive displays, often switchable between 2D and 3D LFD modes.<ref name="LeiaVerge"/> Acquired competitor Dimenco.
* '''[[CREAL]]:''' Swiss startup focused on compact near-eye LFD modules for AR/VR glasses, aiming to solve the vergence-accommodation conflict (VAC).<ref name="CrealRoadToVR"/>
* '''[[Light Field Lab]]:''' Developing large-scale, modular "holographic" LFD panels (SolidLight™) based on proprietary [[Waveguide (optics)|waveguide]] technology.<ref name="LightFieldLabTech"/><ref name="LightFieldLabSolidLightPR"/>
* '''Miniaturization for Wearables:''' Developing ultra-thin, efficient components using [[Metasurface|metasurfaces]], [[Holographic optical element|holographic optical elements (HOEs)]], advanced waveguides, and [[MicroLED]] displays for integration into consumer AR/VR glasses.<ref name="CrealRoadToVR"/><ref name="SpringerReview2021"/>
* '''Improved Content Capture and Creation Tools:''' Advancements in [[Plenoptic camera|plenoptic cameras]], AI-driven view synthesis, and streamlined software workflows.<ref name="Mildenhall2020NeRF"/>
* '''Higher Resolution and Efficiency:''' Addressing the spatio-angular trade-off and improving light efficiency through new materials, optical designs (for example, polarization multiplexing<ref name="Tan2019Polarization">Tan, G., Zhan, T., Lee, Y.-H., Xiong, J., & Wu, S.-T. (2019). Near-eye light-field display with polarization multiplexing. Proceedings of SPIE 10942, Advances in Display Technologies IX, 1094206. doi:10.1117/12.2509121</ref>), and display technologies.


== See Also ==