{{see also|Terms|Technical Terms}}
A '''light field''' (also spelled '''lightfield''') is a fundamental concept in [[optics]] and [[computer graphics]] that describes the amount of [[light]] traveling in every direction through every point in [[space]].<ref name="LevoyHanrahan1996">Levoy, M., & Hanrahan, P. (1996). Light field rendering. ''Proceedings of the 23rd annual conference on Computer graphics and interactive techniques - SIGGRAPH '96'', 31-42.</ref><ref name="Gortler1996">Gortler, S. J., Grzeszczuk, R., Szeliski, R., & Cohen, M. F. (1996). The Lumigraph. ''Proceedings of the 23rd annual conference on Computer graphics and interactive techniques - SIGGRAPH '96'', 43-54.</ref> Formally, it is a vector function that represents the [[radiance]] of light rays at any position and direction within a given volume or area. Understanding and utilizing light fields is crucial for advancing [[virtual reality]] (VR) and [[augmented reality]] (AR) technologies, as it allows visual scenes to be captured and reproduced with a high degree of realism, including effects like [[parallax]], [[reflection]]s, [[refraction]]s, and [[refocusing]] after capture. Light field techniques also aim to solve critical issues like the [[vergence-accommodation conflict]].<ref name="Ng2005">Ng, R. (2005). Digital Light Field Photography. ''Ph.D. Thesis, Stanford University''.</ref><ref name="Lanman2013">Lanman, D., & Luebke, D. (2013). Near-eye light field displays. ''ACM SIGGRAPH 2013 Talks'', 1-1.</ref>


===The Plenoptic Function===
The most complete representation is the 7D plenoptic function, P(x, y, z, θ, φ, λ, t), which describes the radiance of light at any 3D point (x, y, z), in any direction (θ, φ), for any wavelength (λ), at any time (t).<ref name="AdelsonBergen1991"/> For many applications this is overly complex and contains redundant information: in free space, radiance does not change along a straight ray (a property known as radiance invariance), so lower-dimensional representations suffice unless wavelength or time are critical.<ref name="WikiLF"/>
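To make the definition concrete, here is a minimal Python sketch of a toy plenoptic function for an empty, unoccluded scene, together with a check of the radiance-invariance property noted above. The analytic "sky" model and all names are illustrative assumptions; real light fields are measured or rendered rather than written in closed form.

<syntaxhighlight lang="python">
import numpy as np

# Toy plenoptic function for an empty scene lit by a distant "sky" whose
# radiance depends only on ray direction. Everything here is an
# illustrative assumption, not a standard API.

def plenoptic(x, y, z, theta, phi, wavelength=550e-9, t=0.0):
    """Radiance at position (x, y, z) looking along direction (theta, phi).
    With no occluders, radiance cannot depend on where along the ray we
    sample it, which is the redundancy (radiance invariance) noted above."""
    return max(np.cos(theta), 0.0)  # brighter toward theta = 0 ("up")

# Check invariance: step 2.5 units along the ray's own direction and
# confirm the sampled radiance is unchanged.
theta, phi = 0.3, 1.1
d = np.array([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)])
p0 = np.zeros(3)
p1 = p0 + 2.5 * d
assert plenoptic(*p0, theta, phi) == plenoptic(*p1, theta, phi)
</syntaxhighlight>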


===Simplified Light Fields===
* '''Interpolation and View Synthesis:''' A key advantage is generating novel viewpoints that were not explicitly captured. This involves interpolating the 4D light field data to estimate the scene's appearance from arbitrary positions and angles (see the sketch after this list).<ref name="Art1Kalantari">Kalantari, N. K., Wang, T. C., & Ramamoorthi, R. (2016). "Learning-based view synthesis for light field cameras." ACM Transactions on Graphics, 35(6), 193.</ref>
* '''[[Depth Estimation]]:''' The angular variation of light rays encodes depth information. Various algorithms can extract depth maps, valuable for effects like synthetic [[depth of field]] and for AR interactions.<ref name="Art1Tao">Tao, M. W., Hadap, S., Malik, J., & Ramamoorthi, R. (2013). "Depth from combining defocus and correspondence using light-field cameras." Proceedings of the IEEE International Conference on Computer Vision, 673-680.</ref>
* '''Compression:''' Light field datasets are massive. Efficient compression is vital for storage and transmission, especially for mobile VR/AR and streaming. Techniques often adapt existing video codecs (like VP9) or use specialized approaches.<ref name="Art1Viola">Viola, I., Rerabek, M., & Ebrahimi, T. (2017). "Comparison and evaluation of light field image coding approaches." IEEE Journal of Selected Topics in Signal Processing, 11(7), 1092-1106.</ref><ref name="Art2AugPerc">[https://augmentedperception.github.io/welcome-to-lightfields/ Augmented Perception: Welcome to Light Fields]</ref> Standards bodies like JPEG Pleno and MPEG Immersive Video are developing formats for light field data.<ref name="Art4MMCommSoc">IEEE Communications Society – Multimedia Communications Technical Committee (2017). ''MMTC Communications – Review'', Vol. 8, No. 1, February 2017. [https://mmc.committees.comsoc.org/files/2016/04/IEEE-ComSoc-MMTC-Comm-Review-Feb-2017-Issue.pdf PDF]. (Accessed 3 May 2025).</ref>
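To illustrate the interpolation step behind view synthesis, the sketch below blends the sixteen stored samples surrounding a fractional (u, v, s, t) ray coordinate of a two-plane 4D light field (quadrilinear interpolation). The array layout and function name are illustrative assumptions; practical systems, including the learning-based method cited above, are far more sophisticated.

<syntaxhighlight lang="python">
import numpy as np

def interp4d(lf, u, v, s, t):
    """Quadrilinearly interpolate a 4D light field lf (shape U x V x S x T)
    at fractional indices (u, v, s, t), clamped to the valid range."""
    coords = [u, v, s, t]
    lo = [int(np.clip(np.floor(c), 0, n - 2)) for c, n in zip(coords, lf.shape)]
    frac = [c - l for c, l in zip(coords, lo)]
    out = 0.0
    for corner in range(16):  # blend the 16 surrounding samples
        idx, w = [], 1.0
        for dim in range(4):
            bit = (corner >> dim) & 1
            idx.append(lo[dim] + bit)
            w *= frac[dim] if bit else (1.0 - frac[dim])
        out += w * lf[tuple(idx)]
    return out

# Example: a tiny 4 x 4 grid of 8 x 8 views, queried between stored rays.
lf = np.random.rand(4, 4, 8, 8)
print(interp4d(lf, 1.5, 2.25, 3.0, 4.75))
</syntaxhighlight>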


==Light Field Rendering and Display==


====Holographic Displays====
[[Holography|Holographic]] displays reconstruct the light wavefront itself using [[spatial light modulator]]s (SLMs) to control the phase or amplitude of light. These can, in theory, perfectly reproduce the light field of a scene, offering continuous focus cues.<ref name="Art1Li">Li, G., Lee, D., Jeong, Y., Cho, J., & Lee, B. (2016). "Holographic display for see-through augmented reality using mirror-lens holographic optical element." Optics Letters, 41(11), 2486-2489.</ref> Research includes using [[Holographic Optical Elements (HOEs)]] and [[metasurface]]s for compact designs, like Nvidia's Holographic Glasses prototype.<ref name="Art4NvidiaDev">Kim, J. (2024). "Developing Smaller, Lighter Extended Reality Glasses Using AI." ''NVIDIA Technical Blog'', 14 June 2024. [https://developer.nvidia.com/blog/developing-smaller-lighter-extended-reality-glasses-using-ai/ Link]. (Accessed 3 May 2025).</ref>
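A common starting point for computing what a phase-only SLM should display is iterative phase retrieval. The sketch below implements the classic Gerchberg-Saxton algorithm under a deliberately simplified far-field (single-FFT) propagation model; real holographic display pipelines, including the cited research, use more elaborate propagation models and perceptual optimization.

<syntaxhighlight lang="python">
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=50):
    """Return an SLM phase pattern whose far-field intensity approximates
    target_amplitude**2, modeling propagation as a plain 2D FFT."""
    phase = np.random.uniform(0.0, 2 * np.pi, target_amplitude.shape)
    for _ in range(iterations):
        # Propagate a unit-amplitude field carrying the current phase.
        far = np.fft.fft2(np.exp(1j * phase))
        # Keep the propagated phase, impose the target amplitude.
        far = target_amplitude * np.exp(1j * np.angle(far))
        # Propagate back and keep only the phase (phase-only SLM).
        phase = np.angle(np.fft.ifft2(far))
    return phase

# Example: a 64 x 64 target with a bright central square.
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0
slm_phase = gerchberg_saxton(target)
</syntaxhighlight>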


====Compressive/Tensor Displays====
* [[Magic Leap]]: Their [[spatial computing]] headsets incorporate light field principles to blend virtual and real content.<ref name="Art1Kress">Kress, B. C., & Chatterjee, I. (2020). "Waveguide combiners for mixed reality headsets: a nanophotonics design perspective." Nanophotonics, 9(11), 3653-3667.</ref><ref name="Art2FXGuide">[https://www.fxguide.com/fxfeatured/light-fields-the-future-of-vr-ar-mr/ fxguide: Light Fields - The Future of VR-AR-MR]</ref>
* [[Leia Inc.]]: Creates light field displays for mobile devices.<ref name="Art1Fattal">Fattal, D., Peng, Z., Tran, T., Vo, S., Fiorentino, M., Brug, J., & Beausoleil, R. G. (2013). "A multi-directional backlight for a wide-angle, glasses-free three-dimensional display." Nature, 495(7441), 348-351.</ref>
* [[CREAL]]: Swiss startup developing near-eye light field displays specifically targeting the [[vergence-accommodation conflict|VAC]] issue in AR/VR.<ref name="Art2CrealRoad">[https://www.roadtovr.com/creal-light-field-display-new-immersion-ar/ Road to VR: Hands-on: CREAL's Light-field Display Brings a New Layer of Immersion to AR]</ref><ref name="Art2CrealSite">[https://creal.com/ CREAL: Light-field Display Technology]</ref>
* [[Light Field Lab]]: Developing large-scale holographic light field displays.


* '''Enhanced Visual Fidelity and View-Dependent Effects:''' Light fields capture and reproduce complex light interactions like specular [[highlight]]s, transparency, reflections, and refractions more accurately than traditional rendering, enhancing realism.<ref name="Art1Mildenhall"/>
* '''Realistic Capture for VR/AR Content:''' Light field cameras capture real-world scenes with richer information than 360° video or basic [[photogrammetry]], preserving subtle lighting and allowing more natural exploration in VR. Systems like [[Google]]'s light field capture rigs and [[Lytro]] Immerge were developed for this.<ref name="Art2GoogleBlog"/>
* '''[[Light Field Passthrough]] for Mixed Reality:''' An emerging technique for AR/[[Mixed Reality|MR]] headsets where specialized cameras capture the light field of the real world. This allows rendering the outside view with correct depth and perspective for the user's eyes, enabling seamless blending of virtual objects with reality and minimizing reprojection errors or distortions seen in traditional video passthrough. Meta's Flamera prototype is a notable example.<ref name="Art2TeknoAsian">[https://teknoasian.com/light-field-passthrough-the-bridge-between-reality-and-virtual-worlds/ Tekno Asian: Light Field Passthrough: The Bridge Between Reality and Virtual Worlds]</ref><ref name="Art4Spectrum">[https://spectrum.ieee.org/meta-flamera IEEE Spectrum: Meta Builds AR Headset With Unrivaled Passthrough]</ref><ref name="Art4DisplayDaily">Blackwood, S. (2023). "Meta's Going to SIGGRAPH 2023 and Showing Flamera and Butterscotch VR Technologies." ''Display Daily'', 4 August 2023. [https://displaydaily.com/metas-going-to-siggraph-2023-and-showing-flamera-and-butterscotch-vr-technologies/ Link]. (Accessed 3 May 2025).</ref>
* '''[[Telepresence]] and Remote Collaboration:''' Realistic capture and display of participants using light fields can significantly enhance the sense of presence in virtual meetings and remote collaboration systems, enabling more natural eye contact and spatial interaction.<ref name="Art1Orts">Orts-Escolano, S., Rhemann, C., Fanello, S., Chang, W., Kowdle, A., Degtyarev, Y., Kim, D., Davidson, P. L., Khamis, S., Dou, M., Tankovich, V., Loop, C., Cai, Q., Chou, P. A., Mennicken, S., Valentin, J., Pradeep, V., Wang, S., Kang, S. B., Kohli, P., Lutchyn, Y., Keskin, C., & Izadi, S. (2016). "Holoportation: Virtual 3D teleportation in real-time." Proceedings of the 29th Annual Symposium on User Interface Software and Technology, 741-754.</ref>
* '''Post-Capture Refocus and DoF Control:''' While primarily a photographic benefit, this capability could be used in VR/AR for cinematic effects, accessibility features, or interactive storytelling (a minimal refocusing sketch follows this list).
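To show how refocusing falls out of the 4D representation, the sketch below implements integer-shift "shift-and-add" refocusing: each sub-aperture view is shifted in proportion to its offset from the aperture center, and the views are averaged. The array layout and the focal-plane parameter alpha are illustrative assumptions; production pipelines use sub-pixel shifts via resampling or the Fourier domain.

<syntaxhighlight lang="python">
import numpy as np

def refocus(lf, alpha):
    """Synthetic refocus of a 4D light field lf (shape U x V x S x T);
    alpha selects the synthetic focal plane."""
    U, V, S, T = lf.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the center.
            du = int(round(alpha * (u - (U - 1) / 2)))
            dv = int(round(alpha * (v - (V - 1) / 2)))
            out += np.roll(lf[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# Example: sweep the synthetic focal plane through a random light field.
lf = np.random.rand(5, 5, 32, 32)
near, far = refocus(lf, alpha=-1.0), refocus(lf, alpha=1.0)
</syntaxhighlight>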


* '''Neural Radiance Fields (NeRF) and Neural Rendering:''' These [[machine learning]] techniques are rapidly evolving, offering efficient ways to represent and render complex scenes with view-dependent effects, potentially revolutionizing light field capture and synthesis for VR/AR.<ref name="Art1Mildenhall"/>
* '''Eye-Tracked Foveated Light Fields:''' Combining [[eye tracking]] with light field rendering/display allows concentrating detail and computational resources where the user is looking ([[foveated rendering]]), making real-time performance more feasible (see the sketch after this list).<ref name="Art1Kaplanyan">Kaplanyan, A. S., Sochenov, A., Leimkühler, T., Okunev, M., Goodall, T., & Rufo, G. (2019). "DeepFovea: Neural reconstruction for foveated rendering and video compression using learned statistics of natural videos." ACM Transactions on Graphics, 38(6), 212.</ref><ref name="Art4NvidiaResearch">Sun, Q., Huang, F.-C., Kim, J., et al. (2017). "Perceptually-Guided Foveation for Light-Field Displays." ''ACM SIGGRAPH Asia 2017 Technical Papers''. [https://research.nvidia.com/publication/2017-11_perceptually-guided-foveation-light-field-displays NVIDIA Research project page]. (Accessed 3 May 2025).</ref>
* '''Compact Light Field Optics:''' Development of [[metalenses]], [[diffractive optics]], novel [[waveguide]] designs, and HOEs aims to create thinner, lighter, and more efficient optics for near-eye light field displays suitable for glasses-like AR/VR devices.<ref name="Art1WangOptics">Wang, N., Hua, H., & Viegas, D. (2021). "Compact optical see-through head-mounted display with varifocal liquid membrane lens." Digital Holography and Three-Dimensional Imaging 2021, OSA Technical Digest, DM3B.3.</ref><ref name="Art4NvidiaDev"/>
* '''Light Field Video Streaming:''' Advances in compression and network bandwidth may enable real-time streaming of light field video for immersive communication, entertainment, and training.<ref name="Art1Gutierrez">Gutiérrez-Navarro, D., & Pérez-Daniel, K. R. (2022). "Light field video streaming: A review." IEEE Access, 10, 12345-12367.</ref>
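As a sketch of how eye tracking can steer a light field renderer's budget, the function below allocates per-tile angular samples by eccentricity from the tracked gaze point. The linear falloff and the specific budgets are illustrative assumptions, not parameters from the cited papers.

<syntaxhighlight lang="python">
import numpy as np

def angular_samples(tile_center, gaze, max_samples=64, min_samples=4,
                    fovea_deg=5.0, falloff_deg=30.0):
    """Angular sample budget for one screen tile, with tile_center and
    gaze given in degrees of visual angle."""
    ecc = np.hypot(tile_center[0] - gaze[0], tile_center[1] - gaze[1])
    if ecc <= fovea_deg:
        return max_samples  # full angular resolution in the fovea
    # Linear falloff from the edge of the fovea out to the periphery.
    f = max(0.0, 1.0 - (ecc - fovea_deg) / falloff_deg)
    return max(min_samples, int(round(max_samples * f)))

# Example: the budget drops as tiles get farther from the gaze point.
for x_deg in (0, 10, 25, 50):
    print(x_deg, angular_samples((x_deg, 0.0), (0.0, 0.0)))
</syntaxhighlight>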