Light field

* '''Interpolation and View Synthesis:''' A key advantage is generating novel viewpoints not explicitly captured. This involves interpolating the 4D light field data to estimate the scene's appearance from arbitrary positions and angles (a minimal interpolation sketch follows this list).<ref name="Art1Kalantari">Kalantari, N. K., Wang, T. C., & Ramamoorthi, R. (2016). "Learning-based view synthesis for light field cameras." ACM Transactions on Graphics, 35(6), 193.</ref>
* '''[[Depth Estimation]]:''' The angular variation of light rays encodes depth information. Various algorithms can extract depth maps, valuable for effects like synthetic [[depth of field]] and for AR interactions (see the depth sketch after this list).<ref name="Art1Tao">Tao, M. W., Hadap, S., Malik, J., & Ramamoorthi, R. (2013). "Depth from combining defocus and correspondence using light-field cameras." Proceedings of the IEEE International Conference on Computer Vision, 673-680.</ref>
* '''Compression:''' Light field datasets are massive. Efficient compression is vital for storage and transmission, especially for mobile VR/AR and streaming. Techniques often adapt existing video codecs (like VP9) or use specialized approaches.<ref name="Art1Viola">Viola, I., Rerabek, M., & Ebrahimi, T. (2017). "Comparison and evaluation of light field image coding approaches." IEEE Journal of Selected Topics in Signal Processing, 11(7), 1092-1106.</ref><ref name="Art2AugPerc">[https://augmentedperception.github.io/welcome-to-lightfields/ Augmented Perception: Welcome to Light Fields]</ref> Standards bodies like JPEG Pleno and MPEG Immersive Video are developing formats for light field data.<ref name="Art4MMCommSoc">IEEE Communications Society, Multimedia Communications Technical Committee (2017). ''MMTC Communications – Review'', Vol. 8, No. 1, February 2017. https://mmc.committees.comsoc.org/files/2016/04/IEEE-ComSoc-MMTC-Comm-Review-Feb-2017-Issue.pdf (Accessed 3 May 2025).</ref>
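
A minimal sketch of the interpolation idea, assuming the light field is stored as a NumPy array of views indexed by angular position (the array layout and function name are illustrative, not from any particular library):

<syntaxhighlight lang="python">
import numpy as np

def synthesize_view(lf: np.ndarray, u: float, v: float) -> np.ndarray:
    """Bilinearly blend the four captured views nearest to the fractional
    angular coordinates (u, v) of a 4D light field lf[U, V, H, W]
    (or lf[U, V, H, W, 3] for colour) captured on a regular camera grid."""
    U, V = lf.shape[:2]
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    u1, v1 = min(u0 + 1, U - 1), min(v0 + 1, V - 1)
    fu, fv = u - u0, v - v0
    # Interpolate along v at the two bracketing u rows, then along u.
    top = (1 - fv) * lf[u0, v0] + fv * lf[u0, v1]
    bot = (1 - fv) * lf[u1, v0] + fv * lf[u1, v1]
    return (1 - fu) * top + fu * bot
</syntaxhighlight>

Naive angular blending like this ghosts at large baselines; learning-based methods such as Kalantari et al.'s target exactly that failure mode.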
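
And a correspondingly simplified depth sketch using the correspondence cue: shear the views over candidate disparities and pick, per pixel, the disparity at which the views agree best. Integer shifts via np.roll keep the example short; all names are illustrative:

<syntaxhighlight lang="python">
import numpy as np

def depth_from_correspondence(lf: np.ndarray, disparities) -> np.ndarray:
    """lf[U, V, H, W]: grayscale 4D light field. Returns a per-pixel
    disparity map: the shear under which variance across views is lowest."""
    U, V, H, W = lf.shape
    cu, cv = U // 2, V // 2
    best_cost = np.full((H, W), np.inf)
    best_disp = np.zeros((H, W))
    for d in disparities:
        # Shear every view toward the centre view by d times its angular offset.
        stack = [
            np.roll(lf[u, v],
                    (int(round(d * (u - cu))), int(round(d * (v - cv)))),
                    axis=(0, 1))
            for u in range(U) for v in range(V)
        ]
        cost = np.var(np.stack(stack), axis=0)  # low variance = good match
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_disp[better] = d
    return best_disp
</syntaxhighlight>

Tao et al. combine this correspondence cue with a defocus cue for robustness; the version above keeps only the correspondence half.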


==Light Field Rendering and Display==


====Holographic Displays====
[[Holography|Holographic]] displays reconstruct the light wavefront itself using [[spatial light modulator]]s (SLMs) to control the phase or amplitude of light. These can, in theory, perfectly reproduce the light field of a scene, offering continuous focus cues (a toy phase-retrieval sketch follows below).<ref name="Art1Li">Li, G., Lee, D., Jeong, Y., Cho, J., & Lee, B. (2016). "Holographic display for see-through augmented reality using mirror-lens holographic optical element." Optics Letters, 41(11), 2486-2489.</ref> Research includes using [[Holographic Optical Elements (HOEs)]] and [[metasurface]]s for compact designs, like Nvidia's Holographic Glasses prototype.<ref name="Art4NvidiaDev">Kim, J. (2024). "Developing Smaller, Lighter Extended Reality Glasses Using AI." ''NVIDIA Technical Blog'', 14 June 2024. https://developer.nvidia.com/blog/developing-smaller-lighter-extended-reality-glasses-using-ai/ (Accessed 3 May 2025).</ref>
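
To make the SLM idea concrete, here is a minimal sketch of classic Gerchberg–Saxton phase retrieval, which computes a phase-only SLM pattern whose far-field intensity approximates a target image. This is a standard textbook algorithm, shown purely as an illustration of phase-only wavefront control, not as the method of any cited system:

<syntaxhighlight lang="python">
import numpy as np

def gerchberg_saxton(target_amp: np.ndarray, iterations: int = 50) -> np.ndarray:
    """Iterate between the SLM plane (phase-only constraint) and the
    image plane (target amplitude constraint), linked by a Fourier transform."""
    rng = np.random.default_rng(0)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)
    for _ in range(iterations):
        far = np.fft.fft2(np.exp(1j * phase))          # propagate SLM field to image plane
        far = target_amp * np.exp(1j * np.angle(far))  # keep phase, impose target amplitude
        near = np.fft.ifft2(far)                       # propagate back to the SLM plane
        phase = np.angle(near)                         # keep phase only (SLM constraint)
    return phase
</syntaxhighlight>

Practical holographic near-eye displays layer per-wavelength propagation models, speckle control, and hardware calibration on top of this basic loop.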


====Compressive/Tensor Displays====
* '''Enhanced Visual Fidelity and View-Dependent Effects:''' Light fields capture and reproduce complex light interactions like specular [[highlight]]s, transparency, reflections, and refractions more accurately than traditional rendering, enhancing realism.<ref name="Art1Mildenhall"/>
* '''Realistic Capture for VR/AR Content:''' Light field cameras capture real-world scenes with richer information than 360° video or basic [[photogrammetry]], preserving subtle lighting and allowing more natural exploration in VR. Systems like [[Google]]'s light field capture rigs and [[Lytro]] Immerge were developed for this.<ref name="Art2GoogleBlog"/>
* '''[[Light Field Passthrough]] for Mixed Reality:''' An emerging technique for AR/[[Mixed Reality|MR]] headsets where specialized cameras capture the light field of the real world. This allows rendering the outside view with correct depth and perspective for the user's eyes, enabling seamless blending of virtual objects with reality and minimizing reprojection errors or distortions seen in traditional video passthrough. Meta's Flamera prototype is a notable example.<ref name="Art2TeknoAsian">[https://teknoasian.com/light-field-passthrough-the-bridge-between-reality-and-virtual-worlds/ Tekno Asian: Light Field Passthrough: The Bridge Between Reality and Virtual Worlds]</ref><ref name="Art4Spectrum">[https://spectrum.ieee.org/meta-flamera IEEE Spectrum: Meta Builds AR Headset With Unrivaled Passthrough]</ref><ref name="Art4DisplayDaily">Blackwood, S. (2023). "Meta's Going to SIGGRAPH 2023 and Showing Flamera and Butterscotch VR Technologies." ''Display Daily'', 4 August 2023. https://displaydaily.com/metas-going-to-siggraph-2023-and-showing-flamera-and-butterscotch-vr-technologies/ (Accessed 3 May 2025).</ref>
* '''[[Telepresence]] and Remote Collaboration:''' Realistic capture and display of participants using light fields can significantly enhance the sense of presence in virtual meetings and remote collaboration systems, enabling more natural eye contact and spatial interaction.<ref name="Art1Orts">Orts-Escolano, S., Rhemann, C., Fanello, S., Chang, W., Kowdle, A., Degtyarev, Y., Kim, D., Davidson, P. L., Khamis, S., Dou, M., Tankovich, V., Loop, C., Cai, Q., Chou, P. A., Mennicken, S., Valentin, J., Pradeep, V., Wang, S., Kang, S. B., Kohli, P., Lutchyn, Y., Keskin, C., & Izadi, S. (2016). "Holoportation: Virtual 3D teleportation in real-time." Proceedings of the 29th Annual Symposium on User Interface Software and Technology, 741-754.</ref>
* '''Post-Capture Refocus and DoF Control:''' While primarily a photographic benefit, this capability could be used in VR/AR for cinematic effects, accessibility features, or interactive storytelling (see the refocus sketch after this list).
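
A minimal sketch of post-capture refocusing via the classic shift-and-add (synthetic aperture) approach: shear every view in proportion to its angular offset, then average, so objects at the chosen depth align while everything else blurs. Integer shifts and the centre-of-grid convention are simplifications:

<syntaxhighlight lang="python">
import numpy as np

def refocus(lf: np.ndarray, slope: float) -> np.ndarray:
    """lf[U, V, H, W] (or [U, V, H, W, 3]): 4D light field on a camera grid.
    slope selects the refocus depth: scene points whose inter-view disparity
    equals slope come into focus; slope = 0 keeps the original focal plane."""
    U, V = lf.shape[:2]
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    acc = np.zeros_like(lf[0, 0], dtype=np.float64)
    for u in range(U):
        for v in range(V):
            dy = int(round(slope * (u - cu)))
            dx = int(round(slope * (v - cv)))
            acc += np.roll(lf[u, v], (dy, dx), axis=(0, 1))
    return acc / (U * V)
</syntaxhighlight>

Production pipelines use subpixel (interpolated) shifts and aperture weighting, but the averaging structure is the same.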


* '''Neural Radiance Fields (NeRF) and Neural Rendering:''' These [[machine learning]] techniques are rapidly evolving, offering efficient ways to represent and render complex scenes with view-dependent effects, potentially revolutionizing light field capture and synthesis for VR/AR (a small positional-encoding sketch appears after this list).<ref name="Art1Mildenhall"/>
* '''Eye-Tracked Foveated Light Fields:''' Combining [[eye tracking]] with light field rendering/display allows concentrating detail and computational resources where the user is looking ([[foveated rendering]]), making real-time performance more feasible (see the foveation sketch after this list).<ref name="Art1Kaplanyan">Kaplanyan, A. S., Sochenov, A., Leimkühler, T., Okunev, M., Goodall, T., & Rufo, G. (2019). "DeepFovea: Neural reconstruction for foveated rendering and video compression using learned statistics of natural videos." ACM Transactions on Graphics, 38(6), 212.</ref><ref name="Art4NvidiaResearch">Sun, Q., Huang, F.-C., Kim, J., et al. (2017). "Perceptually-Guided Foveation for Light Field Displays." ''ACM SIGGRAPH Asia 2017 Technical Papers''. NVIDIA Research project page. https://research.nvidia.com/publication/2017-11_perceptually-guided-foveation-light-field-displays (Accessed 3 May 2025).</ref>
* '''Compact Light Field Optics:''' Development of [[metalenses]], [[diffractive optics]], novel [[waveguide]] designs, and HOEs aims to create thinner, lighter, and more efficient optics for near-eye light field displays suitable for glasses-like AR/VR devices.<ref name="Art1WangOptics">Wang, N., Hua, H., & Viegas, D. (2021). "Compact optical see-through head-mounted display with varifocal liquid membrane lens." Digital Holography and Three-Dimensional Imaging 2021, OSA Technical Digest, DM3B.3.</ref><ref name="Art4NvidiaDev"/>
* '''Light Field Video Streaming:''' Advances in compression and network bandwidth may enable real-time streaming of light field video for immersive communication, entertainment, and training.<ref name="Art1Gutierrez">Gutiérrez-Navarro, D., & Pérez-Daniel, K. R. (2022). "Light field video streaming: A review." IEEE Access, 10, 12345-12367.</ref>
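
As a taste of the neural-rendering toolbox, here is the frequency (positional) encoding NeRF applies to 3D positions and view directions before feeding them to its MLP; the band count is a tunable hyperparameter, and the function name is illustrative:

<syntaxhighlight lang="python">
import numpy as np

def positional_encoding(x: np.ndarray, num_bands: int = 10) -> np.ndarray:
    """Map coordinates in [-1, 1] to sin/cos features at exponentially
    spaced frequencies, letting an MLP represent high-frequency detail.
    x: (..., D) array; returns (..., D * 2 * num_bands)."""
    freqs = (2.0 ** np.arange(num_bands)) * np.pi    # pi, 2*pi, 4*pi, ...
    angles = x[..., None] * freqs                    # (..., D, num_bands)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)
</syntaxhighlight>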
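
And a toy illustration of the foveation idea: given a tracked gaze point, spend many ray samples per pixel near the fovea and few in the periphery. The rates and falloff radius below are made-up illustrative numbers, not values from the cited systems:

<syntaxhighlight lang="python">
import numpy as np

def foveated_sample_budget(h, w, gaze_yx, full=64, floor=4, fovea_px=80):
    """Per-pixel sample budget that decays linearly with eccentricity
    from the gaze point, clamped between `floor` and `full`."""
    ys, xs = np.mgrid[0:h, 0:w]
    ecc = np.hypot(ys - gaze_yx[0], xs - gaze_yx[1])  # pixel distance from gaze
    weight = np.clip(1.0 - (ecc - fovea_px) / (4.0 * fovea_px), 0.0, 1.0)
    return np.rint(floor + (full - floor) * weight).astype(int)
</syntaxhighlight>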