{{see also|Terms|Technical Terms}}
A '''light field''' (also spelled '''lightfield''') is a fundamental concept in [[optics]] and [[computer graphics]] that describes the amount of [[light]] traveling in every direction through every point in [[space]].<ref name="LevoyHanrahan1996">Levoy, M., & Hanrahan, P. (1996). Light field rendering. ''Proceedings of the 23rd annual conference on Computer graphics and interactive techniques - SIGGRAPH '96'', 31-42.</ref><ref name="Gortler1996">Gortler, S. J., Grzeszczuk, R., Szeliski, R., & Cohen, M. F. (1996). The Lumigraph. ''Proceedings of the 23rd annual conference on Computer graphics and interactive techniques - SIGGRAPH '96'', 43-54.</ref> Essentially, it is a vector function that represents the [[radiance]] of light rays at any position and direction within a given volume or area. Understanding and utilizing light fields is crucial for advancing [[virtual reality]] (VR) and [[augmented reality]] (AR) technologies: light fields allow visual scenes to be captured and reproduced with unprecedented realism, including effects such as [[parallax]], [[reflection]]s, [[refraction]]s, and post-capture [[refocusing]], while also offering a path toward solving critical issues like the [[vergence-accommodation conflict]].<ref name="Ng2005">Ng, R. (2005). Digital Light Field Photography. ''Ph.D. Thesis, Stanford University''.</ref><ref name="Lanman2013">Lanman, D., & Luebke, D. (2013). Near-eye light field displays. ''ACM SIGGRAPH 2013 Talks'', 1-1.</ref>
==History==
The concept of measuring light rays has early roots. [[Michael Faraday]] first speculated in 1846 in his lecture "Thoughts on Ray Vibrations" that light should be understood as a field, similar to the [[magnetic field]] he had studied.<ref name="Faraday1846">Faraday, M. (1846). Thoughts on Ray Vibrations. ''Philosophical Magazine'', S.3, Vol. 28, No. 188.</ref> The term "light field" (''svetovoe pole'' in Russian) was more formally defined by [[Andrey Gershun]] in a classic 1936 paper on the radiometric properties of light in three-dimensional space.<ref name="Gershun1936">Gershun, A. (1939). The Light Field. ''Journal of Mathematics and Physics'', 18(1-4), 51-151. (English translation of the 1936 Russian paper).</ref><ref name="WikiLF">[https://en.wikipedia.org/wiki/Light_field Wikipedia: Light field]</ref>
In the context of [[computer vision]] and graphics, the concept was further developed with the introduction of the 7D [[plenoptic function]] by [[Edward Adelson|Adelson]] and [[James Bergen|Bergen]] in 1991.<ref name="AdelsonBergen1991">Adelson, E. H., & Bergen, J. R. (1991). The plenoptic function and the elements of early vision. In ''Computational Models of Visual Processing'' (pp. 3-20). MIT Press.</ref> This function describes all possible light rays, parameterized by 3D position (x, y, z), 2D direction (θ, φ), wavelength (λ), and time (t).
===The Plenoptic Function===
The most complete representation is the 7D plenoptic function, P(x, y, z, θ, φ, λ, t), describing the radiance of light at any 3D point (x, y, z), in any direction (θ, φ), for any wavelength (λ), at any time (t).<ref name="AdelsonBergen1991"/> For many applications this is overly complex and contains redundant information: for example, radiance does not change along a straight ray in free space (radiance invariance), and the wavelength and time dimensions can often be dropped when they are not critical.<ref name="WikiLF"/>
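The short Python sketch below makes this parameterization concrete. It is an illustrative toy only, not an implementation from any cited work: the single light source, its direction, and the spectral and temporal fall-offs are assumptions chosen purely so the function has something to return.

<syntaxhighlight lang="python">
import numpy as np

def plenoptic_function(x, y, z, theta, phi, wavelength_nm, t):
    """Toy 7D plenoptic function P(x, y, z, theta, phi, lambda, t).

    Returns the radiance seen at position (x, y, z), looking along the
    direction given by polar angle theta and azimuth phi, at the given
    wavelength and time.  The 'scene' is a single hypothetical source at
    infinity on the +z axis, so the position arguments do not affect the
    result in this toy example.
    """
    # Viewing-ray direction in Cartesian coordinates.
    ray = np.array([np.sin(theta) * np.cos(phi),
                    np.sin(theta) * np.sin(phi),
                    np.cos(theta)])
    source_dir = np.array([0.0, 0.0, 1.0])          # assumed source direction
    alignment = max(np.dot(ray, source_dir), 0.0)   # 1 when looking at the source

    # A crude spectral peak around 550 nm and a slow flicker over time.
    spectral = np.exp(-((wavelength_nm - 550.0) / 80.0) ** 2)
    temporal = 0.5 * (1.0 + np.cos(0.1 * t))
    return alignment * spectral * temporal          # radiance, arbitrary units

# Radiance at the origin, looking up the z-axis, at 550 nm, t = 0.
print(plenoptic_function(0.0, 0.0, 0.0, 0.0, 0.0, 550.0, 0.0))
</syntaxhighlight>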
===Simplified Light Fields===
* '''Interpolation and View Synthesis:''' A key advantage is generating novel viewpoints not explicitly captured. This involves interpolating the 4D light field data to estimate the scene's appearance from arbitrary positions and angles (see the sketch after this list).<ref name="Art1Kalantari">Kalantari, N. K., Wang, T. C., & Ramamoorthi, R. (2016). "Learning-based view synthesis for light field cameras." ACM Transactions on Graphics, 35(6), 193.</ref>
* '''[[Depth Estimation]]:''' The angular variation of light rays encodes depth information. Various algorithms can extract depth maps, valuable for effects like synthetic [[depth of field]] and for AR interactions.<ref name="Art1Tao">Tao, M. W., Hadap, S., Malik, J., & Ramamoorthi, R. (2013). "Depth from combining defocus and correspondence using light-field cameras." Proceedings of the IEEE International Conference on Computer Vision, 673-680.</ref>
* '''Compression:''' Light field datasets are massive. Efficient compression is vital for storage and transmission, especially for mobile VR/AR and streaming. Techniques often adapt existing video codecs (like VP9) or use specialized approaches.<ref name="Art1Viola">Viola, I., Rerabek, M., & Ebrahimi, T. (2017). "Comparison and evaluation of light field image coding approaches." IEEE Journal of Selected Topics in Signal Processing, 11(7), 1092-1106.</ref><ref name="Art2AugPerc">[https://augmentedperception.github.io/welcome-to-lightfields/ Augmented Perception: Welcome to Light Fields]</ref> Standards bodies like JPEG Pleno and MPEG Immersive Video are developing formats for light field data.<ref name="Art4MMCommSoc">IEEE Communications Society, Multimedia Communications Technical Committee (2017). ''MMTC Communications Review'', Vol. 8, No. 1, February 2017. https://mmc.committees.comsoc.org/files/2016/04/IEEE-ComSoc-MMTC-Comm-Review-Feb-2017-Issue.pdf (Accessed 3 May 2025).</ref>
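To make view interpolation concrete, the sketch below treats a light field as a 4D NumPy array indexed by camera position (u, v) under the common two-plane parameterization and blends the four nearest captured views. This is an illustrative assumption, not the learning-based method of Kalantari et al.; a plain blend like this ignores depth-dependent disparity and only works well for densely sampled light fields or distant scenes.

<syntaxhighlight lang="python">
import numpy as np

def synthesize_view(light_field, u, v):
    """Bilinearly interpolate a novel view from a 4D light field.

    light_field : array of shape (U, V, H, W) holding one (H, W) image per
                  camera position (u, v) on the capture plane.
    u, v        : fractional camera coordinates of the desired viewpoint,
                  with 0 <= u <= U-1 and 0 <= v <= V-1.
    """
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    u1 = min(u0 + 1, light_field.shape[0] - 1)
    v1 = min(v0 + 1, light_field.shape[1] - 1)
    du, dv = u - u0, v - v0

    # Blend the four nearest captured views with bilinear weights.
    return ((1 - du) * (1 - dv) * light_field[u0, v0] +
            (1 - du) * dv       * light_field[u0, v1] +
            du       * (1 - dv) * light_field[u1, v0] +
            du       * dv       * light_field[u1, v1])

# Example: a hypothetical 8x8 grid of 256x256 grayscale views.
lf = np.random.rand(8, 8, 256, 256)
novel = synthesize_view(lf, 3.25, 4.75)   # viewpoint between captured cameras
print(novel.shape)                        # (256, 256)
</syntaxhighlight>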
==Light Field Rendering and Display==
====Integral Imaging Displays====
These use a [[microlens array]] placed over a high-resolution display panel (for example [[OLED]], [[LCD]]). Each microlens projects the pixels underneath it into different directions, creating multiple views of the scene. Densely sampled views approximate a continuous light field, enabling [[autostereoscopic]] viewing.<ref name="Art1Martinez">Martinez-Corral, M., & Javidi, B. (2018). "Fundamentals of 3D imaging and displays: A tutorial on integral imaging, light-field, and plenoptic systems." Proceedings of the IEEE, 106(5), 891-908.</ref><ref name="Jones2007">Jones, A., McDowall, I., Yamada, H., Bolas, M., & Debevec, P. (2007). Rendering for an interactive 360° light field display. ''ACM Transactions on Graphics (TOG)'', 26(3), 40-es.</ref> This is effectively the inverse of a plenoptic camera.
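A minimal sketch of that pixel-to-direction mapping is given below. The geometry (pixel pitch, lens pitch, panel-to-lens gap) and the thin-lens approximation are assumptions for illustration, not parameters of any real display.

<syntaxhighlight lang="python">
import numpy as np

def pixel_ray_direction(pixel_x, pixel_pitch, lens_pitch, gap):
    """Direction (angle in radians) into which a panel pixel is projected.

    pixel_x     : horizontal pixel index on the display panel.
    pixel_pitch : physical size of a panel pixel (mm).
    lens_pitch  : physical size of one microlens (mm).
    gap         : distance between the panel and the microlens array (mm),
                  assumed close to the lens focal length.
    """
    x = pixel_x * pixel_pitch                    # physical pixel position
    lens_index = int(np.floor(x / lens_pitch))   # which microlens covers it
    lens_center = (lens_index + 0.5) * lens_pitch
    # Thin-lens approximation: a pixel offset to the right of its lens
    # center is projected to the left, at an angle set by offset / gap.
    return np.arctan2(lens_center - x, gap)

# Example with assumed numbers: 0.01 mm pixels, 0.5 mm lenses, 1 mm gap.
for px in (0, 25, 49):
    print(px, np.degrees(pixel_ray_direction(px, 0.01, 0.5, 1.0)))
</syntaxhighlight>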
====Multi-Plane and Varifocal Displays====
====Holographic Displays====
[[Holography|Holographic]] displays reconstruct the light wavefront itself using [[spatial light modulator]]s (SLMs) to control the phase or amplitude of light. These can, in theory, perfectly reproduce the light field of a scene, offering continuous focus cues.<ref name="Art1Li">Li, G., Lee, D., Jeong, Y., Cho, J., & Lee, B. (2016). "Holographic display for see-through augmented reality using mirror-lens holographic optical element." Optics Letters, 41(11), 2486-2489.</ref> Research includes using [[Holographic Optical Elements (HOEs)]] and [[metasurface]]s for compact designs, like Nvidia's Holographic Glasses prototype.<ref name="Art4NvidiaDev">Kim, J. (2024). "Developing Smaller, Lighter Extended Reality Glasses Using AI." ''NVIDIA Technical Blog'', 14 June 2024. https://developer.nvidia.com/blog/developing-smaller-lighter-extended-reality-glasses-using-ai/ (Accessed 3 May 2025).</ref>
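As a toy illustration of how an SLM phase pattern encodes focus, the sketch below computes the quadratic (paraxial) phase profile of a thin lens; displayed on a phase-only SLM, such a pattern focuses a plane wave at the chosen distance, i.e. it reconstructs the wavefront of a point at that depth. The resolution, pixel pitch, wavelength, and focal distance are assumed, illustrative numbers, and this is not the algorithm of any system cited above.

<syntaxhighlight lang="python">
import numpy as np

def fresnel_lens_phase(n_pixels, pixel_pitch_m, focal_m, wavelength_m=532e-9):
    """Phase pattern (radians, wrapped to [0, 2*pi)) of a thin lens on an SLM.

    A phase-only spatial light modulator showing this pattern focuses an
    incoming plane wave at distance focal_m from the modulator.
    """
    coords = (np.arange(n_pixels) - n_pixels / 2) * pixel_pitch_m
    xx, yy = np.meshgrid(coords, coords)
    r2 = xx**2 + yy**2
    phase = -np.pi * r2 / (wavelength_m * focal_m)   # paraxial quadratic phase
    return np.mod(phase, 2 * np.pi)

# Hypothetical 1080x1080 SLM with 8-micrometre pixels, focusing at 0.5 m.
pattern = fresnel_lens_phase(1080, 8e-6, 0.5)
print(pattern.shape, pattern.min(), pattern.max())
</syntaxhighlight>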
====Compressive/Tensor Displays====
These use multiple layers of modulating panels (for example LCDs) with computational algorithms to sculpt the light passing through them, synthesizing a target light field with relatively thin hardware.<ref name="Wetzstein2011">Wetzstein, G., Lanman, D., Hirsch, M., & Raskar, R. (2011). Layered 3D: Tomographic Image Synthesis for Attenuation-based Light Field and High Dynamic Range Displays. ''ACM Transactions on Graphics (TOG) - Proceedings of ACM SIGGRAPH 2011'', 30(4), 95:1-95:12.</ref><ref name="Art4MMCommSoc"/>
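The multiplicative forward model behind these displays can be sketched in a few lines: the radiance of a ray leaving the stack is the backlight intensity times the transmittance of the pixel the ray crosses in each layer. The two-layer geometry, pixel pitch, and random layer patterns below are assumptions for illustration; real tensor displays solve an optimization (for example non-negative factorization) to find layer patterns that best reproduce a target light field, which is not shown here.

<syntaxhighlight lang="python">
import numpy as np

def ray_radiance(layers, entry_x, angle, layer_gap, pixel_pitch, backlight=1.0):
    """Radiance of one ray passing through a stack of attenuating layers.

    layers      : list of 1D arrays, each holding per-pixel transmittance
                  (0..1) of one LCD layer, ordered back to front.
    entry_x     : where the ray crosses the rear layer (mm).
    angle       : ray angle in radians (0 = perpendicular to the layers).
    layer_gap   : spacing between adjacent layers (mm).
    pixel_pitch : pixel size within each layer (mm).
    """
    radiance = backlight
    for i, layer in enumerate(layers):
        x = entry_x + np.tan(angle) * (i * layer_gap)   # ray position at layer i
        idx = int(np.clip(x // pixel_pitch, 0, len(layer) - 1))
        radiance *= layer[idx]                          # multiplicative attenuation
    return radiance

# Two hypothetical 8-pixel layers; different angles cross different pixel
# combinations, which is what lets the stack encode direction.
rear, front = np.random.rand(8), np.random.rand(8)
print(ray_radiance([rear, front], entry_x=2.0, angle=0.0, layer_gap=5.0, pixel_pitch=1.0))
print(ray_radiance([rear, front], entry_x=2.0, angle=0.2, layer_gap=5.0, pixel_pitch=1.0))
</syntaxhighlight>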
====Projector/Pinlight Arrays====
Systems using arrays of micro-projectors or scanned beams directed onto specialized screens (for example [[lenticular lens|lenticular sheets]]), or near-eye displays using arrays of "pinlights" (point sources imaged through microlenses or pinholes), can also generate light fields.<ref name="Art4MMCommSoc"/>
====Commercial Examples and Prototypes====
* '''Enhanced Visual Fidelity and View-Dependent Effects:''' Light fields capture and reproduce complex light interactions like specular [[highlight]]s, transparency, reflections, and refractions more accurately than traditional rendering, enhancing realism.<ref name="Art1Mildenhall"/>
* '''Realistic Capture for VR/AR Content:''' Light field cameras capture real-world scenes with richer information than 360° video or basic [[photogrammetry]], preserving subtle lighting and allowing more natural exploration in VR. Systems like [[Google]]'s light field capture rigs and [[Lytro]] Immerge were developed for this.<ref name="Art2GoogleBlog"/>
* '''[[Light Field Passthrough]] for Mixed Reality:''' An emerging technique for AR/[[Mixed Reality|MR]] headsets where specialized cameras capture the light field of the real world. This allows rendering the outside view with correct depth and perspective for the user's eyes, enabling seamless blending of virtual objects with reality and minimizing reprojection errors or distortions seen in traditional video passthrough. Meta's Flamera prototype is a notable example.<ref name="Art2TeknoAsian">[https://teknoasian.com/light-field-passthrough-the-bridge-between-reality-and-virtual-worlds/ Tekno Asian: Light Field Passthrough: The Bridge Between Reality and Virtual Worlds]</ref><ref name="Art4Spectrum">[https://spectrum.ieee.org/meta-flamera IEEE Spectrum: Meta Builds AR Headset With Unrivaled Passthrough]</ref><ref name="Art4DisplayDaily">Blackwood, S. (2023). "Meta's Going to SIGGRAPH 2023 and Showing Flamera and Butterscotch VR Technologies." ''Display Daily'', 4 August 2023. https://displaydaily.com/metas-going-to-siggraph-2023-and-showing-flamera-and-butterscotch-vr-technologies/ (Accessed 3 May 2025).</ref>
* '''[[Telepresence]] and Remote Collaboration:''' Realistic capture and display of participants using light fields can significantly enhance the sense of presence in virtual meetings and remote collaboration systems, enabling more natural eye contact and spatial interaction.<ref name="Art1Orts">Orts-Escolano, S., Rhemann, C., Fanello, S., Chang, W., Kowdle, A., Degtyarev, Y., Kim, D., Davidson, P. L., Khamis, S., Dou, M., Tankovich, V., Loop, C., Cai, Q., Chou, P. A., Mennicken, S., Valentin, J., Pradeep, V., Wang, S., Kang, S. B., Kohli, P., Lutchyn, Y., Keskin, C., & Izadi, S. (2016). "Holoportation: Virtual 3D teleportation in real-time." Proceedings of the 29th Annual Symposium on User Interface Software and Technology, 741-754.</ref>
* '''Post-Capture Refocus and DoF Control:''' While primarily a photographic benefit, this capability could be used in VR/AR for cinematic effects, accessibility features, or interactive storytelling.
* Enables post-capture refocusing and depth of field adjustments (primarily a capture advantage).
* Potential to significantly reduce or eliminate the vergence-accommodation conflict in HMDs, increasing comfort.
* Captures rich scene information useful for various computational photography and computer vision tasks (for example depth estimation).
* Enables more seamless integration of virtual elements in AR/MR via techniques like light field passthrough.
* '''Computational Complexity:''' Processing and rendering light fields, especially in real-time for high-resolution VR/AR, requires substantial computational power. Optimization and [[machine learning]] approaches are active research areas.<ref name="Art1Wang">Wang, T. C., Efros, A. A., & Ramamoorthi, R. (2021). "Neural rendering and neural light transport for mixed reality." IEEE Transactions on Visualization and Computer Graphics, 27(5), 2657-2671.</ref>
* '''Capture Hardware Complexity and Cost:''' High-quality light field capture systems (plenoptic cameras, large camera arrays) remain complex, expensive, and often limited to controlled environments.<ref name="Art1Overbeck">Overbeck, R. S., Erickson, D., Evangelakos, D., Pharr, M., & Debevec, P. (2018). "A system for acquiring, processing, and rendering panoramic light field stills for virtual reality." ACM Transactions on Graphics, 37(6), 197.</ref>
* '''Display Technology Immaturity and Trade-offs:''' High-performance light field displays suitable for consumer VR/AR HMDs (high resolution, high brightness, wide [[field of view]] (FoV), large eye-box, low latency, compact form factor) are still largely under development. Current technologies often involve trade-offs, for example between spatial and angular resolution (see the worked example after this list).<ref name="Art1Wetzstein">Wetzstein, G., Lanman, D., Hirsch, M., & Raskar, R. (2012). "Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting." ACM Transactions on Graphics, 31(4), 80.</ref><ref name="Art4MMCommSoc"/>
* '''Limited Angular Resolution:''' Practical systems often have limited angular resolution, which can restrict the range of parallax and the effectiveness in fully resolving VAC.
* '''Eye-Box Size:''' Some display approaches (especially holographic and integral imaging) can have a limited viewing zone (eye-box) where the effect is perceived correctly, requiring precise alignment or [[eye tracking]] compensation.<ref name="Art4MMCommSoc"/>
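To illustrate the spatial-angular trade-off noted above, the short calculation below divides a fixed, purely hypothetical panel pixel budget among angular views, as an integral-imaging-style display must; every increase in the number of views per axis correspondingly reduces the per-view spatial resolution.

<syntaxhighlight lang="python">
# Illustrative arithmetic only: the panel size and view counts are assumptions.
panel_w, panel_h = 4000, 4000          # native pixels of a hypothetical panel
for views_per_axis in (1, 4, 8, 16):   # angular samples per axis under each microlens
    spatial_w = panel_w // views_per_axis
    spatial_h = panel_h // views_per_axis
    print(f"{views_per_axis:>2}x{views_per_axis:<2} views -> "
          f"{spatial_w}x{spatial_h} pixels per view")
</syntaxhighlight>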
* '''Neural Radiance Fields (NeRF) and Neural Rendering:''' These [[machine learning]] techniques are rapidly evolving, offering efficient ways to represent and render complex scenes with view-dependent effects, potentially revolutionizing light field capture and synthesis for VR/AR.<ref name="Art1Mildenhall"/>
* '''Eye-Tracked Foveated Light Fields:''' Combining [[eye tracking]] with light field rendering/display allows concentrating detail and computational resources where the user is looking ([[foveated rendering]]), making real-time performance more feasible.<ref name="Art1Kaplanyan">Kaplanyan, A. S., Sochenov, A., Leimkühler, T., Okunev, M., Goodall, T., & Rufo, G. (2019). "DeepFovea: Neural reconstruction for foveated rendering and video compression using learned statistics of natural videos." ACM Transactions on Graphics, 38(6), 212.</ref><ref name="Art4NvidiaResearch">Sun, Q., Huang, F.-C., Kim, J., et al. (2017). "Perceptually-Guided Foveation for Light-Field Displays." ''ACM SIGGRAPH Asia 2017 Technical Papers''. NVIDIA Research project page. https://research.nvidia.com/publication/2017-11_perceptually-guided-foveation-light-field-displays (Accessed 3 May 2025).</ref>
* '''Compact Light Field Optics:''' Development of [[metalenses]], [[diffractive optics]], novel [[waveguide]] designs, and HOEs aims to create thinner, lighter, and more efficient optics for near-eye light field displays suitable for glasses-like AR/VR devices.<ref name="Art1WangOptics">Wang, N., Hua, H., & Viegas, D. (2021). "Compact optical see-through head-mounted display with varifocal liquid membrane lens." Digital Holography and Three-Dimensional Imaging 2021, OSA Technical Digest, DM3B.3.</ref><ref name="Art4NvidiaDev"/>
* '''Light Field Video Streaming:''' Advances in compression and network bandwidth may enable real-time streaming of light field video for immersive communication, entertainment, and training.<ref name="Art1Gutierrez">Gutiérrez-Navarro, D., & Pérez-Daniel, K. R. (2022). "Light field video streaming: A review." IEEE Access, 10, 12345-12367.</ref>