
===The Plenoptic Function===
The most complete representation is the 7D plenoptic function, P(x, y, z, θ, φ, λ, t), describing the radiance of light at any 3D point (x, y, z), in any direction (θ, φ), for any wavelength (λ), at any time (t).<ref name="AdelsonBergen1991"/> For many applications this is overly complex and contains redundant information: in free space, radiance does not change along a straight ray (radiance invariance), so the full dimensionality matters only when wavelength or time are critical.<ref name="WikiLF"/>
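Radiance invariance is what permits the common 4D simplification: at fixed wavelength and time, a free-space ray can be indexed by its intersections with two parallel planes (the "light slab"). A minimal Python sketch of this mapping follows; the plane positions and the example ray are illustrative assumptions, not values from the cited sources.

<syntaxhighlight lang="python">
import numpy as np

def ray_to_two_plane(origin, direction, z_uv=0.0, z_st=1.0):
    """Map a free-space ray to 4D light-slab coordinates (u, v, s, t).

    The (u, v) plane sits at z = z_uv and the (s, t) plane at z = z_st
    (hypothetical positions). Assumes direction[2] != 0, i.e. the ray
    is not parallel to the planes.
    """
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    # Intersect o + a*d with each plane by solving for the z component.
    a_uv = (z_uv - o[2]) / d[2]
    a_st = (z_st - o[2]) / d[2]
    u, v = (o + a_uv * d)[:2]
    s, t = (o + a_st * d)[:2]
    return float(u), float(v), float(s), float(t)

# Example: a ray through the origin, tilted slightly in x.
print(ray_to_two_plane([0, 0, 0], [0.1, 0.0, 1.0]))  # (0.0, 0.0, 0.1, 0.0)
</syntaxhighlight>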


===Simplified Light Fields===


====Integral Imaging Displays====
These use a [[microlens array]] placed over a high-resolution display panel (for example, [[OLED]] or [[LCD]]). Each microlens projects the pixels beneath it into different directions, creating multiple views of the scene. Densely sampled views approximate a continuous light field, enabling [[autostereoscopic]] viewing.<ref name="Art1Martinez">Martinez-Corral, M., & Javidi, B. (2018). "Fundamentals of 3D imaging and displays: A tutorial on integral imaging, light-field, and plenoptic systems." Proceedings of the IEEE, 106(5), 891-908.</ref><ref name="Jones2007">Jones, A., McDowall, I., Yamada, H., Bolas, M., & Debevec, P. (2007). Rendering for an interactive 360° light field display. ''ACM Transactions on Graphics (TOG)'', 26(3), 40-es.</ref> This is effectively the inverse of a plenoptic camera.
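To make the pixel-to-direction mapping concrete, here is a hedged sketch under the usual paraxial assumption that the panel sits one focal length behind the microlens array; the focal length and pixel pitch are illustrative, not taken from any cited device.

<syntaxhighlight lang="python">
import numpy as np

def pixel_ray_direction(dx_mm, dy_mm, focal_mm):
    """Paraxial exit-ray direction for a pixel offset (dx, dy) behind one
    microlens, assuming the panel lies at the lens focal plane."""
    d = np.array([-dx_mm / focal_mm, -dy_mm / focal_mm, 1.0])
    return d / np.linalg.norm(d)  # unit vector leaving the lens

# A 5-pixel-wide patch under a (hypothetical) 1 mm focal length lens
# with 0.05 mm pixels: each pixel maps to a distinct view direction.
for i in range(-2, 3):
    print(i, pixel_ray_direction(i * 0.05, 0.0, focal_mm=1.0))
</syntaxhighlight>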


====Multi-Plane and Varifocal Displays====


====Compressive/Tensor Displays====
These use multiple layers of modulating panels (for example, LCDs) with computational algorithms to sculpt the light passing through them, synthesizing a target light field with relatively thin hardware.<ref name="Wetzstein2011">Wetzstein, G., Lanman, D., Hirsch, M., & Raskar, R. (2011). Layered 3D: Tomographic Image Synthesis for Attenuation-based Light Field and High Dynamic Range Displays. ''ACM Transactions on Graphics (TOG) - Proceedings of ACM SIGGRAPH 2011'', 30(4), 95:1-95:12.</ref><ref name="Art4MMCommSoc"/>
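As a toy illustration of what such computational algorithms do, the following sketch factorizes a "flatland" (2D) light field into two multiplicative layer patterns with a simple nonnegative rank-1 update. It is a deliberately simplified stand-in for the tomographic and tensor-factorization solvers in the cited work, and all data here is synthetic.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
L = rng.random((32, 32))   # toy target light field: rows index the front
                           # pixel, columns the back pixel, of each ray
front = rng.random(32)     # front-layer transmittance pattern
back = rng.random(32)      # back-layer transmittance pattern

# Each ray's radiance is the product of the transmittances it crosses:
# L[s, u] ~= front[s] * back[u].  Alternating multiplicative updates
# keep both patterns nonnegative, as physical transmittances must be.
for _ in range(100):
    front *= (L @ back) / (front * (back @ back) + 1e-9)
    back *= (L.T @ front) / (back * (front @ front) + 1e-9)

err = np.linalg.norm(L - np.outer(front, back)) / np.linalg.norm(L)
print(f"relative reconstruction error: {err:.3f}")
</syntaxhighlight>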


====Projector/Pinlight Arrays====
Systems using arrays of micro-projectors or scanned beams directed onto specialized screens (for example, [[lenticular lens|lenticular sheets]]), or near-eye displays using arrays of "pinlights" (point sources imaged through microlenses or pinholes), can also generate light fields.<ref name="Art4MMCommSoc"/>


====Commercial Examples and Prototypes====
* Enables post-capture refocusing and depth of field adjustments (primarily a capture advantage); see the shift-and-add sketch after this list.
* Potential to significantly reduce or eliminate the vergence-accommodation conflict in HMDs, increasing comfort.
* Captures rich scene information useful for various computational photography and computer vision tasks (for example, depth estimation).
* Enables more seamless integration of virtual elements in AR/MR via techniques like light field passthrough.
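As referenced in the first bullet, post-capture refocusing can be illustrated with the classic shift-and-add scheme: each sub-aperture view is translated in proportion to its angular coordinate and the views are averaged. The sketch below uses synthetic data and an assumed [u, v, s, t] array layout; integer shifts with wrap-around are a simplification of the interpolated shifts used in practice.

<syntaxhighlight lang="python">
import numpy as np

def refocus(lf, alpha):
    """Shift-and-add refocusing of a 4D light field.

    lf: array of shape (U, V, S, T) -- angular (u, v), spatial (s, t).
    alpha: slope selecting the synthetic focal plane.
    """
    U, V, S, T = lf.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Translate each sub-aperture view in proportion to its
            # angular offset from the central view, then accumulate.
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(lf[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)  # average of the shifted views

lf = np.random.default_rng(1).random((5, 5, 64, 64))  # synthetic light field
image = refocus(lf, alpha=1.0)  # vary alpha to sweep the focal plane
</syntaxhighlight>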


* '''Computational Complexity:''' Processing and rendering light fields, especially in real time for high-resolution VR/AR, requires substantial computational power. Optimization and [[machine learning]] approaches are active research areas.<ref name="Art1Wang">Wang, T. C., Efros, A. A., & Ramamoorthi, R. (2021). "Neural rendering and neural light transport for mixed reality." IEEE Transactions on Visualization and Computer Graphics, 27(5), 2657-2671.</ref>
* '''Capture Hardware Complexity and Cost:''' High-quality light field capture systems (plenoptic cameras, large camera arrays) remain complex, expensive, and often limited to controlled environments.<ref name="Art1Overbeck">Overbeck, R. S., Erickson, D., Evangelakos, D., Pharr, M., & Debevec, P. (2018). "A system for acquiring, processing, and rendering panoramic light field stills for virtual reality." ACM Transactions on Graphics, 37(6), 197.</ref>
* '''Display Technology Immaturity and Trade-offs:''' High-performance light field displays suitable for consumer VR/AR HMDs (high resolution, high brightness, wide [[field of view]] (FoV), large eye-box, low latency, compact form factor) are still largely under development. Current technologies often involve trade-offs, for example, between spatial and angular resolution (see the back-of-envelope sketch after this list).<ref name="Art1Wetzstein">Wetzstein, G., Lanman, D., Hirsch, M., & Raskar, R. (2012). "Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting." ACM Transactions on Graphics, 31(4), 80.</ref><ref name="Art4MMCommSoc"/>
* '''Limited Angular Resolution:''' Practical systems often have limited angular resolution, which can restrict the range of parallax and the effectiveness in fully resolving VAC.
* '''Eye-Box Size:''' Some display approaches (especially holographic and integral imaging) can have a limited viewing zone (eye-box) where the effect is perceived correctly, requiring precise alignment or [[eye tracking]] compensation.<ref name="Art4MMCommSoc"/>
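The spatial-angular trade-off mentioned above is easy to quantify: a fixed pixel budget must be divided between the number of views and the resolution of each view. A back-of-envelope sketch with purely hypothetical numbers:

<syntaxhighlight lang="python">
# Hypothetical numbers: a 4000 x 4000 pixel panel behind a microlens array
# that allocates a 10 x 10 block of pixels (views) to each lens.
panel_px = (4000, 4000)
views = (10, 10)
spatial = (panel_px[0] // views[0], panel_px[1] // views[1])
print(f"{views[0]}x{views[1]} views -> {spatial[0]}x{spatial[1]} px per view")
</syntaxhighlight>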