Light field display
A Light Field Display (LFD) is an advanced visualization technology designed to reproduce a light field, the distribution of light rays in 3D space, including their intensity and direction.[1] Unlike conventional 2D displays or stereoscopic 3D systems, which present flat images or fixed viewpoints requiring glasses, light field displays aim to recreate how light naturally propagates from a real scene.[2] This allows viewers to perceive genuine depth, parallax (both horizontal and vertical), and perspective changes, in many implementations without special eyewear.[3][4]
This technology is considered crucial for the future of Virtual Reality (VR) and Augmented Reality (AR) because it can directly address the Vergence-accommodation conflict (VAC).[5][6] By providing correct focal cues that match the vergence information, LFDs promise more immersive, realistic, and visually comfortable experiences, reducing eye strain and simulator sickness often associated with current HMDs.[7][8]
Definition and Principles
A light field display aims to replicate the Plenoptic Function, a theoretical function describing the complete set of light rays passing through every point in space, in every direction, potentially across time and wavelength.[9] In practice, light field displays generate a discretized (sampled) approximation of the relevant 4D subset of this function (typically spatial position and angular direction).[9][10]
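In the two-plane parameterization popularized by light field rendering, each ray is indexed by its intersections (u, v) and (s, t) with two parallel reference planes. The following summary uses standard notation and is shown for orientation rather than taken from a specific cited source:

```latex
% 7D plenoptic function: radiance of every ray through every point,
% at every wavelength \lambda and time t
P(x, y, z, \theta, \phi, \lambda, t)

% Fixing \lambda and t leaves a 5D function; because radiance is constant
% along a ray in free space, one further dimension is redundant, leaving
% the 4D light field in the two-plane parameterization:
L(u, v, s, t)
```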
By controlling the direction as well as the color and intensity of emitted light rays, these displays allow the viewer's eyes to naturally focus (accommodation) at different depths within the displayed scene, matching the depth cues provided by binocular vision (vergence).[11][12] This recreation allows users to experience:
- Full motion parallax (horizontal and vertical look-around).[13]
- Accurate occlusion cues.
- Natural focal cues, mitigating the Vergence-accommodation conflict.[14][11]
- Specular highlights and realistic reflections that change with viewpoint.
- Often, viewing without specialized eyewear (especially in non-headset formats).[15]
Key Characteristics
- Glasses-Free 3D: Many LFD formats (especially desktop and larger) offer autostereoscopic viewing for multiple users simultaneously, each seeing the correct perspective.[15][13]
- Full Parallax: True LFDs provide both horizontal and vertical parallax, unlike earlier autostereoscopic technologies that often limited parallax to side-to-side movement.[13]
- Vergence-Accommodation Conflict Resolution: A primary driver for VR/AR, LFDs can render virtual objects at appropriate focal distances, aligning accommodation and vergence to significantly improve visual comfort and realism.[11][12][16]
- Computational Requirements: Generating and processing the massive amount of data (multiple views or directional light information) needed for LFDs requires significant GPU power and bandwidth.[13][17]
- Resolution Trade-offs: A fundamental challenge involves balancing spatial resolution (image sharpness), angular resolution (smoothness of parallax/number of views), field of view (FoV), and depth of field.[17][18] This is often referred to as the spatio-angular resolution trade-off.
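As a concrete illustration of this trade-off, the sketch below divides a fixed panel pixel budget between spatial and angular samples. All numbers are hypothetical examples, not specifications of any display cited here:

```python
# Spatio-angular trade-off in a lenslet-based light field display.
# Hypothetical numbers: an 8K panel driving an 8 x 8 view grid.

panel_px_h, panel_px_v = 7680, 4320   # native panel resolution (8K UHD)
views_h, views_v = 8, 8               # angular samples (views) per axis

# Each lenslet spends views_h x views_v pixels on direction, so the
# spatial resolution seen in any single view is the panel resolution
# divided by the view count along each axis.
spatial_h = panel_px_h // views_h     # 960 spatial samples across
spatial_v = panel_px_v // views_v     # 540 spatial samples down

print(f"per-view spatial resolution: {spatial_h} x {spatial_v}")
print(f"angular resolution: {views_h} x {views_v} = {views_h * views_v} views")
# Doubling the views per axis (16 x 16) on the same panel would halve the
# per-view spatial resolution to 480 x 270: smoother parallax, blurrier image.
```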
History and Development
- Early Concepts and Foundations
The underlying concept can be traced back to Michael Faraday's 1846 suggestion that light might be understood as a field,[19] and it was mathematically formalized by Andrey Gershun, who coined the term "light field" in his 1936 study of radiometric transfer.[20] The practical groundwork for reproducing light fields was laid by Gabriel Lippmann's 1908 concept of Integral Photography ("photographie intégrale"), which used an array of small lenses to capture and reproduce light fields.[21] The modern computational understanding was significantly advanced by Adelson and Bergen's formalization of the Plenoptic Function in 1991.[22]
- Key Development Milestones
- 1908: Gabriel Lippmann introduces integral photography.[23]
- 1936: Andrey Gershun formalizes the light field mathematically.[24]
- 1991: Adelson and Bergen formalize the plenoptic function.[25]
- 1996: Levoy and Hanrahan publish work on Light Field Rendering.[26]
- 2004-2008: Early real-time computational light field display systems developed (e.g., the MERL 3D TV system).[28]
- 2005: Stanford Multi-camera Array demonstrated for light field capture.[27]
- 2010-2013: Introduction of multilayer, compressive, and tensor light field display concepts.[29][30]
- 2013: NVIDIA demonstrates near-eye light field display prototype for VR.[31]
- 2015 onwards: Emergence of commercial LFD products (e.g., Looking Glass Factory, Leia Inc.) and advanced prototypes (e.g., Sony, CREAL, Light Field Lab).[15][13][32]
Technical Implementations (How They Work)
Light field displays use various techniques to generate the 4D light field:
- Microlens Arrays (MLAs): A high-resolution display panel (LCD or OLED) is overlaid with an array of tiny lenses. Each lenslet directs light from the pixels beneath it into a specific set of directions, creating different views for different observer positions.[17][18] This common approach derives from integral imaging.[17] The trade-off is explicit: spatial resolution is set by the lenslet count, angular resolution by the pixels per lenslet (see the geometric sketch after this list).[17]
- Multilayer Displays (Stacked LCDs): Several layers of transparent display panels (typically LCDs) are stacked with air gaps. By computationally optimizing the opacity patterns on each layer, the display acts as a multiplicative spatial light modulator, shaping light from a backlight into a complex light field.[30][33] These are often explored for near-eye displays.[18]
- Directional Backlighting: A standard display panel (e.g., LCD) is combined with a specialized backlight that emits light in controlled directions. The backlight might use another LCD panel coupled with optics like lenticular sheets to achieve directionality.[34]
- Projector Arrays: Multiple projectors illuminate a screen (often lenticular or diffusive). Each projector provides a different perspective view, and their combined output forms the light field.[13]
- Parallax Barriers: An opaque layer with precisely positioned slits or apertures is placed in front of or between display panels. The barrier blocks light selectively, allowing different pixels to be seen from different angles.[35] Because the barrier absorbs much of the emitted light, this approach is typically less light-efficient than MLAs.
- Waveguide Optics: Light is injected into thin optical waveguides (similar to those in some AR glasses) and then coupled out at specific points with controlled directionality, often using diffractive optical elements (DOEs) or gratings.[36][37] This is explored for compact AR/VR systems.
- Time-Multiplexed Displays: Different views or directional illumination patterns are presented rapidly in sequence. If cycled faster than human perception, this creates the illusion of a continuous light field. Can be combined with other techniques like directional backlighting.[38]
- Holographic and Diffractive Approaches: While holographic displays reconstruct wavefronts through diffraction, some LFDs utilize holographic optical elements (HOEs) or related diffractive principles to achieve high angular resolution and potentially overcome MLA limitations.[39] Some companies use "holographic" terminology for their high-density LFDs.[40]
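To make the microlens-array geometry above concrete, the sketch below computes the emission angle of each sub-pixel behind a single lenslet under the thin-lens (paraxial) approximation, assuming the pixels sit at the lenslet's focal plane. All dimensions are hypothetical:

```python
import math

def pixel_ray_angle_deg(pixel_offset_mm: float, focal_length_mm: float) -> float:
    """Direction of the collimated ray bundle for a pixel at the lenslet's
    focal plane, offset from the optical axis. Thin-lens approximation:
    a pixel offset x maps to emission angle atan(-x / f)."""
    return math.degrees(math.atan2(-pixel_offset_mm, focal_length_mm))

# Hypothetical geometry: 1.0 mm lenslet pitch, 3.5 mm focal length,
# 8 sub-pixels of 0.125 mm pitch behind each lenslet.
lenslet_pitch_mm, focal_mm, n_subpx = 1.0, 3.5, 8
subpx_pitch_mm = lenslet_pitch_mm / n_subpx

for i in range(n_subpx):
    offset = (i - (n_subpx - 1) / 2) * subpx_pitch_mm  # offset from lens axis
    print(f"sub-pixel {i}: {pixel_ray_angle_deg(offset, focal_mm):+.2f} deg")
# The outermost sub-pixels bound the viewing cone: roughly +/-7.1 degrees
# here, i.e. about a 14-degree zone over which distinct views are presented.
```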
Types of Light Field Displays
- Near-Eye Light Field Displays: Integrated into VR/AR HMDs. Primarily focused on solving the VAC for comfortable, realistic close-up interactions.[11][12][18] Examples include research prototypes from NVIDIA[41] and academic groups,[42] and commercial modules from companies like CREAL.[43] Often utilize MLAs, stacked LCDs, or waveguide/diffractive approaches.[18][43]
- Tabletop/Desktop Displays: Provide glasses-free 3D for individual or small group viewing. Used for professional visualization, gaming, communication, and content creation.[15][13] Looking Glass Factory is a key player here, offering various sizes like the Looking Glass Portrait and the larger Looking Glass 27".[15][44] Leia Inc. also targets this market with monitor and mobile displays.[13] Typically use MLA or barrier technology.
- Large Format / Tiled Displays: Aimed at creating large-scale, immersive "holographic" experiences without glasses for public venues, command centers, or collaborative environments.[45][46] Light Field Lab's SolidLight™ platform uses modular panels designed to be tiled into large video walls.[45][47] Sony's ELF-SR series (Spatial Reality Display) uses high-speed vision sensors and a micro-optical lens for a single user but demonstrates high-fidelity desktop light field effects.[48]
Comparison with Other 3D Display Technologies
| Technology | Glasses Required | Natural Focal Cues (Solves VAC) | Full Motion Parallax | Typical View Field | Key Trade-offs |
|---|---|---|---|---|---|
| Light Field Displays | No (often) | Yes | Yes | Limited to Wide | Spatio-angular resolution trade-off, computation needs |
| Stereoscopic Displays | Yes | No | No (head tracking req.) | Wide | VAC causes fatigue, requires glasses |
| Autostereoscopic (non-LFD) | No | No | Limited (often H only) | Limited | Reduced resolution per view, fixed viewing zones |
| Volumetric Displays | No | Yes | Yes | 360° potential | Limited resolution, transparency/opacity issues, bulk |
| Holographic Displays | No | Yes | Yes | Often Limited | Extreme computational demands, speckle, small size |
LFDs offer a compelling balance, providing natural depth cues without glasses (in many formats) and resolving the VAC, but face challenges in achieving high resolution across both spatial and angular domains simultaneously.[17][18]
Content Creation
Creating content compatible with LFDs requires capturing or generating directional view information:
- Light Field Cameras / Plenoptic Cameras: Capture both intensity and direction of incoming light using specialized sensors (often with MLAs).[9] The captured data can be processed for LFD playback.
- Computer Graphics Rendering: Standard 3D scenes built in engines like Unity or Unreal Engine can be rendered from multiple viewpoints to generate the necessary data (a minimal camera-sweep sketch follows this list).[45][49] Specialized light field rendering techniques, potentially using ray tracing or neural methods like Neural Radiance Fields (NeRF), are also employed.[45][50]
- Photogrammetry and 3D Scanning: Real-world objects/scenes captured as 3D models can serve as input for rendering light field views.
- Existing 3D Content Conversion: Plugins and software tools (e.g., provided by Looking Glass Factory) allow conversion of existing 3D models, animations, or even stereoscopic content for LFD viewing.[49]
- Focal Stack Conversion: Research explores converting image stacks captured at different focal depths into light field representations, particularly for multi-layer displays.[30]
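As a rough illustration of the multi-view rendering mentioned above, the sketch below computes a horizontal camera sweep converged on a focal plane. The helper and its parameters are hypothetical; production toolchains (e.g., Looking Glass Factory's engine plugins) implement their own equivalents:

```python
import numpy as np

def camera_sweep_offsets(n_views: int, view_cone_deg: float,
                         focal_dist_m: float) -> np.ndarray:
    """Horizontal camera offsets (meters) for n_views spanning view_cone_deg,
    each camera converged (via a sheared frustum in practice) on a focal
    plane focal_dist_m away. A hypothetical helper for illustration only."""
    half_angle = np.radians(view_cone_deg / 2.0)
    max_offset = focal_dist_m * np.tan(half_angle)
    return np.linspace(-max_offset, max_offset, n_views)

# Example: 45 views over a 40-degree cone, focal plane 0.5 m from the rig.
offsets = camera_sweep_offsets(45, 40.0, 0.5)
print(f"{offsets[0]:+.3f} m ... {offsets[-1]:+.3f} m in {len(offsets)} steps")
# Each offset yields one perspective render; the views are then packed
# (e.g., into a tiled "quilt" image) to match the display's lens mapping.
```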
Applications
- Applications in VR and AR
- Enhanced Realism and Immersion: Correct depth cues make virtual objects appear more solid and stable, improving the sense of presence, especially for near-field interactions.[11][43]
- Improved Visual Comfort: Mitigating the VAC reduces eye strain, fatigue, and nausea, enabling longer and more comfortable VR/AR sessions.[14][12]
- Natural Interaction: Accurate depth perception facilitates intuitive hand-eye coordination for manipulating virtual objects.[43]
- Seamless AR Integration: Allows virtual elements to appear more cohesively integrated with the real world at correct focal depths.
- Vision Correction: Near-eye LFDs can potentially pre-distort the displayed light field to correct for the user's refractive errors, eliminating the need for prescription glasses within the headset.[43][51]
- Other Applications
- Medical Imaging and Visualization: Intuitive visualization of complex 3D scans (CT, MRI) for diagnostics, surgical planning, and education.[52]
- Scientific Visualization: Analyzing complex datasets in fields like fluid dynamics, molecular modeling, geology.[53]
- Digital Signage and Advertising: Eye-catching glasses-free 3D displays for retail and public spaces.[15]
- Product Design and Engineering (CAD/CAE): Collaborative visualization and review of 3D models.[54]
- Entertainment and Gaming: Immersive experiences in arcades, museums, theme parks, and potentially future home entertainment.[45]
- Automotive Displays: Heads-up displays (HUDs) or dashboards presenting information at appropriate depths.[55]
- Telepresence and Communication: Creating realistic, life-sized 3D representations of remote collaborators, like Google's Project Starline concept.[56]
- Microscopy: Viewing microscopic samples with natural depth perception.[9]
Challenges and Limitations
- Spatio-Angular Resolution Trade-off: Increasing the number of views (angular resolution) often decreases the perceived sharpness (spatial resolution) for a fixed display pixel count.[17][18]
- Computational Complexity & Bandwidth: Rendering, compressing, and transmitting the massive datasets for real-time LFDs is extremely demanding on GPUs and data infrastructure.[13][45]
- Manufacturing Complexity and Cost: Producing precise optical components like high-density MLAs, perfectly aligned multi-layer stacks, or large-area waveguide structures is challenging and costly.[45]
- Form Factor and Miniaturization: Integrating complex optics and electronics into thin, lightweight, and power-efficient near-eye devices remains difficult.[18][43]
- Limited Field of View (FoV): Achieving wide FoV comparable to traditional VR headsets while maintaining high angular resolution is challenging.[18]
- Brightness and Efficiency: Techniques like MLAs and parallax barriers inherently block or redirect light, reducing overall display brightness and power efficiency.
- Content Ecosystem: The workflow for creating, distributing, and viewing native light field content is still developing compared to standard 2D or stereoscopic 3D.[49]
- Visual Artifacts: Potential issues include moiré effects (from periodic structures like MLAs; see the beat-period sketch after this list), ghosting/crosstalk between views, and latency.
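For the moiré artifact noted in the last item above, the beat period of two overlaid parallel periodic structures (e.g., pixel grid and lenslet array) follows the standard two-grating formula. A minimal sketch with hypothetical pitches:

```python
def moire_pitch_mm(p1_mm: float, p2_mm: float) -> float:
    """Beat (moire) period of two parallel gratings with pitches p1 and p2:
    p1 * p2 / |p1 - p2|. As the pitches converge the beat period diverges,
    which is why designers detune or rotate the lenslet pitch relative to
    the pixel grid to push the pattern below visibility."""
    return (p1_mm * p2_mm) / abs(p1_mm - p2_mm)

# Hypothetical example: 0.100 mm pixel pitch under a 0.102 mm lenslet pitch.
print(f"moire beat period: {moire_pitch_mm(0.100, 0.102):.1f} mm")  # ~5.1 mm
```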
Key Players and Commercial Landscape
Several companies and research groups are active in LFD development:
- Looking Glass Factory: Leader in desktop/tabletop glasses-free LFDs (Looking Glass Portrait, 27", 65") for creators and enterprises.[15][49]
- Leia Inc.: Develops LFD technology for mobile devices (e.g., Lume Pad), monitors, and automotive displays, often switchable between 2D and 3D LFD modes.[13] Leia acquired its competitor Dimenco.
- CREAL: Swiss startup focused on compact near-eye LFD modules for AR/VR glasses aiming to solve VAC.[43]
- Light Field Lab: Developing large-scale, modular "holographic" LFD panels (SolidLight™) based on proprietary waveguide technology.[57][47]
- Sony: Produces the Spatial Reality Display (ELF-SR series), a high-fidelity desktop LFD using eye-tracking.[58]
- Avegant: Develops light field light engines, particularly for AR, focusing on VAC resolution.[59]
- Holografika: Offers glasses-free 3D LFD systems for professional applications.[60]
- Japan Display Inc. (JDI): Demonstrated prototype LFDs for various applications.[61]
- NVIDIA: Foundational research in near-eye LFDs and ongoing GPU development crucial for LFD rendering.[41][18]
- Google: Research in LFDs, demonstrated through concepts like Project Starline.[62]
- Academic Research: Institutions like MIT Media Lab, Stanford University, University of Arizona, and others continue to push theoretical and practical boundaries.[30][17][18]
Future Directions and Research
- Computational Display Optimization: Using AI and sophisticated algorithms to optimize the patterns shown on multi-layer displays or directional backlights, achieving better quality with fewer resources,[30] and using neural representations (such as NeRF) for efficient light field synthesis and compression.[63]
- Varifocal and Multifocal Integration: Hybrid approaches combining LFD principles with dynamic focus elements (liquid lenses, deformable mirrors) to achieve focus cues potentially more efficiently than pure LFDs.[18][64]
- Miniaturization for Wearables: Developing ultra-thin, efficient components using metasurfaces, holographic optical elements (HOEs), advanced waveguides, and MicroLED displays for integration into consumer AR/VR glasses.[43][65]
- Improved Content Capture and Creation Tools: Advancements in plenoptic cameras, AI-driven view synthesis, and streamlined software workflows.[63]
- Higher Resolution and Efficiency: Addressing the spatio-angular trade-off and improving light efficiency through new materials, optical designs (e.g., polarization multiplexing[66]), and display technologies.
See Also
- Light Field
- Plenoptic Function
- Integral imaging
- Autostereoscopic display
- Stereoscopy
- Holographic display
- Volumetric Display
- Varifocal display
- Vergence-accommodation conflict
- Virtual Reality
- Augmented Reality
- Head-mounted display
- Microlens array
- Spatial Light Modulator
References
- ↑ Wetzstein, G. (2020). Computational Displays: Achieving the Full Plenoptic Function. In SIGGRAPH Courses. Association for Computing Machinery. doi:10.1145/3388769.3407420
- ↑ Wetzstein, G., Lanman, D., Hirsch, M., & Raskar, R. (2012). Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting. ACM Transactions on Graphics, 31(4), Article 80. doi:10.1145/2185520.2185576
- ↑ Looking Glass Factory. Looking Glass 27″ Light Field Display. Retrieved from https://lookingglassfactory.com/looking-glass-27
- ↑ Hollister, S. (2024, January 19). Leia is building a 3D empire on the back of the worst phone we've ever reviewed. The Verge. Retrieved from https://www.theverge.com/24036574/leia-glasses-free-3d-ces-2024
- ↑ Zhang, S. (2015, August 11). The Obscure Neuroscience Problem That's Plaguing VR. WIRED. Retrieved from https://www.wired.com/2015/08/obscure-neuroscience-problem-thats-plaguing-vr
- ↑ Zhang, Z., Yan, X., Zhang, Y., Liu, Y., & Peng, Y. (2021). Vergence-accommodation conflict in optical see-through display: review and prospect. Opto-Electronic Advances, 4(9), 210003. doi:10.29026/oea.2021.210003
- ↑ Near-eye light field display - XinReality Wiki. Retrieved from https://xinreality.com/wiki/Near-eye_light_field_display
- ↑ CREAL. Light-field: Seeing Virtual Worlds Naturally. Retrieved from https://creal.com/technology/
- ↑ Wetzstein, G. (2020). Computational Displays: Achieving the Full Plenoptic Function. In SIGGRAPH Courses. Association for Computing Machinery. doi:10.1145/3388769.3407420
- ↑ Huang, F. C., Wetzstein, G., Barsky, B. A., & Raskar, R. (2014). Eyeglasses-free display: Towards correcting visual aberrations with computational light field displays. ACM Transactions on Graphics, 33(4), Article 59. doi:10.1145/2601097.2601122
- ↑ Near-eye light field display - XinReality Wiki. Retrieved from https://xinreality.com/wiki/Near-eye_light_field_display
- ↑ CREAL. Light-field: Seeing Virtual Worlds Naturally. Retrieved from https://creal.com/technology/
- ↑ Hollister, S. (2024, January 19). Leia is building a 3D empire on the back of the worst phone we've ever reviewed. The Verge. Retrieved from https://www.theverge.com/24036574/leia-glasses-free-3d-ces-2024
- ↑ Zhang, S. (2015, August 11). The Obscure Neuroscience Problem That's Plaguing VR. WIRED. Retrieved from https://www.wired.com/2015/08/obscure-neuroscience-problem-thats-plaguing-vr
- ↑ Looking Glass Factory. Looking Glass 27″ Light Field Display. Retrieved from https://lookingglassfactory.com/looking-glass-27
- ↑ Lanman, D., & Luebke, D. (2020). Near-Eye Light Field Displays for VR and AR. SIGGRAPH Courses. doi:10.1145/3388769.3407421
- ↑ Huang, F. C., Wetzstein, G., Barsky, B. A., & Raskar, R. (2014). Eyeglasses-free display: Towards correcting visual aberrations with computational light field displays. ACM Transactions on Graphics, 33(4), Article 59. doi:10.1145/2601097.2601122
- ↑ Lanman, D., & Luebke, D. (2020). Near-Eye Light Field Displays for VR and AR. SIGGRAPH Courses. doi:10.1145/3388769.3407421
- ↑ Princeton University Press. Faraday, Maxwell, and the Electromagnetic Field - How Two Men Revolutionized Physics. Retrieved from https://press.princeton.edu/books/hardcover/9780691161664/faraday-maxwell-and-the-electromagnetic-field
- ↑ Gershun, A. (1936). The Light Field. Moscow. (Translated by P. Moon & G. Timoshenko, 1939, Journal of Mathematics and Physics, XVIII, 51–151).
- ↑ Lippmann, G. (1908). Épreuves réversibles donnant la sensation du relief. Journal de Physique Théorique et Appliquée, 7(1), 821–825. doi:10.1051/jphystap:019080070082100
- ↑ Adelson, E. H., & Bergen, J. R. (1991). The plenoptic function and the elements of early vision. In M. Landy & J. A. Movshon (Eds.), Computational Models of Visual Processing (pp. 3–20). MIT Press.
- ↑ Lippmann, G. (1908). Épreuves réversibles donnant la sensation du relief. Journal de Physique Théorique et Appliquée, 7(1), 821–825. doi:10.1051/jphystap:019080070082100
- ↑ Gershun, A. (1936). The Light Field. Moscow. (Translated by P. Moon & G. Timoshenko, 1939, Journal of Mathematics and Physics, XVIII, 51–151).
- ↑ Adelson, E. H., & Bergen, J. R. (1991). The plenoptic function and the elements of early vision. In M. Landy & J. A. Movshon (Eds.), Computational Models of Visual Processing (pp. 3–20). MIT Press.
- ↑ Levoy, M., & Hanrahan, P. (1996). Light field rendering. Proceedings of the 23rd annual conference on Computer graphics and interactive techniques (SIGGRAPH '96), 31-42. doi:10.1145/237170.237193
- ↑ Wilburn, B., Joshi, N., Vaish, V., Talvala, E. V., Antunez, E., Barth, A., Adams, A., Horowitz, M., & Levoy, M. (2005). High performance imaging using large camera arrays. ACM SIGGRAPH 2005 Papers (SIGGRAPH '05), 765-776. doi:10.1145/1186822.1073256
- ↑ Matusik, W., & Pfister, H. (2004). 3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes. ACM SIGGRAPH 2004 Papers (SIGGRAPH '04), 814–824. doi:10.1145/1186562.1015805
- ↑ Lanman, D., Hirsch, M., Kim, Y., & Raskar, R. (2010). Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization. ACM SIGGRAPH Asia 2010 papers (SIGGRAPH ASIA '10), Article 163. doi:10.1145/1882261.1866191
- ↑ Wetzstein, G., Lanman, D., Hirsch, M., & Raskar, R. (2012). Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting. ACM Transactions on Graphics, 31(4), Article 80. doi:10.1145/2185520.2185576
- ↑ Lanman, D., & Luebke, D. (2013). Near-Eye Light Field Displays (Technical Report NVR-2013-004). NVIDIA Research. Retrieved from https://research.nvidia.com/sites/default/files/pubs/2013-11_Near-Eye-Light-Field/NVIDIA-NELD.pdf
- ↑ Lang, B. (2023, January 11). CREAL's Latest Light-field AR Demo Shows Continued Progress Toward Natural Depth & Focus. Road to VR. Retrieved from https://www.roadtovr.com/creal-light-field-ar-vr-headset-prototype/
- ↑ Lanman, D., Hirsch, M., Kim, Y., & Raskar, R. (2010). Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization. ACM SIGGRAPH Asia 2010 papers (SIGGRAPH ASIA '10), Article 163. doi:10.1145/1882261.1866191
- ↑ Maimone, A., Wetzstein, G., Hirsch, M., Lanman, D., Raskar, R., & Fuchs, H. (2013). Focus 3D: compressive accommodation display. ACM Transactions on Graphics, 32(5), Article 152. doi:10.1145/2516971.2516983
- ↑ Japan Display Inc. News (2016, December 5). Ultra-High Resolution Display with Integrated Parallax Barrier for Glasses-Free 3D. Retrieved from https://www.j-display.com/english/news/2016/20161205.html
- ↑ Light Field Lab. SolidLight Platform. Retrieved from https://www.lightfieldlab.com/solidlight
- ↑ Maimone, A., Georgiou, A., & Kollin, J. S. (2017). Holographic near-eye displays for virtual and augmented reality. ACM Transactions on Graphics, 36(4), Article 85. doi:10.1145/3072959.3073624
- ↑ Liu, S., Cheng, D., & Hua, H. (2014). An optical see-through head mounted display with addressable focal planes. 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 33-42. doi:10.1109/ISMAR.2014.6948403
- ↑ Martínez-Corral, M., Guan, Z., Li, Y., Xiong, Z., & Javidi, B. (2021). Review of light field technologies. Visual Computing for Industry, Biomedicine, and Art, 4(1), 29. doi:10.1186/s42492-021-00096-8
- ↑ IEEE Spectrum (2021, October 7). Light Field Lab Claims to Have Cracked the Code for Real Holograms. Retrieved from https://spectrum.ieee.org/light-field-lab-holograms
- ↑ Lanman, D., & Luebke, D. (2013). Near-Eye Light Field Displays (Technical Report NVR-2013-004). NVIDIA Research. Retrieved from https://research.nvidia.com/sites/default/files/pubs/2013-11_Near-Eye-Light-Field/NVIDIA-NELD.pdf
- ↑ Huang, F. C., Chen, K., & Wetzstein, G. (2015). The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues. ACM Transactions on Graphics, 34(4), Article 60. doi:10.1145/2766943
- ↑ Lang, B. (2023, January 11). CREAL's Latest Light-field AR Demo Shows Continued Progress Toward Natural Depth & Focus. Road to VR. Retrieved from https://www.roadtovr.com/creal-light-field-ar-vr-headset-prototype/
- ↑ Looking Glass Factory. Software Overview. Retrieved from https://lookingglassfactory.com/software
- ↑ IEEE Spectrum (2021, October 7). Light Field Lab Claims to Have Cracked the Code for Real Holograms. Retrieved from https://spectrum.ieee.org/light-field-lab-holograms
- ↑ Light Field Lab Press Release (2021, October 7). Light Field Lab Demonstrates SolidLight™, the Highest Resolution Holographic Display Platform Ever Designed. Retrieved from https://www.lightfieldlab.com/press/light-field-lab-demonstrates-solidlight
- ↑ Light Field Lab Press Release (2021, October 7). Light Field Lab Demonstrates SolidLight™, the Highest Resolution Holographic Display Platform Ever Designed. Retrieved from https://www.lightfieldlab.com/press/light-field-lab-demonstrates-solidlight
- ↑ Sony Professional. Sony's Spatial Reality Display. Retrieved from https://pro.sony/ue_US/products/professional-displays/elf-sr2
- ↑ Looking Glass Factory. Software Overview. Retrieved from https://lookingglassfactory.com/software
- ↑ Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., & Ng, R. (2020). NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. European Conference on Computer Vision (ECCV), 405-421. doi:10.1007/978-3-030-58452-8_24
- ↑ Huang, F. C., Chen, K., & Wetzstein, G. (2015). The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues. ACM Transactions on Graphics, 34(4), Article 60. doi:10.1145/2766943
- ↑ Nam, J., McCormick, M., & Tate, A. J. (2019). Light field display systems for medical imaging applications. Journal of Display Technology, 15(3), 215-225. doi:10.1002/jsid.785
- ↑ Halle, M. W., & Meng, J. (2017). LightPlanets: GPU-based rendering of transparent astronomical objects using light field methods. IEEE Transactions on Visualization and Computer Graphics, 23(5), 1479-1488. doi:10.1109/TVCG.2016.2535388
- ↑ Nam, J., McCormick, M., & Tate, A. J. (2019). Light field display systems for medical imaging applications. Journal of Display Technology, 15(3), 215-225. doi:10.1002/jsid.785
- ↑ Japan Display Inc. News (2016, December 5). Ultra-High Resolution Display with Integrated Parallax Barrier for Glasses-Free 3D. Retrieved from https://www.j-display.com/english/news/2016/20161205.html
- ↑ Google Blog (2023, May 10). A first look at Project Starline’s new, simpler prototype. Retrieved from https://blog.google/technology/research/project-starline-prototype/
- ↑ Light Field Lab. SolidLight Platform. Retrieved from https://www.lightfieldlab.com/solidlight
- ↑ Sony Professional. Sony's Spatial Reality Display. Retrieved from https://pro.sony/ue_US/products/professional-displays/elf-sr2
- ↑ PR Newswire (2017, March 15). Avegant Introduces Light Field Technology for Mixed Reality. Retrieved from https://www.prnewswire.com/news-releases/avegant-introduces-light-field-technology-for-mixed-reality-300423855.html
- ↑ Holografika. Light Field Displays. Retrieved from https://holografika.com/light-field-displays/
- ↑ Japan Display Inc. News (2019, December 3). JDI Develops World's First 10.1-inch Light Field Display. Retrieved from https://www.j-display.com/english/news/2019/20191203_01.html
- ↑ Google Blog (2023, May 10). A first look at Project Starline's new, simpler prototype. Retrieved from https://blog.google/technology/research/project-starline-prototype/
- ↑ Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., & Ng, R. (2020). NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. European Conference on Computer Vision (ECCV), 405-421. doi:10.1007/978-3-030-58452-8_24
- ↑ Liu, S., Cheng, D., & Hua, H. (2014). An optical see-through head mounted display with addressable focal planes. 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 33-42. doi:10.1109/ISMAR.2014.6948403
- ↑ Martínez-Corral, M., Guan, Z., Li, Y., Xiong, Z., & Javidi, B. (2021). Review of light field technologies. Visual Computing for Industry, Biomedicine, and Art, 4(1), 29. doi:10.1186/s42492-021-00096-8
- ↑ Resolution enhancement of light field displays using a polarization multiplexing panel. (2025). Optics Communications, 583, 130858. doi:10.1016/j.optcom.2025.130858