Lens array
{{see also|Terms|Technical Terms}}
Lens arrays are two-dimensional arrangements of many small lenses (often [[Microlens array]]s) that manipulate light fields for imaging or display. In [[Virtual reality]] (VR) and [[Augmented reality]] (AR) systems, lens arrays serve two broad roles: as '''display optics''' that create 3D or light-field images, and as '''sensor optics''' that capture directional light for depth and eye tracking. In displays, lens arrays enable multi-view and focal-plane rendering (e.g. light-field displays or integral imaging) by splitting the image into many sub-images corresponding to different angles or depths.<ref name="Li2019">Li X, Chen L, Li Y, et al. A broadband achromatic metalens array for integral imaging in the visible. Light Sci Appl. 2019;8:99.</ref><ref name="Ng2005">Ng R, Levoy M, Brédif M, et al. Light field photography with a hand-held plenoptic camera. Computer Science Technical Report. 2005;2(11):1-11.</ref> In sensing, microlens-based ''plenoptic'' or light-field cameras capture the full 4D light field, allowing computational refocusing and depth estimation.<ref name="Ng2005" /><ref name="Yang2018">Yang L, Guo Y. Eye tracking using a light field camera on a head-mounted display. US Patent Application US20180173303A1. 2018 Jun 21.</ref> Modern VR/AR prototypes leverage microlens arrays, [[Light field display]] techniques, [[Integral imaging]], holographic waveguide couplers, and specialized lens-array modules for eye tracking and depth sensing. These components appear in devices such as wide-FOV near-eye displays and optical see-through [[Head-mounted display]]s.
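The display-side role can be made concrete with a small geometric sketch. The snippet below is a minimal illustration (not drawn from any cited system): it assumes a flat panel placed one lenslet focal length behind a square-pitch microlens array, so that each pixel behind a lenslet leaves as a roughly collimated beam whose angle depends on the pixel's offset from the lenslet centre. All pitch, focal-length, and pixel-size values are illustrative assumptions.

<syntaxhighlight lang="python">
import math

def pixel_ray_direction(px, py, pitch_mm=1.0, focal_mm=3.0, pixel_mm=0.05):
    """Map a display pixel to the ray direction emitted through its lenslet.

    Assumes the panel sits one lenslet focal length behind the microlens
    array, so each pixel's offset from its lenslet centre sets the angle
    of the (roughly collimated) beam leaving that lenslet.
    """
    pixels_per_lens = pitch_mm / pixel_mm
    # Which lenslet covers this pixel
    lens_ix = int(px // pixels_per_lens)
    lens_iy = int(py // pixels_per_lens)
    # Pixel-centre offset from the lenslet centre, in mm
    dx = (px + 0.5) * pixel_mm - (lens_ix + 0.5) * pitch_mm
    dy = (py + 0.5) * pixel_mm - (lens_iy + 0.5) * pitch_mm
    # A pixel displaced by d from the lens axis emits at angle atan(d / f)
    theta_x = math.atan2(-dx, focal_mm)
    theta_y = math.atan2(-dy, focal_mm)
    return (lens_ix, lens_iy), (theta_x, theta_y)

# Example: pixel (37, 12) on a 50-micrometre-pitch panel behind 1 mm lenslets
print(pixel_ray_direction(37, 12))
</syntaxhighlight>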
== History ==
Lens-array technology traces back over a century. Gabriel Lippmann first proposed "integral photography" in 1908, capturing 3D scenes via a lens grid.<ref name="Li2019" /> Early implementations used pinhole arrays (circa 1911) and later simple microlens plates (around 1948) to record and replay light fields.<ref name="Li2019" /> In the mid-20th century, lenticular (cylindrical lens) sheets became popular for autostereoscopic prints and displays (e.g. 3D postcards and packaging), providing separate views for each eye. By the 2000s, advances in digital displays and microfabrication revived lens-array research for head-worn displays. For example, smartphone-scale integral imaging was demonstrated by pairing a display with a matching MLA.<ref name="Li2019" /> In recent years, VR/AR research has produced thin, wide-FOV near-eye displays using sophisticated lens arrays (e.g. polarization optics or metasurfaces)<ref name="Shin2023">Shin G, Lee Y, Kim J, et al. Field of view and angular-resolution enhancement in microlens array type virtual reality near-eye display using polarization grating. PubMed. 2023;39876217.</ref>, as well as compact eye-tracking and depth cameras using microlens arrays.<ref name="Yang2018" /><ref name="Microsoft2020">Microsoft. Camera comprising lens array. Patent Nweon. 2020;30768.</ref>
'''Focal cueing and depth enhancement:''' Some VR/AR displays incorporate multiple lens arrays for depth/focus manipulation. For instance, a light-field HMD may use two stacked microlens arrays (a so-called dual-focal arrangement) to enlarge the depth range so that virtual objects at different distances can appear in focus simultaneously. Polarization or liquid-crystal arrays have been used to switch between focus planes. These advanced architectures aim to overcome the vergence-accommodation conflict by aligning the focus of the virtual image with the eyes' convergence.
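As a rough illustration of how switching between two focal states moves the focus plane, the following sketch applies the thin-lens (Gaussian) formula to a panel placed just inside the lenslet focal length. The gap and focal-length values are assumptions chosen only to yield two plausible virtual-image depths; they are not parameters of any published design.

<syntaxhighlight lang="python">
def virtual_image_distance_mm(gap_mm, focal_mm):
    """Where a lenslet places the virtual image of the display panel.

    For a panel a distance g inside the focal length f (the usual
    magnifier configuration), the Gaussian lens formula 1/f = 1/g - 1/d
    gives a virtual image at d = f * g / (f - g) on the panel side.
    """
    return focal_mm * gap_mm / (focal_mm - gap_mm)

# Two switchable focal states (illustrative values) -> two focus planes
gap = 4.0                      # panel-to-array spacing, mm (assumed)
for f in (4.02, 4.05):         # assumed effective lenslet focal lengths, mm
    d_m = virtual_image_distance_mm(gap, f) / 1000.0
    print(f"f = {f} mm -> virtual image at about {d_m:.2f} m")
</syntaxhighlight>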
== Applications in sensing ==
'''Light-field (plenoptic) cameras for depth and eye tracking:''' Lens arrays are fundamental to plenoptic imaging. Placing an MLA one microlens focal length in front of an image sensor allows each micro-image to capture rays arriving from different angles.<ref name="Ng2005" /> This effectively samples the full 4D light field of the scene. With computational processing, one can refocus the image after capture or compute depth maps from the parallax between micro-images.<ref name="Ng2005" /> In VR/AR, this is useful both for external depth sensing (scene reconstruction) and for internal eye imaging. For example, patents describe using a light-field camera (with an MLA) inside an HMD to capture the user's eye. The captured plenoptic data lets the system digitally refocus on various eye regions and compute gaze direction without needing precise IR glints.<ref name="Yang2018" /> This relaxes the geometric constraints on eye-tracker placement. Thus, microlens-based light-field cameras can support both environmental mapping and fine eye tracking in headsets.
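The refocusing step can be sketched in a few lines. The snippet below implements the standard shift-and-add synthetic refocus over a 4D light field stored as an array of sub-aperture views; the array layout, the synthetic random data, and the parameter <code>alpha</code> are illustrative assumptions rather than the pipeline of any specific camera or patent.

<syntaxhighlight lang="python">
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add synthetic refocus of a 4D light field.

    lightfield has shape (U, V, Y, X): one sub-aperture view per angular
    sample (u, v). Shifting each view in proportion to its angular offset
    and averaging brings a different depth plane into focus; alpha sets
    how far the synthetic focal plane moves (alpha = 0 keeps the original).
    """
    U, V, Y, X = lightfield.shape
    out = np.zeros((Y, X), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - (U - 1) / 2)))
            dx = int(round(alpha * (v - (V - 1) / 2)))
            out += np.roll(lightfield[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

# Tiny synthetic example: 5 x 5 angular samples of a 64 x 64 scene
lf = np.random.rand(5, 5, 64, 64)
near_focus = refocus(lf, alpha=1.0)
far_focus = refocus(lf, alpha=-1.0)
</syntaxhighlight>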
'''Depth sensors:''' Outside of full light-field cameras, some depth-sensing concepts also use microlenses. One approach is a multi-aperture structured-light projector, in which a lens array splits an IR source into many small beams that project a coded pattern for depth triangulation. Another is embedding microlenses over depth-sensing pixels to increase fill factor or directivity. In practice, however, most time-of-flight and stereo cameras in VR/AR do not use discrete lens arrays (they use single large lenses or laser projectors). The main use of lens arrays in sensing is thus in light-field capture (including gaze capture) rather than in typical ToF or stereo modules.
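For the structured-light concept above, depth recovery reduces to classic triangulation. The snippet below shows the textbook disparity-to-depth relation for a rectified projector-camera pair; the baseline, focal length (in pixels), and disparity are illustrative values only.

<syntaxhighlight lang="python">
def depth_from_disparity_m(disparity_px, baseline_m=0.05, focal_px=600.0):
    """Classic triangulation: depth = baseline * focal / disparity.

    In a rectified projector-camera (or stereo) pair, a projected feature
    observed disparity_px pixels away from its reference position lies at
    this depth; smaller disparities mean more distant surfaces.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return baseline_m * focal_px / disparity_px

# Example: a dot from the coded IR pattern shifted by 12 pixels
print(depth_from_disparity_m(12.0))   # -> 2.5 (metres)
</syntaxhighlight>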
== Technical specifications ==
Lens-array designs involve several key parameters:
Overall, the technical design of a lens array involves a trade-off between FOV, resolution, brightness, and physical thickness. Emerging approaches like metalens arrays promise thinner optics with engineered dispersion<ref name="Li2019" />, which may shift these trade-offs in future systems.
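The core trade-off can be quantified with a back-of-the-envelope pixel budget: in an integral-imaging display, the lenslet pitch divides a fixed panel resolution between spatial samples (one per lenslet) and angular views (the pixels behind each lenslet). The sketch below uses assumed panel and pitch numbers purely for illustration.

<syntaxhighlight lang="python">
def integral_display_budget(panel_px_x, panel_width_mm, lens_pitch_mm):
    """Split a panel's pixel budget between spatial and angular resolution.

    In an integral-imaging display, each lenslet provides one spatial
    sample of the reconstructed image, while the pixels behind it provide
    its angular (view) samples - finer pitch raises spatial resolution but
    lowers the number of views, and vice versa.
    """
    pixel_mm = panel_width_mm / panel_px_x
    views_per_lens = lens_pitch_mm / pixel_mm          # angular samples per lenslet
    spatial_samples = panel_width_mm / lens_pitch_mm   # lenslets across the panel
    return spatial_samples, views_per_lens

# Illustrative 4000-pixel, 120 mm-wide panel with two candidate pitches
for pitch in (0.5, 1.0):
    s, v = integral_display_budget(4000, 120.0, pitch)
    print(f"pitch {pitch} mm: about {s:.0f} spatial samples x {v:.0f} views")
</syntaxhighlight>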
== Challenges ==
Lens-array components face several challenges in VR/AR:
'''Manufacturing scale and cost:''' Large, high-quality MLAs (especially with small lenslets) are challenging to produce over large areas. Holographic and metasurface arrays often require cleanroom fabrication. For consumer VR/AR, cost-effective replication (e.g. using nanoimprint or injection molding) is crucial but may not yet match the performance of lab prototypes.
== Future developments ==
Research on lens-array technology is advancing rapidly. '''Adaptive optics''' will likely play a growing role. Arrays of liquid-crystal or shape-changing lenses could allow dynamic focus control and multi-focal displays (reducing vergence-accommodation conflict). Similarly, '''dynamic wavelength control''' (e.g. polarization or tunable filters in each lenslet) could enable spatiotemporal multiplexing for color and focus.
As VR/AR systems aim for wider FOV, thinner form factors, and better realism, custom lens-array designs will continue to evolve. Each new generation of headsets (for example, employing pancake optics, multi-zone optics, or holographic waveguides) tends to reinvigorate lens-array innovation. In sum, lens arrays remain a key enabling technology for immersive displays and interactive sensing, with ongoing research focusing on mitigating their limitations and leveraging novel materials and computation.<ref name="Achromatic2025" /><ref name="Wei2023" />
== References ==
<references />
[[Category:Terms]]
[[Category:Technical Terms]]
[[Category:Virtual reality]]
[[Category:Augmented reality]]
[[Category:Optical devices]]
[[Category:Display technology]]