Lens array
Lens arrays are two-dimensional arrangements of many small lenses (often [[Microlens array]]s) that manipulate light fields for imaging or display. In [[Virtual reality]] (VR) and [[Augmented reality]] (AR) systems, lens arrays serve two broad roles: as '''display optics''' that create 3D or light-field images, and as '''sensor optics''' that capture directional light for depth and eye tracking. In displays, lens arrays enable multi-view and focal-plane rendering (e.g. light-field displays or integral imaging) by splitting the image into many sub-images corresponding to different angles or depths.<ref name="Li2019">Li X, Chen L, Li Y, et al. A broadband achromatic metalens array for integral imaging in the visible. Light Sci Appl. 2019;8:99.</ref><ref name="Ng2005">Ng R, Levoy M, Brédif M, et al. Light field photography with a hand-held plenoptic camera. Computer Science Technical Report. 2005;2(11):1-11.</ref> In sensing, microlens-based ''plenoptic'' or light-field cameras capture the full 4D light field, allowing computational refocusing and depth estimation.<ref name="Ng2005" /><ref name="Yang2018">Yang L, Guo Y. Eye tracking using a light field camera on a head-mounted display. US Patent Application US20180173303A1. 2018 Jun 21.</ref> Modern VR/AR prototypes leverage microlens arrays, [[Light field display]] techniques, [[Integral imaging]], holographic waveguide couplers, and specialized lens-array modules for eye tracking and depth sensing. These components appear in devices such as wide-FOV near-eye displays and optical see-through [[Head-mounted display]]s.
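As an illustration of the sensing role described above, computational refocusing from a plenoptic (lens-array) capture is commonly implemented as a "shift-and-sum" over the sub-aperture views extracted from behind each lenslet. The sketch below is a minimal illustration of that idea only, not the method of any system cited here: the function name <code>refocus</code>, the <code>(U, V, H, W)</code> view layout, and the integer-pixel shifts are illustrative assumptions (practical pipelines use subpixel interpolation).

<syntaxhighlight lang="python">
import numpy as np

def refocus(subviews, alpha):
    """Shift-and-sum synthetic refocusing (illustrative sketch).

    subviews: array of shape (U, V, H, W) holding a U x V grid of
        grayscale sub-aperture views, one per lenslet viewing angle.
    alpha: controls the synthetic focal plane. 0 sums the views
        unshifted (focus at the captured plane); other values shift
        each view in proportion to its angular offset, moving the
        plane of focus nearer or farther.
    """
    U, V, H, W = subviews.shape
    out = np.zeros((H, W))
    cu, cv = (U - 1) / 2, (V - 1) / 2  # center of the view grid
    for u in range(U):
        for v in range(V):
            # Translate each view in proportion to its offset from
            # the central view (integer pixels here for simplicity),
            # then accumulate.
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            out += np.roll(subviews[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
</syntaxhighlight>

Sweeping <code>alpha</code> moves the synthetic focal plane through the scene; a basic depth estimate can then be obtained by selecting, per pixel, the <code>alpha</code> that maximizes local sharpness.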
==History==
Lens-array technology traces back over a century. Gabriel Lippmann first proposed "integral photography" in 1908, capturing 3D scenes via a lens grid.<ref name="Li2019" /> Early implementations used pinhole arrays (circa 1911) and later simple microlens plates (around 1948) to record and replay light fields.<ref name="Li2019" /> In the mid-20th century, lenticular (cylindrical lens) sheets became popular for autostereoscopic prints and displays (e.g. 3D postcards and packaging), providing separate views for each eye. By the 2000s, advances in digital displays and microfabrication revived lens-array research for head-worn displays. For example, smartphone-scale integral imaging was demonstrated by pairing a display with a matching microlens array (MLA).<ref name="Li2019" /> In recent years, VR/AR research has produced thin, wide-FOV near-eye displays using sophisticated lens arrays (e.g. polarization optics or metasurfaces),<ref name="Shin2023">Shin G, Lee Y, Kim J, et al. Field of view and angular-resolution enhancement in microlens array type virtual reality near-eye display using polarization grating. 2023. PMID 39876217.</ref> as well as compact eye-tracking and depth cameras using microlens arrays.<ref name="Yang2018" /><ref name="Microsoft2020">Microsoft. Camera comprising lens array. Patent (Nweon). 2020;30768.</ref>