Microlens Arrays
** [[Fresnel lens|Fresnel-type]] (using concentric rings to achieve lens properties with reduced thickness)
Aspherical and freeform profiles are crucial for minimizing aberrations (like spherical aberration and [[chromatic aberration]]) and achieving specific beam shaping or imaging performance, especially off-axis<ref name="RPPhotonics"/><ref name="OpticalComponents"/><ref name="UltraThinMLA"/>. Cylindrical lenses focus light along a line rather than to a point and are used in applications like barcode scanners or laser line generation<ref name="OpticalComponents"/><ref name="GDOptics"/>. Lens profiles can be continuous surface type or stepped (diffractive)<ref name="StandardMLA"/>.
*'''Array Pattern and [[Fill Factor]]:''' Common arrangements include square and hexagonal grids, which offer regular packing<ref name="RPPhotonics"/><ref name="ShanghaiOptics"/>. Hexagonal packing typically allows for a higher [[fill factor]] (the ratio of the optically active area to the total array area) compared to square packing of circular lenses, potentially exceeding 90%<ref name="WikiFillFactor">Microlens array fill factor - Wikipedia</ref><ref name="TEMICON"/>. High fill factors (for example up to 98% or more for square arrays, potentially >99% for gapless hexagonal arrays) are often desired to maximize light throughput, ensure uniform illumination, and avoid energy loss or undesirable diffraction effects like zero-order [[hotspot]]s in homogenization applications<ref name="ShanghaiOptics"/><ref name="AvantierIntro"/><ref name="Newport">Microlens Arrays - Newport</ref><ref name="TEMICON"/>. Some fabrication methods allow for "gapless" arrays where lenslets meet edge-to-edge<ref name="TEMICON"/>. Random or stochastic arrangements are also possible for specific diffusion or security applications<ref name="TEMICON"/>.
*'''Pitch:''' The center-to-center distance between adjacent lenslets is known as the pitch. This can range widely, from millimeters down to a few micrometers or even less, depending on the application and fabrication limits<ref name="RPPhotonics"/><ref name="StandardMLA"/><ref name="MeetOptics">Lens Arrays | Microlens Array - MEETOPTICS</ref>.
*'''[[Focal Length]] and [[Numerical Aperture]] (NA):''' The focal length of the individual lenslets determines their focusing power. Available focal lengths range from sub-millimeter to tens or hundreds of millimeters<ref name="StandardMLA"/><ref name="NewportMALS18">MALS18 Micro Lens Array - Newport</ref><ref name="MeetOptics"/>. The NA describes the range of angles over which the lens can accept or emit light.
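The packing and aperture relationships above can be sketched numerically. The fill-factor expressions below are standard geometry for circular lenslets inscribed in square and hexagonal cells, and the 1.0 mm diameter / 3.3 mm focal length figures are illustrative values only, not from any cited datasheet:

```python
import math

def fill_factor_circular(packing: str) -> float:
    """Ideal fill factor of circular lenslets on a regular grid."""
    if packing == "square":
        # circle of diameter p inscribed in a p x p cell: area ratio = pi/4
        return math.pi / 4
    if packing == "hexagonal":
        # circle inscribed in a hexagonal cell: area ratio = pi / (2*sqrt(3))
        return math.pi / (2 * math.sqrt(3))
    raise ValueError(f"unknown packing: {packing}")

def numerical_aperture(diameter_mm: float, focal_mm: float) -> float:
    """Paraxial NA of a lenslet in air: NA = sin(atan(D / 2f))."""
    return math.sin(math.atan(diameter_mm / (2 * focal_mm)))

print(f"square packing fill factor:    {fill_factor_circular('square'):.1%}")    # 78.5%
print(f"hexagonal packing fill factor: {fill_factor_circular('hexagonal'):.1%}") # 90.7%
print(f"NA of a 1.0 mm lenslet at f = 3.3 mm: {numerical_aperture(1.0, 3.3):.3f}")
```

This is why hexagonal packing of circular lenslets beats square packing, and why "gapless" (square or hexagonal lenslet aperture) designs are needed to push beyond these ideal circular-aperture limits.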
==Fabrication Methods==
Manufacturing microlens arrays requires specialized techniques capable of creating microscale optical structures with high precision and often in large volumes:
*'''[[Photolithography]] and Etching:''' This is a foundational technique borrowed from the [[semiconductor]] industry. A pattern is defined using a [[photomask]] and [[photoresist]] on a substrate. Subsequent etching processes (for example [[wet etching]] or [[dry etching]] like [[Reactive-ion etching|reactive ion etching (RIE)]]) transfer the pattern into the substrate material, creating the lens structures<ref name="OpticalComponents"/><ref name="Newport"/><ref name="AvantierIntro"/><ref name="ReviewFab">(2024-12-09) Fabrication of Microlens Array and Its Application: A Review - ResearchGate (PDF)</ref><ref name="Syntec"/>. Multiple etching steps can create multi-level diffractive structures<ref name="StandardMLA"/>. Grayscale photolithography uses masks with varying transparency to directly pattern 3D lens profiles<ref name="WikiFabrication">Microlens Fabrication - Wikipedia</ref>.
*'''Photoresist Reflow:''' In this method, cylindrical pillars of photoresist are first patterned using lithography. The substrate is then heated above the resist's glass transition temperature, causing the pillars to melt and reflow. [[Surface tension]] naturally pulls the resist into a spherical cap shape (lenslet). This resist lens pattern can then be transferred into the underlying substrate (for example fused silica) using RIE, or the reflowed resist structure itself can serve as a mold for replication<ref name="TEMICON"/><ref name="ReviewFab"/><ref name="Article1Ref_Daly1990"/>.
*'''[[Laser Direct Writing]] (LDW):''' High-precision lasers are used to directly shape the lenslets. This can involve selectively hardening a photosensitive material (like a photopolymer, for example via [[two-photon polymerization]]<ref name="Article1Ref_Gissibl2016">Gissibl, T., Thiele, S., Herkommer, A., & Giessen, H. (2016). Two-photon direct laser writing of ultracompact multi-lens objectives. Nature Photonics, 10(8), 554-560.</ref>) or ablating material from the substrate surface<ref name="OpticalComponents"/><ref name="AvantierIntro"/><ref name="ApolloOptics"/>. LDW offers great flexibility in creating complex, aspheric, or freeform profiles but can be slower and more expensive than mask-based methods, though it is used for creating high-quality masters for replication<ref name="OpticalComponents"/><ref name="ApolloOptics"/>.
*'''[[Nanoimprint Lithography]] (NIL) / [[Hot Embossing]]:''' These are replication techniques. A master mold (stamp) containing the negative pattern of the MLA is created (for example using LDW or etching). This mold is then pressed into a softened thermoplastic material (hot embossing) or a [[UV]]-curable resin (UV-NIL) coated on a substrate. After hardening (cooling or UV curing), the mold is removed, leaving the MLA structure replicated on the substrate<ref name="OpticalComponents"/><ref name="TEMICON"/><ref name="AvantierIntro"/><ref name="ReviewFab"/>. These methods are suitable for cost-effective high-volume production<ref name="TEMICON"/>. [[Soft lithography]] uses similar principles with elastomeric stamps<ref name="Article1Ref_Rogers2005">Rogers, J. A., & Nuzzo, R. G. (2005). Recent progress in soft lithography. Materials Today, 8(2), 50-56.</ref>.
*'''[[Injection Molding]]:''' Similar to NIL, this is a replication method suitable for mass production. A mold insert containing the negative MLA structure is placed in an injection molding machine. Molten optical-grade plastic (or sometimes specialized glass - Precision Glass Molding) is injected into the mold cavity. After cooling and solidifying, the finished MLA part is ejected<ref name="OpticalComponents"/><ref name="TEMICON"/><ref name="ApolloOptics"/><ref name="IsuzuGlass"/>. Precision Glass Molding (PGM) uses polished glass preforms (blanks) heated to their transition temperature and pressed into shape by molds, offering high precision for glass MLAs<ref name="GDOptics"/><ref name="ReviewFab"/>.
*'''[[Diamond Turning]]:''' Ultra-precision lathes equipped with diamond cutting tools can directly machine the MLA structures onto suitable substrate materials (metals for molds, or some IR materials like silicon or polymers directly)<ref name="Syntec"/>. It is highly accurate but generally used for prototyping or creating master molds due to its serial nature.
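The reflow step above lends itself to a back-of-envelope prediction: if the pillar's volume is conserved and its base radius stays pinned, the cap height follows from equating cylinder and spherical-cap volumes, and the radius of curvature then gives a thin plano-convex focal length. The resist index n = 1.56 and the pillar dimensions below are illustrative assumptions, not values from the cited sources:

```python
def reflow_lenslet(radius_um: float, resist_height_um: float, n: float = 1.56):
    """Predict spherical-cap geometry after photoresist reflow.
    Assumes volume conservation with a pinned base radius:
    cap volume (pi*h/6)*(3a^2 + h^2) = cylinder volume pi*a^2*t."""
    a, t = radius_um, resist_height_um
    # solve h^3 + 3*a^2*h - 6*a^2*t = 0 by bisection (LHS is monotonic in h)
    lo, hi = 0.0, 3 * t + a
    for _ in range(100):
        h = (lo + hi) / 2
        if h**3 + 3 * a**2 * h - 6 * a**2 * t > 0:
            hi = h
        else:
            lo = h
    R = (a**2 + h**2) / (2 * h)   # radius of curvature of the cap
    f = R / (n - 1)               # thin plano-convex lens in air
    return h, R, f

# a 100 um diameter, 10 um tall resist pillar (hypothetical dimensions)
h, R, f = reflow_lenslet(radius_um=50.0, resist_height_um=10.0)
print(f"cap height {h:.2f} um, curvature radius {R:.1f} um, focal length {f:.1f} um")
```

The sketch shows why reflow lenslets are nominally spherical: the geometry is fixed entirely by the patterned pillar volume, which is what makes the process so repeatable.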
===Display Optics===
*'''Compact Magnifiers / Eyepieces:''' One key application is replacing bulky single-element eyepiece lenses (like traditional refractive lenses or even Fresnel lenses) with MLAs positioned between the [[microdisplay]] and the user's eye<ref name="PhotonicsArticle">(2023-05-10) Advanced Study of Optical Imaging Systems for Virtual Reality Head-Mounted Displays - Photonics</ref><ref name="PolGrating">(2025-01-02) Field of view and angular-resolution enhancement in microlens array type virtual reality near-eye display using polarization grating - Optica Publishing Group</ref><ref name="SuperlensMLA">Compact near-eye display system using a superlens-based microlens array magnifier - Optica Publishing Group (mentions MLA as magnifier)</ref>. Each lenslet magnifies a portion of the microdisplay image. This architecture holds the potential for significantly thinner and lighter [[Head-Mounted Display|HMDs]]<ref name="ThinVR">(2024-10-22) ThinVR: Heterogeneous microlens arrays for compact, 180 degree FOV VR near-eye displays - ResearchGate</ref><ref name="AzumaThinVR">ThinVR: Heterogeneous microlens arrays for compact, 180 degree FOV VR near-eye displays - Ronald Azuma (PDF of paper)</ref>.
*'''Wide Field of View (FOV) Systems:''' To achieve ultra-wide FOVs (for example 180° horizontally, approaching the human visual system's range) while maintaining a compact form factor, researchers are exploring the use of curved MLAs paired with curved displays<ref name="PhotonicsArticle"/><ref name="ThinVR"/><ref name="AzumaThinVR"/>. In such wide-FOV systems, many lenslets are viewed significantly off-axis. To manage image quality (for example reduce distortion, maintain [[eye box]] size) across the entire FOV, heterogeneous MLAs are crucial. In these arrays, the properties (for example shape, focal length, tilt) of the lenslets are custom-designed and vary systematically across the array<ref name="ThinVR"/><ref name="AzumaThinVR"/>. Optimization algorithms are used to design these complex heterogeneous lenslet profiles<ref name="ThinVR"/><ref name="AzumaThinVR"/>.
*'''[[Light Field Display]]s:''' MLAs are a cornerstone technology for creating near-eye light field displays<ref name="NvidiaLFD">Nvidia Near-Eye Light Field Display - LightField Forum</ref><ref name="NvidiaSupp">Supplementary Material: Near-Eye Light Field Displays - Research at NVIDIA (PDF)</ref><ref name="TAMULFD">Light-field Display Technical Deep Dive - Texas A&M College of Architecture (PDF)</ref><ref name="ResearchGateLFD">Design and simulation of a light field display - ResearchGate</ref>. By placing a precisely aligned MLA over a high-resolution microdisplay, the light rays originating from different pixels under each lenslet can be controlled in direction<ref name="NvidiaLFD"/><ref name="ResearchGateLFD"/>. Each lenslet projects a "micro-image" (sometimes called an elemental image or [[Hogel|hogel]]) composed of pixels underneath it, and effectively acts as a projector sending different information in different directions<ref name="TAMULFD"/><ref name="OpticaLFD">(2021-09-29) Examining the utility of pinhole-type screens for lightfield display - Optica Publishing Group</ref>. This allows the display to reconstruct a light field that approximates the light rays that would emanate from a real 3D scene. Crucially, this enables the viewer's eye to naturally change focus to different depths within the virtual scene, potentially resolving the [[vergence-accommodation conflict]] (VAC) that plagues conventional stereoscopic displays and causes eye strain<ref name="ElectrowettingLFD">Fabrication of an electrowetting liquid microlens array for a focus tunable integral imaging system - Optica Publishing Group</ref><ref name="Creal">VR IN FOCUS - Creal (PDF)</ref>. This technique is closely related to [[integral imaging]]<ref name="ResearchGateLFD"/><ref name="ElectrowettingLFD"/>. An early NVIDIA research prototype used 1280x720 OLED microdisplays with 1.0 mm pitch microlens arrays (focal length ~3.3 mm) to achieve a 29°x16° light field FOV<ref name="NvidiaSupp"/>. A key challenge remains the trade-off between spatial resolution (image sharpness) and angular resolution (number of views/[[depth cue]]s)<ref name="ResearchGateLFD"/><ref name="TAMULFD"/><ref name="Creal"/>.
*'''Efficiency, Brightness, and [[Screen-door effect]] Reduction:''' MLAs can improve the overall light efficiency and perceived quality of display systems. In projectors or backlit displays like [[LCD]]s, MLAs can be used to focus light specifically onto the active (transmitting) area of each pixel, reducing light absorption by the surrounding pixel structure (for example [[thin-film transistor]]s)<ref name="ShanghaiOptics"/><ref name="RPPhotonics"/>. This increases brightness and reduces power consumption<ref name="BrightView"/><ref name="ApolloOptics"/>. Manufacturers like Sony have applied MLAs over [[OLED]] microdisplays to increase peak luminance<ref name="DisplayDailySony">(Nov 2021) Emerging Display Technologies for the AR/VR Market - Display Daily</ref>. By directing light more effectively or magnifying the apparent pixel area, MLAs can also help reduce the visible "screen door effect" (the dark gaps between pixels)<ref name="Article1Ref_Lanman2013">Lanman, D., & Luebke, D. (2013). Near-eye light field displays. ACM Transactions on Graphics, 32(6), 1-10.</ref>. Furthermore, MLA-based eyepiece designs can offer inherently better light efficiency compared to polarization-dependent folded optical paths used in [[pancake lens]] designs. Pancake lenses achieve thin form factors by folding the optical path using [[polarizer]]s and [[half-mirror]]s, but this process typically results in very low light efficiency (often cited as 10-25%)<ref name="PolGrating"/><ref name="PancakeReview">(2024-12-09) Fabrication of Microlens Array and Its Application: A Review - ResearchGate (PDF)</ref><ref name="LightTransPancake">(2024-07-05) Catadioptric Imaging System Based on Pancake Lenses - LightTrans</ref><ref name="RedditPancake">What are pancake lenses? - Reddit (Sept 13, 2022)</ref><ref name="LimbakReddit">(2022-05-27) LIMBAK's freeform microlens array is thinner and much more efficient than pancake lenses for VR and MR - Reddit</ref>. Novel freeform MLA designs claim much higher efficiencies (for example 80%) while also achieving thin profiles<ref name="LimbakReddit"/>.
*'''[[Waveguide]] Coupling:''' In AR [[waveguide]] displays, microlens arrays can potentially serve as [[in-coupling]] and [[out-coupling]] elements, efficiently directing light from miniature projectors (like [[microLED]] arrays) into thin waveguides and then out toward the user's eyes. Research suggests pixel-level collimating microlenses could narrow microLED emission angles for better waveguide injection, though adding fabrication complexity<ref name="WaveguideReview">(2023) Waveguide-based augmented reality displays: perspectives and challenges - eLight (SpringerOpen)</ref><ref name="Article1Ref_Kress2019">Kress, B. C., & Meyrueis, P. (2019). Applied digital optics: from micro-optics to nanophotonics. John Wiley & Sons.</ref>.
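The spatial-versus-angular trade-off in light field displays amounts to splitting one pixel budget two ways: lenslet count sets spatial resolution, while pixels behind each lenslet set the number of views. A one-dimensional back-of-envelope sketch; the 15 mm display width is a hypothetical value chosen for illustration, not a specification of the NVIDIA prototype:

```python
def light_field_budget(display_px: int, display_mm: float, lenslet_pitch_mm: float):
    """Split a 1-D pixel budget between spatial samples (lenslets)
    and angular samples (views behind each lenslet)."""
    pixel_pitch_mm = display_mm / display_px
    lenslets = int(display_mm / lenslet_pitch_mm)   # spatial resolution
    views = int(lenslet_pitch_mm / pixel_pitch_mm)  # angular resolution
    return lenslets, views

# Hypothetical: a 1280-pixel-wide, 15 mm-wide microdisplay behind a 1.0 mm MLA
lenslets, views = light_field_budget(1280, 15.0, 1.0)
print(f"{lenslets} spatial samples x {views} views per lenslet")

# Halving the pitch doubles spatial resolution but halves the view count
print(light_field_budget(1280, 15.0, 0.5))
```

The product of the two outputs never exceeds the display's pixel count, which is the trade-off in a nutshell: more views per lenslet always means fewer lenslets across the image, hence the need for very high-resolution microdisplays.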
===Sensing and Tracking===
*'''[[Eye Tracking]] and Dynamic Focus:''' Tunable microlens arrays, such as those based on [[electrowetting]] liquid lenses or [[liquid crystal lens]]es, can be integrated into HMDs. Combined with eye-tracking cameras, these systems could dynamically adjust the focus of the displayed image or specific lenslets to match the user's gaze depth in real-time<ref name="ElectrowettingLFD"/>. This could enhance the realism of light field displays, provide variable focus capabilities<ref name="Article1Ref_Muenster2019">Muenster, R., Jaeger, G., Hubner, M., Stetter, M., & Stilla, U. (2019). Liquid crystal tunable microlens array for augmented reality displays. In Digital Optical Technologies 2019 (Vol. 11062, p. 110620J). International Society for Optics and Photonics.</ref>, potentially correct for individual user refractive errors, or even simulate depth-of-field effects by selectively blurring parts of the image<ref name="PatentCNBlur">(B) CN107942517B: VR head-mounted display device with function of relieving visual fatigue based on liquid crystal microlens array - Google Patents</ref>. MLAs are also used in some [[eye tracking]] systems to help collect and direct light for imaging the user's pupil, enabling features like [[foveated rendering]]<ref name="Article1Ref_Kim2019">Kim, J., Jeong, Y., Stengel, M., Akşit, K., Albert, R., Boudaoud, B., ... & Luebke, D. (2019). Foveated AR: dynamically-foveated augmented reality display. ACM Transactions on Graphics, 38(4), 1-15.</ref>.
*'''Depth Sensing ([[Time-of-Flight]], [[Structured Light]]):''' MLAs play a role in the projection modules of active depth sensing systems. In [[Time-of-Flight]] (ToF) sensors, MLAs can shape and homogenize the output beam from illumination sources like [[VCSEL]] arrays, projecting a well-defined pattern (for example a "top-hat" profile) of infrared light onto the scene<ref name="PatentlyApple"/><ref name="BrightView"/>. In [[Structured Light]] systems (like those used in some versions of Apple's [[Face ID]]), MLAs can project a complex pattern of spots or lines onto the target<ref name="PatentlyApple"/><ref name="TEMICON"/>. The distortion of this pattern as seen by a sensor reveals the 3D shape of the target. These capabilities are essential for environmental mapping, hand tracking, [[gesture recognition]], and object recognition in AR/VR<ref name="PatentlyApple"/><ref name="TEMICON"/>. Some HMD patent designs use multiple MLAs combined with parallax barriers for 3D imaging<ref name="PatentUSMultiMLA">(A1) US20140168783A1: Near-eye microlens array displays - Google Patents</ref>.
*'''[[Wavefront Sensor]]s:''' The [[Shack–Hartmann wavefront sensor]] uses an MLA placed in front of an [[image sensor]] ([[CCD]] or [[CMOS]]). An incoming optical wavefront is divided by the MLA into multiple beamlets, each focused onto the sensor. Deviations of the spot positions from a reference grid reveal the local slope of the wavefront, allowing its overall shape (including aberrations) to be reconstructed<ref name="RPPhotonics"/><ref name="AvantierIntro"/><ref name="StandardMLA"/><ref name="GDOptics"/>. While primarily used in optical metrology and [[adaptive optics]], this principle could potentially be adapted for HMD calibration or real-time measurement of the eye's aberrations for personalized display correction.
*'''[[Light Field Camera]]s / Imaging Enhancement:''' Placing an MLA in front of an image sensor enables the capture of light field information (intensity and direction of light rays), creating a [[plenoptic camera]]<ref name="RPPhotonics"/><ref name="ShanghaiOptics"/><ref name="OpticaLFD"/>. This allows computational features like post-capture refocusing, depth map extraction, and perspective shifting. Such capabilities could be valuable for outward-facing cameras on AR/VR headsets for improved environmental understanding or [[computational photography]]. Even in conventional cameras, MLAs are often placed directly on CMOS/CCD sensors (one lenslet per pixel) simply to increase [[light collection]] efficiency (the optical fill factor) by funneling more incident light onto the active photosensitive area of each pixel, improving low-light performance and sensitivity<ref name="RPPhotonics"/><ref name="OpticalComponents"/><ref name="AvantierIntro"/><ref name="ApolloOptics"/>.
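The Shack–Hartmann principle described above reduces to simple arithmetic in one dimension: each spot displacement divided by the lenslet focal length gives a local wavefront slope, and integrating the slopes across the array recovers the wavefront profile up to a constant. A minimal sketch with illustrative numbers (a uniform tilt; the 5 mm focal length and 150 µm pitch are assumptions for the example):

```python
def wavefront_slopes(spot_shifts_um, focal_um):
    """Local wavefront slopes from Shack-Hartmann spot displacements:
    slope = dx / f (small-angle approximation)."""
    return [dx / focal_um for dx in spot_shifts_um]

def reconstruct_1d(slopes, pitch_um):
    """Simple cumulative (Euler) integration of slopes along one row of
    lenslets; yields the wavefront profile up to an additive constant."""
    w, profile = 0.0, [0.0]
    for s in slopes:
        w += s * pitch_um
        profile.append(w)
    return profile

# every spot shifted by +2 um behind f = 5000 um lenslets on a 150 um pitch
slopes = wavefront_slopes([2.0, 2.0, 2.0, 2.0], focal_um=5000.0)
profile = reconstruct_1d(slopes, pitch_um=150.0)
print(slopes)   # a uniform slope: the wavefront is tilted
print(profile)  # a linear ramp, consistent with pure tilt
```

Real sensors do this in two dimensions with least-squares (zonal or modal) reconstruction, but the displacement-to-slope relation is the same.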
*'''Resolution Trade-offs (Spatial vs. Angular):''' In light field display applications, there is a fundamental trade-off: increasing the angular resolution (more views, smoother depth) typically requires allocating more display pixels per lenslet, which reduces the overall spatial resolution (perceived sharpness) of the image, and vice versa<ref name="ResearchGateLFD"/><ref name="TAMULFD"/><ref name="Creal"/>. High-resolution microdisplays are essential.
*'''[[Diffraction]] Limits:''' As lenslet sizes shrink, [[diffraction]] effects become more pronounced, potentially limiting the achievable resolution or sharpness (spot size)<ref name="StandardMLA"/>.
*'''[[Chromatic Aberration]]:''' Like single lenses, simple MLA lenslets made from standard materials exhibit chromatic aberration (color fringing), where different wavelengths of light focus at different points<ref name="Article1Ref_Sweeney2018">Sweeney, M. O., & Hoang, A. (2018). Methods and systems for reducing the effects of chromatic aberration in optical systems. U.S. Patent No. 10,120,194.</ref>. This can be particularly noticeable in light field displays or wide FOV systems<ref name="OpticaLFD"/>. Correction requires achromatic lenslet designs (for example using multiple materials or diffractive features) or sophisticated computational correction algorithms<ref name="UltraThinMLA"/>.
*'''[[Stray Light]] and [[Ghost Image]]s:''' Multiple surfaces in an MLA-based optical system can lead to internal reflections, potentially causing [[stray light]] (reducing contrast) or noticeable [[ghost image]]s, similar to issues encountered in multi-element lenses or pancake optics<ref name="PolGrating"/><ref name="ApolloOptics"/><ref name="Article1Ref_Wang2012">Wang, Y., & Ji, H. (2012). Ghost image removal using visibility modulation method for lens-array-based integral imaging displays. Journal of Display Technology, 8(12), 709-714.</ref>. [[Anti-reflection coating]]s and careful optical design are crucial mitigations.
*'''[[Computational Load]]:''' Rendering content for light field displays requires specialized algorithms (calculating the view for each direction from each lenslet) which can be significantly more computationally demanding than standard stereoscopic rendering<ref name="NvidiaSupp"/><ref name="ResearchGateLFD"/><ref name="TAMULFD"/>. Computational imaging techniques associated with some MLA sensors also require processing power<ref name="UltraThinMLA"/>.
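The challenges above can be put in rough first-order numbers. The sketch below uses illustrative parameter values (not data for any particular headset or MLA product) to estimate (a) the spatial/angular split when display pixels are shared among lenslet views, (b) the diffraction-limited spot diameter via d ≈ 2.44·λ·(f/#), and (c) the chromatic focal shift of a simple plano-convex lenslet via the lensmaker's relation f = R/(n−1).

```python
# First-order estimates for MLA design trade-offs.
# All input values below are illustrative assumptions, not measured data.

# (a) Spatial vs. angular resolution in a light field display:
# display pixels are divided among the views under each lenslet.
display_px_h = 3840           # horizontal display pixels (assumed)
views_per_lenslet = 8         # angular samples per lenslet (assumed)
spatial_px_h = display_px_h // views_per_lenslet
print(f"Spatial resolution: {spatial_px_h} px across "
      f"({views_per_lenslet} views per lenslet)")

# (b) Diffraction-limited spot diameter: d = 2.44 * wavelength * (f/#)
wavelength_um = 0.55          # green light, micrometres
f_number = 2.0                # lenslet f-number (assumed)
spot_um = 2.44 * wavelength_um * f_number
print(f"Diffraction-limited spot: {spot_um:.2f} um")

# (c) Chromatic focal shift of a plano-convex lenslet: f = R / (n - 1)
radius_mm = 1.0               # surface radius of curvature (assumed)
n_blue, n_red = 1.522, 1.514  # refractive indices of a crown-like glass
f_blue = radius_mm / (n_blue - 1)
f_red = radius_mm / (n_red - 1)
print(f"Focal shift blue->red: {abs(f_red - f_blue) * 1000:.1f} um")
```

Even these crude numbers show the tension: halving the views per lenslet doubles spatial resolution but coarsens depth cues, while shrinking lenslets (higher f/#) enlarges the diffraction spot, and an uncorrected lenslet's focal length drifts by tens of micrometres across the visible band.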
==Future Directions==
The development of microlens arrays for VR/AR is an active area of research and innovation:
*'''Advanced Manufacturing:''' Continued improvements in fabrication techniques (for example wafer-level optics, new materials, higher precision molding and lithography) are needed to enable cost-effective mass production of complex, high-performance MLAs<ref name="TEMICON"/><ref name="GDOptics"/><ref name="ReviewFab"/>.
*'''Freeform and Heterogeneous Designs:''' Further exploration of freeform surfaces and heterogeneous lenslet optimization to simultaneously maximize FOV, eye box, resolution, and efficiency while minimizing aberrations and form factor<ref name="LimbakReddit"/><ref name="ThinVR"/><ref name="AzumaThinVR"/>.
*'''[[Metalens]] Arrays:''' Research into [[metasurface]]-based MLAs ([[metalenses]]) which control light using subwavelength nanostructures. These offer potential for ultra-thin, flat optics with novel functionalities like reduced chromatic aberration<ref name="Article1Ref_Capasso2018">Capasso, F. (2018). The future of optics: metalenses. American Scientist, 106(1), 20-25.</ref><ref name="UltraThinMLA"/>.
*'''[[Light Field]] Display Enhancement:''' Overcoming the spatial/angular resolution trade-off, reducing computational requirements, and improving image quality (for example reducing chromatic aliasing) for light field displays<ref name="Creal"/><ref name="OpticaLFD"/>.
*'''Tunable and Active MLAs:''' Development and integration of dynamic MLAs using technologies like liquid crystals or [[electrowetting]] for real-time focus adjustment, aberration correction, or gaze-contingent rendering<ref name="ElectrowettingLFD"/><ref name="Article1Ref_Muenster2019"/>.
*'''Hybrid Optics:''' Combining MLAs with other advanced optical technologies like metasurfaces, [[diffractive optics]] (DOEs), [[holographic optical element]]s (HOEs), or [[polarization grating]]s to achieve novel functionalities or enhanced performance<ref name="UltraThinMLA"/><ref name="ReviewFab"/><ref name="PolGrating"/><ref name="Article1Ref_Lee2017">Lee, B., Hong, J., Yoo, D., Shin, C., & Lee, S. (2017). Hybrid optical systems for see-through head-mounted displays. In Digital Optical Technologies 2017 (Vol. 10335, p. 103350I). International Society for Optics and Photonics.</ref>.