<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://vrarwiki.com/index.php?action=history&amp;feed=atom&amp;title=Light_field</id>
	<title>Light field - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://vrarwiki.com/index.php?action=history&amp;feed=atom&amp;title=Light_field"/>
	<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Light_field&amp;action=history"/>
	<updated>2026-04-18T13:59:01Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.0</generator>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Light_field&amp;diff=36145&amp;oldid=prev</id>
		<title>RealEditor at 19:12, 2 July 2025</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Light_field&amp;diff=36145&amp;oldid=prev"/>
		<updated>2025-07-02T19:12:41Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 19:12, 2 July 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l1&quot;&gt;Line 1:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 1:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;{{see also|Terms|Technical Terms}}&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-added&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;A &amp;#039;&amp;#039;&amp;#039;light field&amp;#039;&amp;#039;&amp;#039; (also spelled &amp;#039;&amp;#039;&amp;#039;lightfield&amp;#039;&amp;#039;&amp;#039;) is a fundamental concept in [[optics]] and [[computer graphics]] that describes the amount of [[light]] traveling in every direction through every point in [[space]].&amp;lt;ref name=&amp;quot;LevoyHanrahan1996&amp;quot;&amp;gt;Levoy, M., &amp;amp; Hanrahan, P. (1996). Light field rendering. &amp;#039;&amp;#039;Proceedings of the 23rd annual conference on Computer graphics and interactive techniques - SIGGRAPH &amp;#039;96&amp;#039;&amp;#039;, 31-42.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Gortler1996&amp;quot;&amp;gt;Gortler, S. J., Grzeszczuk, R., Szeliski, R., &amp;amp; Cohen, M. F. (1996). The Lumigraph. &amp;#039;&amp;#039;Proceedings of the 23rd annual conference on Computer graphics and interactive techniques - SIGGRAPH &amp;#039;96&amp;#039;&amp;#039;, 43-54.&amp;lt;/ref&amp;gt; Essentially, it&amp;#039;s a vector function that represents the [[radiance]] of light rays at any position and direction within a given volume or area. Understanding and utilizing light fields is crucial for advancing [[virtual reality]] (VR) and [[augmented reality]] (AR) technologies, as it allows for the capture and reproduction of visual scenes with unprecedented realism, including effects like [[parallax]], [[reflection]]s, [[refraction]]s, and [[refocusing]] after capture, while also aiming to solve critical issues like the [[vergence-accommodation conflict]].&amp;lt;ref name=&amp;quot;Ng2005&amp;quot;&amp;gt;Ng, R. (2005). Digital Light Field Photography. &amp;#039;&amp;#039;Ph.D. Thesis, Stanford University&amp;#039;&amp;#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2013&amp;quot;&amp;gt;Lanman, D., &amp;amp; Luebke, D. (2013). Near-eye light field displays. &amp;#039;&amp;#039;ACM SIGGRAPH 2013 Talks&amp;#039;&amp;#039;, 1-1.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;A &amp;#039;&amp;#039;&amp;#039;light field&amp;#039;&amp;#039;&amp;#039; (also spelled &amp;#039;&amp;#039;&amp;#039;lightfield&amp;#039;&amp;#039;&amp;#039;) is a fundamental concept in [[optics]] and [[computer graphics]] that describes the amount of [[light]] traveling in every direction through every point in [[space]].&amp;lt;ref name=&amp;quot;LevoyHanrahan1996&amp;quot;&amp;gt;Levoy, M., &amp;amp; Hanrahan, P. (1996). Light field rendering. &amp;#039;&amp;#039;Proceedings of the 23rd annual conference on Computer graphics and interactive techniques - SIGGRAPH &amp;#039;96&amp;#039;&amp;#039;, 31-42.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Gortler1996&amp;quot;&amp;gt;Gortler, S. J., Grzeszczuk, R., Szeliski, R., &amp;amp; Cohen, M. F. (1996). The Lumigraph. &amp;#039;&amp;#039;Proceedings of the 23rd annual conference on Computer graphics and interactive techniques - SIGGRAPH &amp;#039;96&amp;#039;&amp;#039;, 43-54.&amp;lt;/ref&amp;gt; Essentially, it&amp;#039;s a vector function that represents the [[radiance]] of light rays at any position and direction within a given volume or area. Understanding and utilizing light fields is crucial for advancing [[virtual reality]] (VR) and [[augmented reality]] (AR) technologies, as it allows for the capture and reproduction of visual scenes with unprecedented realism, including effects like [[parallax]], [[reflection]]s, [[refraction]]s, and [[refocusing]] after capture, while also aiming to solve critical issues like the [[vergence-accommodation conflict]].&amp;lt;ref name=&amp;quot;Ng2005&amp;quot;&amp;gt;Ng, R. (2005). Digital Light Field Photography. &amp;#039;&amp;#039;Ph.D. Thesis, Stanford University&amp;#039;&amp;#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2013&amp;quot;&amp;gt;Lanman, D., &amp;amp; Luebke, D. (2013). Near-eye light field displays. &amp;#039;&amp;#039;ACM SIGGRAPH 2013 Talks&amp;#039;&amp;#039;, 1-1.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Light_field&amp;diff=36122&amp;oldid=prev</id>
		<title>RealEditor: /* Commercial Examples and Prototypes */ add link</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Light_field&amp;diff=36122&amp;oldid=prev"/>
		<updated>2025-07-02T18:56:57Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;Commercial Examples and Prototypes: &lt;/span&gt; add link&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 18:56, 2 July 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l89&quot;&gt;Line 89:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 89:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* [[Magic Leap]]: Their [[spatial computing]] headsets incorporate light field principles to blend virtual and real content.&amp;lt;ref name=&amp;quot;Art1Kress&amp;quot;&amp;gt;Kress, B. C., &amp;amp; Chatterjee, I. (2020). &amp;quot;Waveguide combiners for mixed reality headsets: a nanophotonics design perspective.&amp;quot; Nanophotonics, 9(11), 3653-3667.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Art2FXGuide&amp;quot;&amp;gt;[https://www.fxguide.com/fxfeatured/light-fields-the-future-of-vr-ar-mr/ fxguide: Light Fields - The Future of VR-AR-MR]&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* [[Magic Leap]]: Their [[spatial computing]] headsets incorporate light field principles to blend virtual and real content.&amp;lt;ref name=&amp;quot;Art1Kress&amp;quot;&amp;gt;Kress, B. C., &amp;amp; Chatterjee, I. (2020). &amp;quot;Waveguide combiners for mixed reality headsets: a nanophotonics design perspective.&amp;quot; Nanophotonics, 9(11), 3653-3667.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Art2FXGuide&amp;quot;&amp;gt;[https://www.fxguide.com/fxfeatured/light-fields-the-future-of-vr-ar-mr/ fxguide: Light Fields - The Future of VR-AR-MR]&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* [[Leia Inc.]]: Creates light field displays for mobile devices.&amp;lt;ref name=&amp;quot;Art1Fattal&amp;quot;&amp;gt;Fattal, D., Peng, Z., Tran, T., Vo, S., Fiorentino, M., Brug, J., &amp;amp; Beausoleil, R. G. (2013). &amp;quot;A multi-directional backlight for a wide-angle, glasses-free three-dimensional display.&amp;quot; Nature, 495(7441), 348-351.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* [[Leia Inc.]]: Creates light field displays for mobile devices.&amp;lt;ref name=&amp;quot;Art1Fattal&amp;quot;&amp;gt;Fattal, D., Peng, Z., Tran, T., Vo, S., Fiorentino, M., Brug, J., &amp;amp; Beausoleil, R. G. (2013). &amp;quot;A multi-directional backlight for a wide-angle, glasses-free three-dimensional display.&amp;quot; Nature, 495(7441), 348-351.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* [[CREAL]]: Swiss startup developing near-eye light field displays specifically targeting the VAC issue in AR/VR.&amp;lt;ref name=&quot;Art2CrealRoad&quot;&amp;gt;[https://www.roadtovr.com/creal-light-field-display-new-immersion-ar/ Road to VR: Hands-on: CREAL&#039;s Light-field Display Brings a New Layer of Immersion to AR]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;Art2CrealSite&quot;&amp;gt;[https://creal.com/ CREAL: Light-field Display Technology]&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* [[CREAL]]: Swiss startup developing near-eye light field displays specifically targeting the &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[&lt;/ins&gt;VAC&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;]] &lt;/ins&gt;issue in AR/VR.&amp;lt;ref name=&quot;Art2CrealRoad&quot;&amp;gt;[https://www.roadtovr.com/creal-light-field-display-new-immersion-ar/ Road to VR: Hands-on: CREAL&#039;s Light-field Display Brings a New Layer of Immersion to AR]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;Art2CrealSite&quot;&amp;gt;[https://creal.com/ CREAL: Light-field Display Technology]&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* Light Field Lab: Developing large-scale holographic light field displays.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* Light Field Lab: Developing large-scale holographic light field displays.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Light_field&amp;diff=36121&amp;oldid=prev</id>
		<title>RealEditor: /* The Plenoptic Function */ add detail</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Light_field&amp;diff=36121&amp;oldid=prev"/>
		<updated>2025-07-02T18:56:29Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;The Plenoptic Function: &lt;/span&gt; add detail&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 18:56, 2 July 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l13&quot;&gt;Line 13:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 13:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===The Plenoptic Function===&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===The Plenoptic Function===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The most &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;complete &lt;/del&gt;representation is the 7D plenoptic function, P(x, y, z, θ, φ, λ, t), describing the radiance of light at any 3D point (x,y,z), in any direction (θ, φ), for any wavelength (λ), at any time (t).&amp;lt;ref name=&quot;AdelsonBergen1991&quot;/&amp;gt; For many applications, this is overly complex and contains redundant information (for example light doesn&#039;t typically change along a straight ray in free space, radiance invariance, unless wavelength or time are critical).&amp;lt;ref name=&quot;WikiLF&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The most &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;developed &lt;/ins&gt;representation is the 7D plenoptic function &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;which tracks a lightfield over time&lt;/ins&gt;, P(x, y, z, θ, φ, λ, t), describing the radiance of light at any 3D point (x,y,z), in any direction (θ, φ), for any wavelength (λ), at any time (t).&amp;lt;ref name=&quot;AdelsonBergen1991&quot;/&amp;gt; For many applications, this is overly complex and contains redundant information (for example light doesn&#039;t typically change along a straight ray in free space, radiance invariance, unless wavelength or time are critical).&amp;lt;ref name=&quot;WikiLF&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===Simplified Light Fields===&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===Simplified Light Fields===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Light_field&amp;diff=35442&amp;oldid=prev</id>
		<title>Xinreality at 21:26, 7 May 2025</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Light_field&amp;diff=35442&amp;oldid=prev"/>
		<updated>2025-05-07T21:26:42Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 21:26, 7 May 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l13&quot;&gt;Line 13:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 13:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===The Plenoptic Function===&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===The Plenoptic Function===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The most complete representation is the 7D plenoptic function, P(x, y, z, θ, φ, λ, t), describing the radiance of light at any 3D point (x,y,z), in any direction (θ, φ), for any wavelength (λ), at any time (t).&amp;lt;ref name=&quot;AdelsonBergen1991&quot;/&amp;gt; For many applications, this is overly complex and contains redundant information (for example light doesn&#039;t typically change along a straight ray in free &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;space—radiance invariance—unless &lt;/del&gt;wavelength or time are critical).&amp;lt;ref name=&quot;WikiLF&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The most complete representation is the 7D plenoptic function, P(x, y, z, θ, φ, λ, t), describing the radiance of light at any 3D point (x,y,z), in any direction (θ, φ), for any wavelength (λ), at any time (t).&amp;lt;ref name=&quot;AdelsonBergen1991&quot;/&amp;gt; For many applications, this is overly complex and contains redundant information (for example light doesn&#039;t typically change along a straight ray in free &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;space, radiance invariance, unless &lt;/ins&gt;wavelength or time are critical).&amp;lt;ref name=&quot;WikiLF&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===Simplified Light Fields===&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===Simplified Light Fields===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Xinreality</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Light_field&amp;diff=34992&amp;oldid=prev</id>
		<title>Xinreality at 08:32, 3 May 2025</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Light_field&amp;diff=34992&amp;oldid=prev"/>
		<updated>2025-05-03T08:32:28Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 08:32, 3 May 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l43&quot;&gt;Line 43:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 43:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;Interpolation and View Synthesis:&amp;#039;&amp;#039;&amp;#039; A key advantage is generating novel viewpoints not explicitly captured. This involves interpolating the 4D light field data to estimate the scene&amp;#039;s appearance from arbitrary positions and angles.&amp;lt;ref name=&amp;quot;Art1Kalantari&amp;quot;&amp;gt;Kalantari, N. K., Wang, T. C., &amp;amp; Ramamoorthi, R. (2016). &amp;quot;Learning-based view synthesis for light field cameras.&amp;quot; ACM Transactions on Graphics, 35(6), 193.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;Interpolation and View Synthesis:&amp;#039;&amp;#039;&amp;#039; A key advantage is generating novel viewpoints not explicitly captured. This involves interpolating the 4D light field data to estimate the scene&amp;#039;s appearance from arbitrary positions and angles.&amp;lt;ref name=&amp;quot;Art1Kalantari&amp;quot;&amp;gt;Kalantari, N. K., Wang, T. C., &amp;amp; Ramamoorthi, R. (2016). &amp;quot;Learning-based view synthesis for light field cameras.&amp;quot; ACM Transactions on Graphics, 35(6), 193.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;[[Depth Estimation]]:&amp;#039;&amp;#039;&amp;#039; The angular variation of light rays encodes depth information. Various algorithms can extract depth maps, valuable for effects like synthetic [[depth of field]] and for AR interactions.&amp;lt;ref name=&amp;quot;Art1Tao&amp;quot;&amp;gt;Tao, M. W., Hadap, S., Malik, J., &amp;amp; Ramamoorthi, R. (2013). &amp;quot;Depth from combining defocus and correspondence using light-field cameras.&amp;quot; Proceedings of the IEEE International Conference on Computer Vision, 673-680.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;[[Depth Estimation]]:&amp;#039;&amp;#039;&amp;#039; The angular variation of light rays encodes depth information. Various algorithms can extract depth maps, valuable for effects like synthetic [[depth of field]] and for AR interactions.&amp;lt;ref name=&amp;quot;Art1Tao&amp;quot;&amp;gt;Tao, M. W., Hadap, S., Malik, J., &amp;amp; Ramamoorthi, R. (2013). &amp;quot;Depth from combining defocus and correspondence using light-field cameras.&amp;quot; Proceedings of the IEEE International Conference on Computer Vision, 673-680.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &#039;&#039;&#039;Compression:&#039;&#039;&#039; Light field datasets are massive. Efficient compression is vital for storage and transmission, especially for mobile VR/AR and streaming. Techniques often adapt existing video codecs (like VP9) or use specialized approaches.&amp;lt;ref name=&quot;Art1Viola&quot;&amp;gt;Viola, I., Rerabek, M., &amp;amp; Ebrahimi, T. (2017). &quot;Comparison and evaluation of light field image coding approaches.&quot; IEEE Journal of Selected Topics in Signal Processing, 11(7), 1092-1106.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;Art2AugPerc&quot;&amp;gt;[https://augmentedperception.github.io/welcome-to-lightfields/ Augmented Perception: Welcome to Light Fields]&amp;lt;/ref&amp;gt; Standards bodies like JPEG Pleno and MPEG Immersive Video are developing formats for light field data.&amp;lt;ref name=&quot;Art4MMCommSoc&quot;&amp;gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[&lt;/del&gt;https://mmc.committees.comsoc.org/files/&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;2017&lt;/del&gt;/&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;11&lt;/del&gt;/&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;MMTC&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Review&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Letter&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Vol&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;8&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;No&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;2&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Nov-2017&lt;/del&gt;.pdf &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;IEEE ComSoc MMTC Review Letter, Vol. 8, No&lt;/del&gt;. &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;2, Nov 2017]&lt;/del&gt;&amp;lt;/ref&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&amp;gt; &amp;lt;!-- Rough citation combining refs from Art 4 --&lt;/del&gt;&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &#039;&#039;&#039;Compression:&#039;&#039;&#039; Light field datasets are massive. Efficient compression is vital for storage and transmission, especially for mobile VR/AR and streaming. Techniques often adapt existing video codecs (like VP9) or use specialized approaches.&amp;lt;ref name=&quot;Art1Viola&quot;&amp;gt;Viola, I., Rerabek, M., &amp;amp; Ebrahimi, T. (2017). &quot;Comparison and evaluation of light field image coding approaches.&quot; IEEE Journal of Selected Topics in Signal Processing, 11(7), 1092-1106.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;Art2AugPerc&quot;&amp;gt;[https://augmentedperception.github.io/welcome-to-lightfields/ Augmented Perception: Welcome to Light Fields]&amp;lt;/ref&amp;gt; Standards bodies like JPEG Pleno and MPEG Immersive Video are developing formats for light field data.&amp;lt;ref name=&quot;Art4MMCommSoc&quot;&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;IEEE Communications Society – Multimedia Communications Technical Committee. &lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&#039;&#039;MMTC Communications – Review, Vol. 8 (No. 1), February 2017&#039;&#039;. &lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;PDF. &lt;/ins&gt;https://mmc.committees.comsoc.org/files/&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;2016&lt;/ins&gt;/&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;04&lt;/ins&gt;/&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;IEEE&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;ComSoc&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;MMTC&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Comm&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Review&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Feb&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;2017&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Issue&lt;/ins&gt;.pdf  &lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;(Accessed 3 May 2025)&lt;/ins&gt;.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==Light Field Rendering and Display==&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==Light Field Rendering and Display==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l65&quot;&gt;Line 65:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 70:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====Holographic Displays====&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====Holographic Displays====&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[Holography|Holographic]] displays reconstruct the light wavefront itself using [[spatial light modulator]]s (SLMs) to control the phase or amplitude of light. These can, in theory, perfectly reproduce the light field of a scene, offering continuous focus cues.&amp;lt;ref name=&quot;Art1Li&quot;&amp;gt;Li, G., Lee, D., Jeong, Y., Cho, J., &amp;amp; Lee, B. (2016). &quot;Holographic display for see-through augmented reality using mirror-lens holographic optical element.&quot; Optics Letters, 41(11), 2486-2489.&amp;lt;/ref&amp;gt; Research includes using [[Holographic Optical Elements (HOEs)]] and [[metasurface]]s for compact designs, like Nvidia&#039;s Holographic Glasses prototype.&amp;lt;ref name=&quot;Art4NvidiaDev&quot;&amp;gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[&lt;/del&gt;https://developer.nvidia.com/blog/&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;prescription&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;holographic&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;vr&lt;/del&gt;-glasses-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;research&lt;/del&gt;/ &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Nvidia Developer Blog: Holographic Glasses Research]&lt;/del&gt;&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[Holography|Holographic]] displays reconstruct the light wavefront itself using [[spatial light modulator]]s (SLMs) to control the phase or amplitude of light. These can, in theory, perfectly reproduce the light field of a scene, offering continuous focus cues.&amp;lt;ref name=&quot;Art1Li&quot;&amp;gt;Li, G., Lee, D., Jeong, Y., Cho, J., &amp;amp; Lee, B. (2016). &quot;Holographic display for see-through augmented reality using mirror-lens holographic optical element.&quot; Optics Letters, 41(11), 2486-2489.&amp;lt;/ref&amp;gt; Research includes using [[Holographic Optical Elements (HOEs)]] and [[metasurface]]s for compact designs, like Nvidia&#039;s Holographic Glasses prototype.&amp;lt;ref name=&quot;Art4NvidiaDev&quot;&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Kim, J. (2024). “Developing Smaller, Lighter Extended Reality Glasses Using AI.” &lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&#039;&#039;NVIDIA Technical Blog&#039;&#039;, 14 June 2024. &lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;https://developer.nvidia.com/blog/&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;developing&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;smaller&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;lighter-extended-reality&lt;/ins&gt;-glasses-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;using-ai&lt;/ins&gt;/  &lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;(Accessed 3 May 2025).&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====Compressive/Tensor Displays====&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====Compressive/Tensor Displays====&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l89&quot;&gt;Line 89:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 99:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;Enhanced Visual Fidelity and View-Dependent Effects:&amp;#039;&amp;#039;&amp;#039; Light fields capture and reproduce complex light interactions like specular [[highlight]]s, transparency, reflections, and refractions more accurately than traditional rendering, enhancing realism.&amp;lt;ref name=&amp;quot;Art1Mildenhall&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;Enhanced Visual Fidelity and View-Dependent Effects:&amp;#039;&amp;#039;&amp;#039; Light fields capture and reproduce complex light interactions like specular [[highlight]]s, transparency, reflections, and refractions more accurately than traditional rendering, enhancing realism.&amp;lt;ref name=&amp;quot;Art1Mildenhall&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;Realistic Capture for VR/AR Content:&amp;#039;&amp;#039;&amp;#039; Light field cameras capture real-world scenes with richer information than 360° video or basic [[photogrammetry]], preserving subtle lighting and allowing more natural exploration in VR. Systems like [[Google]]&amp;#039;s light field capture rigs and [[Lytro]] Immerge were developed for this.&amp;lt;ref name=&amp;quot;Art2GoogleBlog&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;Realistic Capture for VR/AR Content:&amp;#039;&amp;#039;&amp;#039; Light field cameras capture real-world scenes with richer information than 360° video or basic [[photogrammetry]], preserving subtle lighting and allowing more natural exploration in VR. Systems like [[Google]]&amp;#039;s light field capture rigs and [[Lytro]] Immerge were developed for this.&amp;lt;ref name=&amp;quot;Art2GoogleBlog&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &#039;&#039;&#039;[[Light Field Passthrough]] for Mixed Reality:&#039;&#039;&#039; An emerging technique for AR/[[Mixed Reality|MR]] headsets where specialized cameras capture the light field of the real world. This allows rendering the outside view with correct depth and perspective for the user&#039;s eyes, enabling seamless blending of virtual objects with reality and minimizing reprojection errors or distortions seen in traditional video passthrough. Meta&#039;s Flamera prototype is a notable example.&amp;lt;ref name=&quot;Art2TeknoAsian&quot;&amp;gt;[https://teknoasian.com/light-field-passthrough-the-bridge-between-reality-and-virtual-worlds/ Tekno Asian: Light Field Passthrough: The Bridge Between Reality and Virtual Worlds]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;Art4Spectrum&quot;&amp;gt;[https://spectrum.ieee.org/meta-flamera IEEE Spectrum: Meta Builds AR Headset With Unrivaled Passthrough]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;Art4DisplayDaily&quot;&amp;gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[&lt;/del&gt;https://&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;www.&lt;/del&gt;displaydaily.com/&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;article/display&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;daily/metas&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;perspective&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;correct&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;passthrough&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;mr&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;display Display Daily: Meta’s Perspective&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Correct Passthrough MR Display]&lt;/del&gt;&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &#039;&#039;&#039;[[Light Field Passthrough]] for Mixed Reality:&#039;&#039;&#039; An emerging technique for AR/[[Mixed Reality|MR]] headsets where specialized cameras capture the light field of the real world. This allows rendering the outside view with correct depth and perspective for the user&#039;s eyes, enabling seamless blending of virtual objects with reality and minimizing reprojection errors or distortions seen in traditional video passthrough. Meta&#039;s Flamera prototype is a notable example.&amp;lt;ref name=&quot;Art2TeknoAsian&quot;&amp;gt;[https://teknoasian.com/light-field-passthrough-the-bridge-between-reality-and-virtual-worlds/ Tekno Asian: Light Field Passthrough: The Bridge Between Reality and Virtual Worlds]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;Art4Spectrum&quot;&amp;gt;[https://spectrum.ieee.org/meta-flamera IEEE Spectrum: Meta Builds AR Headset With Unrivaled Passthrough]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;Art4DisplayDaily&quot;&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Blackwood, S. (2023). “Meta’s Going to SIGGRAPH 2023 and Showing Flamera and Butterscotch VR Technologies.” &lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&#039;&#039;Display Daily&#039;&#039;, 4 August 2023. &lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;https://displaydaily.com/&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;metas-going-to-siggraph-2023&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;and&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;showing&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;flamera&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;and&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;butterscotch&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;vr&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;technologies/ &lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;(Accessed 3 May 2025).&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;[[Telepresence]] and Remote Collaboration:&amp;#039;&amp;#039;&amp;#039; Realistic capture and display of participants using light fields can significantly enhance the sense of presence in virtual meetings and remote collaboration systems, enabling more natural eye contact and spatial interaction.&amp;lt;ref name=&amp;quot;Art1Orts&amp;quot;&amp;gt;Orts-Escolano, S., Rhemann, C., Fanello, S., Chang, W., Kowdle, A., Degtyarev, Y., Kim, D., Davidson, P. L., Khamis, S., Dou, M., Tankovich, V., Loop, C., Cai, Q., Chou, P. A., Mennicken, S., Valentin, J., Pradeep, V., Wang, S., Kang, S. B., Kohli, P., Lutchyn, Y., Keskin, C., &amp;amp; Izadi, S. (2016). &amp;quot;Holoportation: Virtual 3D teleportation in real-time.&amp;quot; Proceedings of the 29th Annual Symposium on User Interface Software and Technology, 741-754.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;[[Telepresence]] and Remote Collaboration:&amp;#039;&amp;#039;&amp;#039; Realistic capture and display of participants using light fields can significantly enhance the sense of presence in virtual meetings and remote collaboration systems, enabling more natural eye contact and spatial interaction.&amp;lt;ref name=&amp;quot;Art1Orts&amp;quot;&amp;gt;Orts-Escolano, S., Rhemann, C., Fanello, S., Chang, W., Kowdle, A., Degtyarev, Y., Kim, D., Davidson, P. L., Khamis, S., Dou, M., Tankovich, V., Loop, C., Cai, Q., Chou, P. A., Mennicken, S., Valentin, J., Pradeep, V., Wang, S., Kang, S. B., Kohli, P., Lutchyn, Y., Keskin, C., &amp;amp; Izadi, S. (2016). &amp;quot;Holoportation: Virtual 3D teleportation in real-time.&amp;quot; Proceedings of the 29th Annual Symposium on User Interface Software and Technology, 741-754.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;Post-Capture Refocus and DoF Control:&amp;#039;&amp;#039;&amp;#039; While primarily a photographic benefit, this capability could be used in VR/AR for cinematic effects, accessibility features, or interactive storytelling.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;Post-Capture Refocus and DoF Control:&amp;#039;&amp;#039;&amp;#039; While primarily a photographic benefit, this capability could be used in VR/AR for cinematic effects, accessibility features, or interactive storytelling.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l112&quot;&gt;Line 112:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 127:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;Neural Radiance Fields (NeRF) and Neural Rendering:&amp;#039;&amp;#039;&amp;#039; These [[machine learning]] techniques are rapidly evolving, offering efficient ways to represent and render complex scenes with view-dependent effects, potentially revolutionizing light field capture and synthesis for VR/AR.&amp;lt;ref name=&amp;quot;Art1Mildenhall&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;Neural Radiance Fields (NeRF) and Neural Rendering:&amp;#039;&amp;#039;&amp;#039; These [[machine learning]] techniques are rapidly evolving, offering efficient ways to represent and render complex scenes with view-dependent effects, potentially revolutionizing light field capture and synthesis for VR/AR.&amp;lt;ref name=&amp;quot;Art1Mildenhall&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &#039;&#039;&#039;Eye-Tracked Foveated Light Fields:&#039;&#039;&#039; Combining [[eye tracking]] with light field rendering/display allows concentrating detail and computational resources where the user is looking ([[foveated rendering]]), making real-time performance more feasible.&amp;lt;ref name=&quot;Art1Kaplanyan&quot;&amp;gt;Kaplanyan, A. S., Sochenov, A., Leimkühler, T., Okunev, M., Goodall, T., &amp;amp; Rufo, G. (2019). &quot;DeepFovea: Neural reconstruction for foveated rendering and video compression using learned statistics of natural videos.&quot; ACM Transactions on Graphics, 38(6), 212.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;Art4NvidiaResearch&quot;&amp;gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[&lt;/del&gt;https://research.nvidia.com/publication/2017-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;11_Foveated&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Light&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;field&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Rendering Nvidia Research: Foveated Light&lt;/del&gt;-field &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Rendering]&lt;/del&gt;&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &#039;&#039;&#039;Eye-Tracked Foveated Light Fields:&#039;&#039;&#039; Combining [[eye tracking]] with light field rendering/display allows concentrating detail and computational resources where the user is looking ([[foveated rendering]]), making real-time performance more feasible.&amp;lt;ref name=&quot;Art1Kaplanyan&quot;&amp;gt;Kaplanyan, A. S., Sochenov, A., Leimkühler, T., Okunev, M., Goodall, T., &amp;amp; Rufo, G. (2019). &quot;DeepFovea: Neural reconstruction for foveated rendering and video compression using learned statistics of natural videos.&quot; ACM Transactions on Graphics, 38(6), 212.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&quot;Art4NvidiaResearch&quot;&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Sun, Q., Huang, F.‑C., Kim, J., et al. (2017). “Perceptually‑Guided Foveation for Light‑Field Displays.” &lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&#039;&#039;ACM SIGGRAPH Asia 2017 Technical Papers&#039;&#039;. NVIDIA Research project page. &lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;https://research.nvidia.com/publication/2017-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;11_perceptually&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;guided&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;foveation&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;light&lt;/ins&gt;-field&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;-displays &lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;(Accessed 3 May 2025).&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;Compact Light Field Optics:&amp;#039;&amp;#039;&amp;#039; Development of [[metalenses]], [[diffractive optics]], novel [[waveguide]] designs, and HOEs aims to create thinner, lighter, and more efficient optics for near-eye light field displays suitable for glasses-like AR/VR devices.&amp;lt;ref name=&amp;quot;Art1WangOptics&amp;quot;&amp;gt;Wang, N., Hua, H., &amp;amp; Viegas, D. (2021). &amp;quot;Compact optical see-through head-mounted display with varifocal liquid membrane lens.&amp;quot; Digital Holography and Three-Dimensional Imaging 2021, OSA Technical Digest, DM3B.3.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Art4NvidiaDev&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;Compact Light Field Optics:&amp;#039;&amp;#039;&amp;#039; Development of [[metalenses]], [[diffractive optics]], novel [[waveguide]] designs, and HOEs aims to create thinner, lighter, and more efficient optics for near-eye light field displays suitable for glasses-like AR/VR devices.&amp;lt;ref name=&amp;quot;Art1WangOptics&amp;quot;&amp;gt;Wang, N., Hua, H., &amp;amp; Viegas, D. (2021). &amp;quot;Compact optical see-through head-mounted display with varifocal liquid membrane lens.&amp;quot; Digital Holography and Three-Dimensional Imaging 2021, OSA Technical Digest, DM3B.3.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Art4NvidiaDev&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;Light Field Video Streaming:&amp;#039;&amp;#039;&amp;#039; Advances in compression and network bandwidth may enable real-time streaming of light field video for immersive communication, entertainment, and training.&amp;lt;ref name=&amp;quot;Art1Gutierrez&amp;quot;&amp;gt;Gutiérrez-Navarro, D., &amp;amp; Pérez-Daniel, K. R. (2022). &amp;quot;Light field video streaming: A review.&amp;quot; IEEE Access, 10, 12345-12367.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;Light Field Video Streaming:&amp;#039;&amp;#039;&amp;#039; Advances in compression and network bandwidth may enable real-time streaming of light field video for immersive communication, entertainment, and training.&amp;lt;ref name=&amp;quot;Art1Gutierrez&amp;quot;&amp;gt;Gutiérrez-Navarro, D., &amp;amp; Pérez-Daniel, K. R. (2022). &amp;quot;Light field video streaming: A review.&amp;quot; IEEE Access, 10, 12345-12367.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Xinreality</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Light_field&amp;diff=34694&amp;oldid=prev</id>
		<title>Xinreality at 14:20, 29 April 2025</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Light_field&amp;diff=34694&amp;oldid=prev"/>
		<updated>2025-04-29T14:20:51Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 14:20, 29 April 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l1&quot;&gt;Line 1:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 1:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{{see also|Terms|Technical Terms}}&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{{see also|Terms|Technical Terms}}&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;A &#039;&#039;&#039;light field&#039;&#039;&#039; (also spelled &#039;&#039;&#039;lightfield&#039;&#039;&#039;) is a fundamental concept in [[optics]] and [[computer graphics]] that describes the amount of [[light]] traveling in every direction through every point in [[space]].&amp;lt;ref name=&quot;LevoyHanrahan1996&quot;&amp;gt;Levoy, M., &amp;amp; Hanrahan, P. (1996). Light field rendering. &#039;&#039;Proceedings of the 23rd annual conference on Computer graphics and interactive techniques - SIGGRAPH &#039;96&#039;&#039;, &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;31–42&lt;/del&gt;.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;Gortler1996&quot;&amp;gt;Gortler, S. J., Grzeszczuk, R., Szeliski, R., &amp;amp; Cohen, M. F. (1996). The Lumigraph. &#039;&#039;Proceedings of the 23rd annual conference on Computer graphics and interactive techniques - SIGGRAPH &#039;96&#039;&#039;, &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;43–54&lt;/del&gt;.&amp;lt;/ref&amp;gt; Essentially, it&#039;s a vector function that represents the [[radiance]] of light rays at any position and direction within a given volume or area. Understanding and utilizing light fields is crucial for advancing [[virtual reality]] (VR) and [[augmented reality]] (AR) technologies, as it allows for the capture and reproduction of visual scenes with unprecedented realism, including effects like [[parallax]], [[reflection]]s, [[refraction]]s, and [[refocusing]] after capture, while also aiming to solve critical issues like the [[vergence-accommodation conflict]].&amp;lt;ref name=&quot;Ng2005&quot;&amp;gt;Ng, R. (2005). Digital Light Field Photography. &#039;&#039;Ph.D. Thesis, Stanford University&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;Lanman2013&quot;&amp;gt;Lanman, D., &amp;amp; Luebke, D. (2013). Near-eye light field displays. &#039;&#039;ACM SIGGRAPH 2013 Talks&#039;&#039;, 1-1.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;A &#039;&#039;&#039;light field&#039;&#039;&#039; (also spelled &#039;&#039;&#039;lightfield&#039;&#039;&#039;) is a fundamental concept in [[optics]] and [[computer graphics]] that describes the amount of [[light]] traveling in every direction through every point in [[space]].&amp;lt;ref name=&quot;LevoyHanrahan1996&quot;&amp;gt;Levoy, M., &amp;amp; Hanrahan, P. (1996). Light field rendering. &#039;&#039;Proceedings of the 23rd annual conference on Computer graphics and interactive techniques - SIGGRAPH &#039;96&#039;&#039;, &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;31-42&lt;/ins&gt;.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;Gortler1996&quot;&amp;gt;Gortler, S. J., Grzeszczuk, R., Szeliski, R., &amp;amp; Cohen, M. F. (1996). The Lumigraph. &#039;&#039;Proceedings of the 23rd annual conference on Computer graphics and interactive techniques - SIGGRAPH &#039;96&#039;&#039;, &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;43-54&lt;/ins&gt;.&amp;lt;/ref&amp;gt; Essentially, it&#039;s a vector function that represents the [[radiance]] of light rays at any position and direction within a given volume or area. Understanding and utilizing light fields is crucial for advancing [[virtual reality]] (VR) and [[augmented reality]] (AR) technologies, as it allows for the capture and reproduction of visual scenes with unprecedented realism, including effects like [[parallax]], [[reflection]]s, [[refraction]]s, and [[refocusing]] after capture, while also aiming to solve critical issues like the [[vergence-accommodation conflict]].&amp;lt;ref name=&quot;Ng2005&quot;&amp;gt;Ng, R. (2005). Digital Light Field Photography. &#039;&#039;Ph.D. Thesis, Stanford University&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;Lanman2013&quot;&amp;gt;Lanman, D., &amp;amp; Luebke, D. (2013). Near-eye light field displays. &#039;&#039;ACM SIGGRAPH 2013 Talks&#039;&#039;, 1-1.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==History==&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==History==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The concept of measuring light rays has early roots. [[Michael Faraday]] first speculated in 1846 in his lecture &quot;Thoughts on Ray Vibrations&quot; that light should be understood as a field, similar to the [[magnetic field]] he had studied.&amp;lt;ref name=&quot;Faraday1846&quot;&amp;gt;Faraday, M. (1846). Thoughts on Ray Vibrations. &#039;&#039;Philosophical Magazine&#039;&#039;, S.3, Vol. 28, No. 188.&amp;lt;/ref&amp;gt; The term &quot;light field&quot; (&#039;&#039;svetovoe pole&#039;&#039; in Russian) was more formally defined by [[Andrey Gershun]] in a classic 1936 paper on the radiometric properties of light in three-dimensional space.&amp;lt;ref name=&quot;Gershun1936&quot;&amp;gt;Gershun, A. (1939). The Light Field. &#039;&#039;Journal of Mathematics and Physics&#039;&#039;, 18(1-4), &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;51–151&lt;/del&gt;. (English translation of 1936 Russian paper).&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;WikiLF&quot;&amp;gt;[https://en.wikipedia.org/wiki/Light_field Wikipedia: Light field]&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The concept of measuring light rays has early roots. [[Michael Faraday]] first speculated in 1846 in his lecture &quot;Thoughts on Ray Vibrations&quot; that light should be understood as a field, similar to the [[magnetic field]] he had studied.&amp;lt;ref name=&quot;Faraday1846&quot;&amp;gt;Faraday, M. (1846). Thoughts on Ray Vibrations. &#039;&#039;Philosophical Magazine&#039;&#039;, S.3, Vol. 28, No. 188.&amp;lt;/ref&amp;gt; The term &quot;light field&quot; (&#039;&#039;svetovoe pole&#039;&#039; in Russian) was more formally defined by [[Andrey Gershun]] in a classic 1936 paper on the radiometric properties of light in three-dimensional space.&amp;lt;ref name=&quot;Gershun1936&quot;&amp;gt;Gershun, A. (1939). The Light Field. &#039;&#039;Journal of Mathematics and Physics&#039;&#039;, 18(1-4), &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;51-151&lt;/ins&gt;. (English translation of 1936 Russian paper).&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;WikiLF&quot;&amp;gt;[https://en.wikipedia.org/wiki/Light_field Wikipedia: Light field]&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;In the context of [[computer vision]] and graphics, the concept was further developed with the introduction of the 7D [[plenoptic function]] by [[Edward Adelson|Adelson]] and [[James Bergen|Bergen]] in 1991.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;&amp;gt;Adelson, E. H., &amp;amp; Bergen, J. R. (1991). The plenoptic function and the elements of early vision. In &amp;#039;&amp;#039;Computational Models of Visual Processing&amp;#039;&amp;#039; (pp. 3-20). MIT Press.&amp;lt;/ref&amp;gt; This function describes all possible light rays, parameterized by 3D position (x, y, z), 2D direction (θ, φ), wavelength (λ), and time (t).&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;In the context of [[computer vision]] and graphics, the concept was further developed with the introduction of the 7D [[plenoptic function]] by [[Edward Adelson|Adelson]] and [[James Bergen|Bergen]] in 1991.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;&amp;gt;Adelson, E. H., &amp;amp; Bergen, J. R. (1991). The plenoptic function and the elements of early vision. In &amp;#039;&amp;#039;Computational Models of Visual Processing&amp;#039;&amp;#039; (pp. 3-20). MIT Press.&amp;lt;/ref&amp;gt; This function describes all possible light rays, parameterized by 3D position (x, y, z), 2D direction (θ, φ), wavelength (λ), and time (t).&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Xinreality</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Light_field&amp;diff=34639&amp;oldid=prev</id>
		<title>Xinreality: Text replacement - &quot;e.g.,&quot; to &quot;for example&quot;</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Light_field&amp;diff=34639&amp;oldid=prev"/>
		<updated>2025-04-29T04:22:48Z</updated>

		<summary type="html">&lt;p&gt;Text replacement - &amp;quot;e.g.,&amp;quot; to &amp;quot;for example&amp;quot;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 04:22, 29 April 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l13&quot;&gt;Line 13:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 13:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===The Plenoptic Function===&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===The Plenoptic Function===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The most complete representation is the 7D plenoptic function, P(x, y, z, θ, φ, λ, t), describing the radiance of light at any 3D point (x,y,z), in any direction (θ, φ), for any wavelength (λ), at any time (t).&amp;lt;ref name=&quot;AdelsonBergen1991&quot;/&amp;gt; For many applications, this is overly complex and contains redundant information (&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;e.g., &lt;/del&gt;light doesn&#039;t typically change along a straight ray in free space—radiance invariance—unless wavelength or time are critical).&amp;lt;ref name=&quot;WikiLF&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The most complete representation is the 7D plenoptic function, P(x, y, z, θ, φ, λ, t), describing the radiance of light at any 3D point (x,y,z), in any direction (θ, φ), for any wavelength (λ), at any time (t).&amp;lt;ref name=&quot;AdelsonBergen1991&quot;/&amp;gt; For many applications, this is overly complex and contains redundant information (&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;for example &lt;/ins&gt;light doesn&#039;t typically change along a straight ray in free space—radiance invariance—unless wavelength or time are critical).&amp;lt;ref name=&quot;WikiLF&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===Simplified Light Fields===&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===Simplified Light Fields===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l59&quot;&gt;Line 59:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 59:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====Integral Imaging Displays====&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====Integral Imaging Displays====&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;These use a [[microlens array]] placed over a high-resolution display panel (&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;e.g., &lt;/del&gt;[[OLED]], [[LCD]]). Each microlens projects pixels underneath it into different directions, creating multiple views of the scene. Densely sampled views approximate a continuous light field, enabling [[autostereoscopic]] viewing.&amp;lt;ref name=&quot;Art1Martinez&quot;&amp;gt;Martinez-Corral, M., &amp;amp; Javidi, B. (2018). &quot;Fundamentals of 3D imaging and displays: A tutorial on integral imaging, light-field, and plenoptic systems.&quot; Proceedings of the IEEE, 106(5), 891-908.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;Jones2007&quot;&amp;gt;Jones, A., McDowall, I., Yamada, H., Bolas, M., &amp;amp; Debevec, P. (2007). Rendering for an interactive 360° light field display. &#039;&#039;ACM Transactions on Graphics (TOG)&#039;&#039;, 26(3), 40-es.&amp;lt;/ref&amp;gt; This is effectively the inverse of a plenoptic camera.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;These use a [[microlens array]] placed over a high-resolution display panel (&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;for example &lt;/ins&gt;[[OLED]], [[LCD]]). Each microlens projects pixels underneath it into different directions, creating multiple views of the scene. Densely sampled views approximate a continuous light field, enabling [[autostereoscopic]] viewing.&amp;lt;ref name=&quot;Art1Martinez&quot;&amp;gt;Martinez-Corral, M., &amp;amp; Javidi, B. (2018). &quot;Fundamentals of 3D imaging and displays: A tutorial on integral imaging, light-field, and plenoptic systems.&quot; Proceedings of the IEEE, 106(5), 891-908.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;Jones2007&quot;&amp;gt;Jones, A., McDowall, I., Yamada, H., Bolas, M., &amp;amp; Debevec, P. (2007). Rendering for an interactive 360° light field display. &#039;&#039;ACM Transactions on Graphics (TOG)&#039;&#039;, 26(3), 40-es.&amp;lt;/ref&amp;gt; This is effectively the inverse of a plenoptic camera.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====Multi-Plane and Varifocal Displays====&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====Multi-Plane and Varifocal Displays====&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l68&quot;&gt;Line 68:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 68:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====Compressive/Tensor Displays====&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====Compressive/Tensor Displays====&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;These use multiple layers of modulating panels (&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;e.g., &lt;/del&gt;LCDs) with computational algorithms to sculpt the light passing through them, synthesizing a target light field with relatively thin hardware.&amp;lt;ref name=&quot;Wetzstein2011&quot;&amp;gt;Wetzstein, G., Lanman, D., Hirsch, M., &amp;amp; Raskar, R. (2011). Layered 3D: Tomographic Image Synthesis for Attenuation-based Light Field and High Dynamic Range Displays. &#039;&#039;ACM Transactions on Graphics (TOG) - Proceedings of ACM SIGGRAPH 2011&#039;&#039;, 30(4), 95:1-95:12.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;Art4MMCommSoc&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;These use multiple layers of modulating panels (&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;for example &lt;/ins&gt;LCDs) with computational algorithms to sculpt the light passing through them, synthesizing a target light field with relatively thin hardware.&amp;lt;ref name=&quot;Wetzstein2011&quot;&amp;gt;Wetzstein, G., Lanman, D., Hirsch, M., &amp;amp; Raskar, R. (2011). Layered 3D: Tomographic Image Synthesis for Attenuation-based Light Field and High Dynamic Range Displays. &#039;&#039;ACM Transactions on Graphics (TOG) - Proceedings of ACM SIGGRAPH 2011&#039;&#039;, 30(4), 95:1-95:12.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;Art4MMCommSoc&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====Projector/Pinlight Arrays====&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====Projector/Pinlight Arrays====&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Systems using arrays of micro-projectors or scanned beams directed onto specialized screens (&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;e.g., &lt;/del&gt;[[lenticular lens|lenticular sheets]]), or near-eye displays using arrays of &quot;pinlights&quot; (point sources imaged through microlenses or pinholes) can also generate light fields.&amp;lt;ref name=&quot;Art4MMCommSoc&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Systems using arrays of micro-projectors or scanned beams directed onto specialized screens (&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;for example &lt;/ins&gt;[[lenticular lens|lenticular sheets]]), or near-eye displays using arrays of &quot;pinlights&quot; (point sources imaged through microlenses or pinholes) can also generate light fields.&amp;lt;ref name=&quot;Art4MMCommSoc&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====Commercial Examples and Prototypes====&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====Commercial Examples and Prototypes====&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l97&quot;&gt;Line 97:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 97:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* Enables post-capture refocusing and depth of field adjustments (primarily capture advantage).&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* Enables post-capture refocusing and depth of field adjustments (primarily capture advantage).&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* Potential to significantly reduce or eliminate the vergence-accommodation conflict in HMDs, increasing comfort.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* Potential to significantly reduce or eliminate the vergence-accommodation conflict in HMDs, increasing comfort.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* Captures rich scene information useful for various computational photography and computer vision tasks (&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;e.g., &lt;/del&gt;depth estimation).&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* Captures rich scene information useful for various computational photography and computer vision tasks (&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;for example &lt;/ins&gt;depth estimation).&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* Enables more seamless integration of virtual elements in AR/MR via techniques like light field passthrough.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* Enables more seamless integration of virtual elements in AR/MR via techniques like light field passthrough.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l104&quot;&gt;Line 104:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 104:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;Computational Complexity:&amp;#039;&amp;#039;&amp;#039; Processing and rendering light fields, especially in real-time for high-resolution VR/AR, requires substantial computational power. Optimization and [[machine learning]] approaches are active research areas.&amp;lt;ref name=&amp;quot;Art1Wang&amp;quot;&amp;gt;Wang, T. C., Efros, A. A., &amp;amp; Ramamoorthi, R. (2021). &amp;quot;Neural rendering and neural light transport for mixed reality.&amp;quot; IEEE Transactions on Visualization and Computer Graphics, 27(5), 2657-2671.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;Computational Complexity:&amp;#039;&amp;#039;&amp;#039; Processing and rendering light fields, especially in real-time for high-resolution VR/AR, requires substantial computational power. Optimization and [[machine learning]] approaches are active research areas.&amp;lt;ref name=&amp;quot;Art1Wang&amp;quot;&amp;gt;Wang, T. C., Efros, A. A., &amp;amp; Ramamoorthi, R. (2021). &amp;quot;Neural rendering and neural light transport for mixed reality.&amp;quot; IEEE Transactions on Visualization and Computer Graphics, 27(5), 2657-2671.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;Capture Hardware Complexity and Cost:&amp;#039;&amp;#039;&amp;#039; High-quality light field capture systems (plenoptic cameras, large camera arrays) remain complex, expensive, and often limited to controlled environments.&amp;lt;ref name=&amp;quot;Art1Overbeck&amp;quot;&amp;gt;Overbeck, R. S., Erickson, D., Evangelakos, D., Pharr, M., &amp;amp; Debevec, P. (2018). &amp;quot;A system for acquiring, processing, and rendering panoramic light field stills for virtual reality.&amp;quot; ACM Transactions on Graphics, 37(6), 197.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;Capture Hardware Complexity and Cost:&amp;#039;&amp;#039;&amp;#039; High-quality light field capture systems (plenoptic cameras, large camera arrays) remain complex, expensive, and often limited to controlled environments.&amp;lt;ref name=&amp;quot;Art1Overbeck&amp;quot;&amp;gt;Overbeck, R. S., Erickson, D., Evangelakos, D., Pharr, M., &amp;amp; Debevec, P. (2018). &amp;quot;A system for acquiring, processing, and rendering panoramic light field stills for virtual reality.&amp;quot; ACM Transactions on Graphics, 37(6), 197.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &#039;&#039;&#039;Display Technology Immaturity and Trade-offs:&#039;&#039;&#039; High-performance light field displays suitable for consumer VR/AR HMDs (high resolution, high brightness, wide [[field of view]] (FoV), large eye-box, low latency, compact form factor) are still largely under development. Current technologies often involve trade-offs, &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;e.g., &lt;/del&gt;between spatial and angular resolution.&amp;lt;ref name=&quot;Art1Wetzstein&quot;&amp;gt;Wetzstein, G., Lanman, D., Hirsch, M., &amp;amp; Raskar, R. (2012). &quot;Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting.&quot; ACM Transactions on Graphics, 31(4), 80.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;Art4MMCommSoc&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &#039;&#039;&#039;Display Technology Immaturity and Trade-offs:&#039;&#039;&#039; High-performance light field displays suitable for consumer VR/AR HMDs (high resolution, high brightness, wide [[field of view]] (FoV), large eye-box, low latency, compact form factor) are still largely under development. Current technologies often involve trade-offs, &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;for example &lt;/ins&gt;between spatial and angular resolution.&amp;lt;ref name=&quot;Art1Wetzstein&quot;&amp;gt;Wetzstein, G., Lanman, D., Hirsch, M., &amp;amp; Raskar, R. (2012). &quot;Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting.&quot; ACM Transactions on Graphics, 31(4), 80.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;Art4MMCommSoc&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;Limited Angular Resolution:&amp;#039;&amp;#039;&amp;#039; Practical systems often have limited angular resolution, which can restrict the range of parallax and the effectiveness in fully resolving VAC.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;Limited Angular Resolution:&amp;#039;&amp;#039;&amp;#039; Practical systems often have limited angular resolution, which can restrict the range of parallax and the effectiveness in fully resolving VAC.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;Eye-Box Size:&amp;#039;&amp;#039;&amp;#039; Some display approaches (especially holographic and integral imaging) can have a limited viewing zone (eye-box) where the effect is perceived correctly, requiring precise alignment or [[eye tracking]] compensation.&amp;lt;ref name=&amp;quot;Art4MMCommSoc&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;Eye-Box Size:&amp;#039;&amp;#039;&amp;#039; Some display approaches (especially holographic and integral imaging) can have a limited viewing zone (eye-box) where the effect is perceived correctly, requiring precise alignment or [[eye tracking]] compensation.&amp;lt;ref name=&amp;quot;Art4MMCommSoc&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Xinreality</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Light_field&amp;diff=34586&amp;oldid=prev</id>
		<title>Xinreality at 02:14, 27 April 2025</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Light_field&amp;diff=34586&amp;oldid=prev"/>
		<updated>2025-04-27T02:14:56Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 02:14, 27 April 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l1&quot;&gt;Line 1:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 1:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{{&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;DISPLAYTITLE:Light Field&lt;/del&gt;}}&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{{&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;see also|Terms|Technical Terms&lt;/ins&gt;}}&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;A &amp;#039;&amp;#039;&amp;#039;light field&amp;#039;&amp;#039;&amp;#039; (also spelled &amp;#039;&amp;#039;&amp;#039;lightfield&amp;#039;&amp;#039;&amp;#039;) is a fundamental concept in [[optics]] and [[computer graphics]] that describes the amount of [[light]] traveling in every direction through every point in [[space]].&amp;lt;ref name=&amp;quot;LevoyHanrahan1996&amp;quot;&amp;gt;Levoy, M., &amp;amp; Hanrahan, P. (1996). Light field rendering. &amp;#039;&amp;#039;Proceedings of the 23rd annual conference on Computer graphics and interactive techniques - SIGGRAPH &amp;#039;96&amp;#039;&amp;#039;, 31–42.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Gortler1996&amp;quot;&amp;gt;Gortler, S. J., Grzeszczuk, R., Szeliski, R., &amp;amp; Cohen, M. F. (1996). The Lumigraph. &amp;#039;&amp;#039;Proceedings of the 23rd annual conference on Computer graphics and interactive techniques - SIGGRAPH &amp;#039;96&amp;#039;&amp;#039;, 43–54.&amp;lt;/ref&amp;gt; Essentially, it&amp;#039;s a vector function that represents the [[radiance]] of light rays at any position and direction within a given volume or area. Understanding and utilizing light fields is crucial for advancing [[virtual reality]] (VR) and [[augmented reality]] (AR) technologies, as it allows for the capture and reproduction of visual scenes with unprecedented realism, including effects like [[parallax]], [[reflection]]s, [[refraction]]s, and [[refocusing]] after capture, while also aiming to solve critical issues like the [[vergence-accommodation conflict]].&amp;lt;ref name=&amp;quot;Ng2005&amp;quot;&amp;gt;Ng, R. (2005). Digital Light Field Photography. &amp;#039;&amp;#039;Ph.D. Thesis, Stanford University&amp;#039;&amp;#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2013&amp;quot;&amp;gt;Lanman, D., &amp;amp; Luebke, D. (2013). Near-eye light field displays. &amp;#039;&amp;#039;ACM SIGGRAPH 2013 Talks&amp;#039;&amp;#039;, 1-1.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;A &amp;#039;&amp;#039;&amp;#039;light field&amp;#039;&amp;#039;&amp;#039; (also spelled &amp;#039;&amp;#039;&amp;#039;lightfield&amp;#039;&amp;#039;&amp;#039;) is a fundamental concept in [[optics]] and [[computer graphics]] that describes the amount of [[light]] traveling in every direction through every point in [[space]].&amp;lt;ref name=&amp;quot;LevoyHanrahan1996&amp;quot;&amp;gt;Levoy, M., &amp;amp; Hanrahan, P. (1996). Light field rendering. &amp;#039;&amp;#039;Proceedings of the 23rd annual conference on Computer graphics and interactive techniques - SIGGRAPH &amp;#039;96&amp;#039;&amp;#039;, 31–42.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Gortler1996&amp;quot;&amp;gt;Gortler, S. J., Grzeszczuk, R., Szeliski, R., &amp;amp; Cohen, M. F. (1996). The Lumigraph. &amp;#039;&amp;#039;Proceedings of the 23rd annual conference on Computer graphics and interactive techniques - SIGGRAPH &amp;#039;96&amp;#039;&amp;#039;, 43–54.&amp;lt;/ref&amp;gt; Essentially, it&amp;#039;s a vector function that represents the [[radiance]] of light rays at any position and direction within a given volume or area. Understanding and utilizing light fields is crucial for advancing [[virtual reality]] (VR) and [[augmented reality]] (AR) technologies, as it allows for the capture and reproduction of visual scenes with unprecedented realism, including effects like [[parallax]], [[reflection]]s, [[refraction]]s, and [[refocusing]] after capture, while also aiming to solve critical issues like the [[vergence-accommodation conflict]].&amp;lt;ref name=&amp;quot;Ng2005&amp;quot;&amp;gt;Ng, R. (2005). Digital Light Field Photography. &amp;#039;&amp;#039;Ph.D. Thesis, Stanford University&amp;#039;&amp;#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2013&amp;quot;&amp;gt;Lanman, D., &amp;amp; Luebke, D. (2013). Near-eye light field displays. &amp;#039;&amp;#039;ACM SIGGRAPH 2013 Talks&amp;#039;&amp;#039;, 1-1.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Xinreality</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Light_field&amp;diff=34585&amp;oldid=prev</id>
		<title>Xinreality at 02:14, 27 April 2025</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Light_field&amp;diff=34585&amp;oldid=prev"/>
		<updated>2025-04-27T02:14:41Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 02:14, 27 April 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l142&quot;&gt;Line 142:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 142:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==References==&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==References==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;references /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;references /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[Category:Terms]]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[Category:Technical Terms]]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[Category:Optics]]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[Category:Computer graphics]]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[Category:Computational photography]]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[Category:3D imaging]]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[Category:Display technology]]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[Category:Virtual reality]]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[Category:Augmented reality]]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[Category:Computer vision]]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[Category:Emerging technologies]]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Xinreality</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Light_field&amp;diff=34584&amp;oldid=prev</id>
		<title>Xinreality at 02:12, 27 April 2025</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Light_field&amp;diff=34584&amp;oldid=prev"/>
		<updated>2025-04-27T02:12:56Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 02:12, 27 April 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l17&quot;&gt;Line 17:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 17:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===Simplified Light Fields===&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===Simplified Light Fields===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;For static scenes under constant illumination, the time (t) and wavelength (λ, often simplified to [[RGB]] channels) dependencies can often be dropped. Furthermore, due to the constancy of radiance along a ray in free space, the 3D spatial component can be reduced.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;For static scenes under constant illumination, the time (t) and wavelength (λ, often simplified to [[RGB]] channels) dependencies can often be dropped. Furthermore, due to the constancy of radiance along a ray in free space, the 3D spatial component can be reduced.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &#039;&#039;&#039;5D Light Field:&#039;&#039;&#039; Often represented as L = L(x, y, z, θ, φ).&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&amp;lt;ref name=&quot;Art1Faraday&quot;&amp;gt;Faraday, M. (1846). &quot;Experimental Researches in Electricity.&quot; Philosophical Transactions of the Royal Society of London, 136, 1-20.&amp;lt;/ref&amp;gt; &amp;lt;!-- Note: Article 1 incorrectly cites Faraday for 5D LF; using the reference marker as requested but the citation itself is questionable for this specific point --&amp;gt;&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &#039;&#039;&#039;5D Light Field:&#039;&#039;&#039; Often represented as L = L(x, y, z, θ, φ).&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;4D Light Field:&amp;#039;&amp;#039;&amp;#039; The most common simplification, often called the [[photic field]] or [[lumigraph]] in regions free of occluders.&amp;lt;ref name=&amp;quot;WikiLF&amp;quot;/&amp;gt; It captures radiance along rays without redundant data.&amp;lt;ref name=&amp;quot;LevoyHanrahan1996&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Gortler1996&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;4D Light Field:&amp;#039;&amp;#039;&amp;#039; The most common simplification, often called the [[photic field]] or [[lumigraph]] in regions free of occluders.&amp;lt;ref name=&amp;quot;WikiLF&amp;quot;/&amp;gt; It captures radiance along rays without redundant data.&amp;lt;ref name=&amp;quot;LevoyHanrahan1996&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Gortler1996&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Xinreality</name></author>
	</entry>
</feed>