There had been previous attempts to solve the vergence-accommodation conflict, such as using integral imaging techniques to synthesize [[light field]]s from scene content or displaying multiple focal planes, but these suffered from problems such as low-fidelity accommodation cues, low resolution, and a narrow field of view. The focal surface display is expected to generate high-fidelity accommodation cues using off-the-shelf optical components. The spatial light modulator, placed between the display screen and the eyepiece, produces variable focus across the display's field of view. <ref name="2"></ref>


As of 2017, there was no known planned commercial release for focal surface display technology. <ref name="6"></ref>


==Development and announcement of the focal surface display==
The Oculus focal surface display project was in development for a long time. According to a research scientist at Oculus Research, “manipulating focus isn’t quite the same as modulating intensity or other more usual tasks in computational displays, and it took us a while to get to the correct mathematical formulation that finally brought everything together. Our overall motivation was to do things the ‘right’ way: solid engineering combined with the math and algorithms to back it up. We weren’t going to be happy with something that only worked on paper or a hacked together prototype that didn’t have any rigorous explanation of why it worked.” <ref name="1"></ref>


In May 2017, Oculus Research, the VR and AR R&D division of Oculus, announced the new display technology. During the same period, it published a research paper on the focal surface display, authored by Oculus scientists Nathan Matsuda, Alexander Fix, and [[Douglas Lanman]]. The research was also presented at the SIGGRAPH conference in July 2017. <ref name="7">Lang, B. (2017). Oculus Research reveals “groundbreaking” focal surface display. Retrieved from https://www.roadtovr.com/oculus-research-demonstrate-groundbreaking-focal-surface-display/</ref>

Different HMD architectures have been proposed to solve this problem and depict correct or near-correct retinal blur (Figure 3). Focal surface displays augment regular HMDs with a spatial light modulator that “acts as a dynamic freeform lens, shaping synthesized focal surfaces to conform to the virtual scene geometry.” Furthermore, Oculus Research has introduced “a framework to decompose target focal stacks and depth maps into one or more pairs of piecewise smooth focal surfaces and underlying display images,” building on “recent developments in ‘optimized blending’ to implement a multifocal display that allows the accurate depiction of occluding, semi-transparent, and reflective objects.” <ref name="5"></ref>

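As a rough illustration of the “dynamic freeform lens” idea (and not the method published by Oculus Research), the sketch below computes a phase pattern for a phase-only spatial light modulator by naively generalizing the paraxial thin-lens phase to a per-pixel focal-power map derived from a depth assignment. The wavelength, pixel pitch, resolution, and depth values are assumed for the example.

<syntaxhighlight lang="python">
import numpy as np

# Conceptual sketch only: a spatially varying ("freeform") lens phase for a
# phase-only SLM, derived from a per-pixel focal-power map. The published
# system derives its phase patterns differently; all parameters here are
# illustrative assumptions.

wavelength = 532e-9          # illumination wavelength in meters (assumed)
pixel_pitch = 8e-6           # SLM pixel pitch in meters (assumed)
h, w = 1080, 1920            # SLM resolution (assumed)

# Per-pixel focal power in diopters, e.g. derived from a scene depth map:
# left half focuses at 0.5 m, right half at 1.0 m.
focal_power = np.full((h, w), 2.0)
focal_power[:, w // 2:] = 1.0

# Pixel coordinates centered on the optical axis, in meters.
y, x = np.mgrid[0:h, 0:w].astype(np.float64)
x = (x - w / 2) * pixel_pitch
y = (y - h / 2) * pixel_pitch
r2 = x ** 2 + y ** 2

# Paraxial thin-lens phase, phi = -pi * r^2 / (lambda * f), generalized
# naively to a per-pixel focal power P = 1/f.
phase = -(np.pi / wavelength) * focal_power * r2

# Wrap to [0, 2*pi), as a phase-only SLM would display it.
slm_pattern = np.mod(phase, 2.0 * np.pi)
</syntaxhighlight>

A practical system must also account for the eyepiece optics, diffraction efficiency, and color, which this sketch ignores.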

In contrast to multifocal displays with fixed focal surfaces, the phase modulator shapes focal surfaces to conform to the scene geometry. A set of color images is produced, with each image mapped onto a corresponding focal surface (Figure 4), and visual appearance is rendered by “tracing rays from the eye through the optics, and accumulating the color values for each focal surface.” Furthermore, Matsuda ''et al''. (2017) explain that their “algorithm sequentially solves for first the focal surfaces, given the target depth map, and then the color images; full joint optimization is left for future work. Focal surfaces are adapted by nonlinear least squares optimization, minimizing the distance between the nearest depicted surface and the scene geometry. The color images, paired with each surface, are determined by linear least squares methods.” <ref name="5"></ref>

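The sequential solve can be illustrated with a toy one-dimensional sketch: a nonlinear least squares fit of a smooth focal surface to a target depth map, followed by a linear least squares solve for the display image under a simplified defocus model. The problem size, smoothness weight, and Gaussian blur stand-in are illustrative assumptions and do not reproduce the optimization in Matsuda ''et al''. (2017).

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import least_squares

# Toy 1-D sketch of the two-stage solve: sizes, weights, and the blur model
# are illustrative assumptions, not the published system.
n_pix = 64
target_depth = np.where(np.arange(n_pix) < n_pix // 2, 2.0, 0.5)  # diopters

# Stage 1: fit a focal surface (diopters per pixel) to the depth map by
# nonlinear least squares, with a penalty that keeps the surface smooth.
smooth_weight = 5.0

def residuals(surface):
    data_term = surface - target_depth              # match the target depth map
    smooth_term = smooth_weight * np.diff(surface)  # discourage abrupt jumps
    return np.concatenate([data_term, smooth_term])

surface = least_squares(residuals, x0=np.ones(n_pix)).x

# Stage 2: solve for the display image by linear least squares. The forward
# model blurs each display pixel with a Gaussian whose width grows with the
# residual focus error (a stand-in for retinal defocus blur).
target_image = np.random.default_rng(0).random(n_pix)
focus_error = np.abs(surface - target_depth)
idx = np.arange(n_pix)
sigma = 0.5 + 2.0 * focus_error
A = np.exp(-0.5 * ((idx[None, :] - idx[:, None]) / sigma[:, None]) ** 2)
A /= A.sum(axis=1, keepdims=True)                   # normalize each blur kernel
color, *_ = np.linalg.lstsq(A, target_image, rcond=None)
</syntaxhighlight>

Solving the two stages sequentially keeps each subproblem tractable; as quoted above, full joint optimization of surfaces and images is left as future work.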

The focal surface display research team demonstrated that the technology depicts more accurate retinal blur with fewer multiplexed images, while maintaining high resolution throughout the user’s accommodative range. <ref name="5"></ref>