Foveated rendering
# '''Display Update:''' The final, composited multi-resolution image is presented to the user.
This entire loop must be completed within the frame budget (for example under 11.1 ms for a 90 Hz display) to ensure a smooth experience.
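The frame-budget arithmetic above is simple but worth making explicit: the budget is the reciprocal of the refresh rate. A minimal sketch (the function name is illustrative, not from any SDK):

```python
# Hypothetical sketch: the per-frame time budget that the whole foveated
# rendering loop (gaze read, render, composite, display) must fit inside.

def frame_budget_ms(refresh_hz: float) -> float:
    """Return the per-frame time budget in milliseconds for a given refresh rate."""
    return 1000.0 / refresh_hz

for hz in (72, 90, 120):
    print(f"{hz} Hz -> {frame_budget_ms(hz):.1f} ms per frame")
```

At 90 Hz this yields the 11.1 ms budget mentioned above; higher refresh rates such as 120 Hz shrink the budget further, which is part of why foveation matters.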
=== Methods of Quality Reduction ===
The term "reducing quality" encompasses several distinct techniques that can be applied to the peripheral regions to save computational power. These methods can be used individually or in combination:<ref name="IntegrativeView" />
* '''Resolution Scaling / Subsampling:''' This is the most common and intuitive method. The peripheral regions are rendered into a smaller off-screen buffer (for example at half or quarter resolution) and then upscaled to fit the final display. This directly reduces the number of pixels that need to be processed and shaded.<ref name="GazeContingent2D">{{cite web |url=http://stanford.edu/class/ee367/Winter2017/mehra_sankar_ee367_win17_report.pdf |title=Gaze Contingent Foveated Rendering for 2D Displays}}</ref>
* '''Shading Rate Reduction:''' This method focuses on reducing the workload of the [[pixel shader]] (also known as a fragment shader). Instead of executing a complex shading program for every single pixel in the periphery, a single shader result can be applied to a block of multiple pixels. This is the core mechanism behind [[Variable Rate Shading]] (VRS).<ref name="TOYF_Paper" /><ref name="AutoVRSE">{{cite web |url=https://www.autovrse.com/foveated-rendering |title=What is Foveated Rendering? - autovrse}}</ref>
* '''Geometric Simplification:''' The geometric complexity of the scene can be reduced in the periphery. This involves using lower-polygon [[level of detail]] models for objects that are outside the user's direct gaze.
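The savings from the first of these methods, resolution scaling, can be estimated with a back-of-the-envelope calculation. A minimal sketch, assuming a foveal inset rendered at full resolution and a uniform per-axis scale for the periphery (function and parameter names are illustrative):

```python
# Hypothetical sketch of the pixel-work savings from resolution scaling alone.
# The foveal inset covers some fraction of the screen at full resolution; the
# remainder is rendered at a reduced per-axis scale (0.5 = half resolution per
# axis, i.e. one quarter of the pixels) and upscaled for display.

def shaded_pixel_fraction(foveal_area_fraction: float, periphery_scale: float) -> float:
    """Fraction of full-resolution pixel work remaining after foveation."""
    foveal = foveal_area_fraction                                   # shaded at full rate
    periphery = (1.0 - foveal_area_fraction) * periphery_scale ** 2  # shaded coarsely
    return foveal + periphery

# A 20% foveal inset with a half-resolution periphery:
print(f"{shaded_pixel_fraction(0.2, 0.5):.0%} of full-res pixel work")  # 40%
```

Under these illustrative numbers, 60% of the pixel-shading work disappears, which is in the same ballpark as the savings figures reported for shipping headsets later in this article.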
==== Variable Rate Shading (VRS) ====
[[Variable Rate Shading]] (VRS) is a hardware feature available on modern GPUs (for example [[NVIDIA]] Turing architecture and newer, [[AMD]] RDNA 2 and newer, [[Intel]] Gen11+) that provides fine-grained control over the pixel shading rate.<ref name="TOYF_Paper" /><ref name="OpenXRToolkit">{{cite web |url=https://mbucchia.github.io/OpenXR-Toolkit/fr.html |title=Foveated Rendering - OpenXR Toolkit}}</ref><ref name="microsoft2019">{{cite web |url=https://devblogs.microsoft.com/directx/variable-rate-shading-a-scalpel-in-a-world-of-sledgehammers/ |title=Variable Rate Shading: a scalpel in a world of sledgehammers |publisher=Microsoft DirectX Blog |year=2019}}</ref> It allows a single pixel shader operation to compute the color for a block of pixels, such as a 2x2 or 4x4 block, instead of just a single pixel.<ref name="VarjoAPI">{{cite web |url=https://developer.varjo.com/docs/native/foveated-rendering-api |title=Foveated Rendering - Varjo for Developers}}</ref><ref name="PicoUnrealOpenXR">{{cite web |url=https://developer.picoxr.com/document/unreal-openxr/fixed-foveated-rendering/ |title=Foveated rendering - PICO Unreal OpenXR Plugin}}</ref> The technique supports shading rates from 1×1 (full quality) to 4×4 (coarse, one shade per 16 pixels).
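A foveated renderer typically drives these shading rates from the angular distance between each screen tile and the gaze point. The sketch below shows the mapping conceptually; the eccentricity thresholds are purely illustrative assumptions, not values from any vendor's SDK, and real engines tune them per headset:

```python
# Hypothetical sketch: choosing a VRS coarse-shading block size for a screen
# tile from its angular distance (eccentricity) to the current gaze point.
# Thresholds below are illustrative placeholders.

def shading_rate(eccentricity_deg: float) -> tuple[int, int]:
    """Return the (x, y) coarse-shading block size for a tile."""
    if eccentricity_deg < 5.0:     # foveal region: full 1x1 quality
        return (1, 1)
    elif eccentricity_deg < 15.0:  # near periphery: one shade per 2x2 block
        return (2, 2)
    else:                          # far periphery: one shade per 4x4 block
        return (4, 4)

for ecc in (2.0, 10.0, 30.0):
    rx, ry = shading_rate(ecc)
    print(f"{ecc:>4}\N{DEGREE SIGN} -> {rx}x{ry} (1 shade per {rx * ry} pixels)")
```

In practice this per-tile decision is encoded into a shading-rate image or attachment that the GPU consumes directly, rather than being branched on in application code.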
==== Multi-View Rendering & Quad Views ====
'''Advantages:'''
* '''Maximum Performance:''' ETFR allows for much more aggressive foveation: a smaller foveal region and a more significant quality reduction in the periphery, resulting in substantially greater performance and power savings.<ref name="VRX_FR_Types" /><ref name="PicoUnrealOpenXR" />
* '''Perceptually Seamless:''' When implemented with low latency, the effect is imperceptible to the user.<ref name="VarjoWhatIs" />
|-
! Performance Savings
| Moderate. Reduces GPU load by rendering fewer pixels in the periphery (for example 26-43% savings reported for Meta Quest).<ref name="QuestProPerformance">{{cite web |url=https://www.uploadvr.com/quest-pro-foveated-rendering-performance/ |title=Quest Pro Foveated Rendering GPU Savings Detailed}}</ref>
| Significant. Allows for more aggressive degradation, leading to greater GPU savings (for example 33-52% savings reported for Meta Quest Pro).<ref name="VRX_FR_Types" /><ref name="QuestProPerformance" />
|-
! Ideal Use Cases
| 2019 (December) || SDK support || [[Meta Quest|Oculus Quest]] || Fixed Foveated Rendering exposed in SDK, marking first large-scale commercial deployment.<ref name="venturebeat2019">{{cite web |url=https://venturebeat.com/2019/12/22/oculus-quest-gets-dynamic-fixed-foveated-rendering/ |title=Oculus Quest gets dynamic fixed foveated rendering |publisher=VentureBeat |date=2019-12-22}}</ref>
|-
| 2020 || Neural reconstruction || [[Facebook Reality Labs]] || [[DeepFovea]] demonstrated AI-based foveated reconstruction with up to 10-14× pixel count reduction.<ref name="deepfovea2019">{{cite web |url=https://dl.acm.org/doi/10.1145/3306307.3328186 |title=DeepFovea: Neural Reconstruction for Foveated Rendering |publisher=ACM SIGGRAPH |year=2019}}</ref>
|-
| 2021 || Chipset integration || [[Qualcomm XR2]] || Built-in support for foveated rendering and eye tracking in standalone VR chipset.<ref name="WikipediaFR" />
|-
| 2022 || Consumer ETFR || [[Meta Quest Pro]] || First mainstream standalone headset with Eye-Tracked Foveated Rendering, achieving 33-52% performance gains.<ref name="uploadvr2022" /><ref name="WikipediaFR" />
=== Hardware Limitations ===
* '''Mobile vs Desktop Performance:''' Mobile GPU architectures see smaller benefits than console/desktop GPUs: Quest Pro achieves 33-45% savings, while PSVR2 reaches 72%.<ref name="uploadvr2022" /><ref name="unity2023" />
* '''Cost and Complexity:''' Eye-tracking hardware increases headset cost, weight, and power consumption.
* '''Calibration Requirements:''' Individual calibration is typically required for each user to map eye movements accurately.