Fixed foveated rendering

{{see also|Terms|Technical Terms}}
[[Fixed foveated rendering]] ('''FFR''') is a performance optimization technique used primarily in [[Virtual Reality]] (VR) and [[Augmented Reality]] (AR) applications. It leverages the characteristics of the [[human visual system]], specifically the high acuity in the center of vision ([[fovea]]) and lower acuity in the [[peripheral vision]], to reduce the rendering workload on the [[Graphics Processing Unit]] (GPU) without significantly impacting the perceived visual quality for the user.<ref name="QualcommFFRDef">Qualcomm Developer Network. "Foveated Rendering". Retrieved April 25, 2025.</ref><ref name="MetaFFR">Meta Quest Developer Center. "Foveated Rendering". Retrieved April 25, 2025.</ref> Unlike [[Dynamic Foveated Rendering]] (DFR), which uses [[eye tracking]] to dynamically adjust the high-resolution area based on the user's gaze, FFR renders a ''fixed'' area, typically the center of the display, at the highest quality, while progressively reducing the rendering quality towards the edges (periphery).<ref name="Patney2016">Patney, A., Salvi, M., Kim, J., Kaplanyan, A., Wyman, C., Benty, N., Luebke, D., & Lefohn, A. (2016). Towards foveated rendering for gaze-tracked virtual reality. ACM Transactions on Graphics, 35(6), 179:1-179:12.</ref> This technique significantly reduces the [[computational load]] on the GPU by allocating rendering resources more efficiently, leading to potential improvements in performance, higher [[frame rate]]s, reduced [[latency (engineering)|latency]], and extended battery life in mobile VR/AR systems.<ref name="Bastani2017">Bastani, B., Turner, E., Vieri, C., Jiang, H., Funt, B., & Balram, N. (2017). Foveated pipeline for AR/VR head-mounted displays. Information Display, 33(6), 14-35.</ref>


==How it Works==
The human eye does not perceive detail uniformly across its entire [[field of view]]. The [[fovea]], a small pit in the [[retina]], is densely packed with [[cone cell]]s and responsible for sharp central vision (for example, reading or focusing on details). Visual acuity is highest in this central region, covering only about 1-2 degrees of the [[visual field]], but it drops significantly in the surrounding areas. Outside this central area, the density of cones decreases, and [[peripheral vision]], while more sensitive to motion and contrast, has much lower spatial resolution (acuity).<ref name="HumanVisionBasics">Webvision. "Facts and Figures Concerning the Human Retina". University of Utah. Retrieved April 25, 2025.</ref><ref name="Strasburger2011">Strasburger, H., Rentschler, I., & Jüttner, M. (2011). Peripheral vision and pattern recognition: A review. Journal of Vision, 11(5), 13.</ref>


Fixed Foveated Rendering exploits this biological characteristic. The [[rendering pipeline]] is instructed to render the central region of the image (where the user is assumed to be looking most of the time, typically straight ahead) at full resolution or shading quality. Surrounding this central high-quality region, one or more concentric "rings" or areas are rendered at progressively lower quality.<ref name="NvidiaVRSFFR">NVIDIA Developer Blog. (2019-03-18). "Variable Rate Shading: A Volumetric Exploration". Retrieved April 25, 2025.</ref>
===Resolution Zones===
FFR typically divides the [[viewport]] into 2-4 distinct zones, rendered at different quality levels:
* '''Foveal zone:''' The central area rendered at full resolution (for example 1x shading rate).
* '''Parafoveal zone:''' An intermediate area surrounding the center, rendered with moderate resolution or shading reduction (for example 1/2x or 1/4x rate).
* '''Peripheral zone:''' The outer area rendered with significant resolution or shading reduction (for example 1/8x or 1/16x rate).
* '''Far peripheral zone:''' (Sometimes implemented) The extreme edges rendered at the lowest quality.
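The zone layout above can be sketched as a small lookup from a normalized screen position to a shading rate. The zone radii and rates below are illustrative assumptions for demonstration, not values taken from any particular SDK:

```python
import math

# Illustrative zone radii (as fractions of the half-diagonal) and shading
# rates. These numbers are assumptions, not values from any vendor SDK.
ZONES = [
    (0.35, 1.0),   # foveal zone: full (1x) shading rate
    (0.60, 0.5),   # parafoveal zone: 1/2x rate
    (0.85, 0.25),  # peripheral zone: 1/4x rate
]
FAR_PERIPHERAL_RATE = 1.0 / 16.0  # far peripheral zone: lowest quality


def shading_rate(x, y):
    """Return the shading rate for a point in normalized screen
    coordinates, where (0, 0) is the screen center and the corners
    lie at distance 1 from it."""
    r = math.hypot(x, y)
    for radius, rate in ZONES:
        if r <= radius:
            return rate
    return FAR_PERIPHERAL_RATE


# The center is shaded at full rate; the corners at the lowest rate.
assert shading_rate(0.0, 0.0) == 1.0
assert shading_rate(0.7, 0.7) == FAR_PERIPHERAL_RATE
```

A real implementation would encode the same mapping once into a per-tile rate image consumed by the GPU, rather than evaluating it per pixel on the CPU.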




===[[Variable Rate Shading]] (VRS)===
This is the most common and efficient modern technique. Modern GPUs from [[NVIDIA]] (Turing architecture onwards), [[AMD]] (RDNA architecture onwards), and mobile GPU vendors like [[Qualcomm]] (Adreno) support VRS.<ref name="NvidiaVRSFFR"/><ref name="AMDRDNA2">AMD. "AMD RDNA™ 2 Architecture". Retrieved April 25, 2025.</ref><ref name="QualcommFFRDef"/> VRS allows developers to specify the rate at which pixel shaders (also known as [[fragment shader]]s) are executed for different regions of the screen. In the peripheral zones defined by FFR, a single shader operation can be used to color a larger block of pixels (for example 2x2, 4x4), significantly reducing the computational load.<ref name="MicrosoftVRS">Microsoft DirectX Dev Blog. (2019-03-19). "DirectX 12 Variable Rate Shading". Retrieved April 25, 2025.</ref><ref name="KhronosVRS">Khronos Group. "Vulkan API Specification - VK_KHR_fragment_shading_rate". Retrieved April 25, 2025.</ref> APIs like [[DirectX 12 Ultimate]], [[Vulkan]], and [[OpenGL]] extensions provide interfaces for VRS.<ref name="MicrosoftVRS"/><ref name="KhronosVRS"/> Developers specify a "shading rate map" or use geometric primitives to define which parts of the screen receive full or reduced shading effort. FFR often uses a simple, static map favouring the center.<ref name="Vaidyanathan2014">Vaidyanathan, K., Salvi, M., Toth, R., Foley, T., Akenine-Möller, T., Nilsson, J., Munkberg, J., Hasselgren, J., Sugihara, M., Clarberg, P., & Lefohn, A. (2014). Coarse pixel shading. In High Performance Graphics, 9-18.</ref>
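As a rough illustration of why coarse shading blocks save work, the following sketch estimates pixel-shader invocations under a VRS-style rate map. The screen size, zone area fractions, and block sizes are illustrative assumptions, not measurements from any GPU:

```python
# Sketch: estimate pixel-shader invocations saved by coarse shading blocks.
# With a 2x2 block one shader invocation colors 4 pixels; with 4x4, 16.
# The screen size and per-zone area fractions below are assumptions.

WIDTH, HEIGHT = 2048, 2048
total_pixels = WIDTH * HEIGHT

# (fraction of screen area, block edge length in pixels) per zone
zones = [
    (0.25, 1),  # foveal: 1x1 blocks, full rate
    (0.35, 2),  # parafoveal: 2x2 blocks (1/4 the invocations)
    (0.40, 4),  # peripheral: 4x4 blocks (1/16 the invocations)
]

invocations = sum(frac * total_pixels / (block * block)
                  for frac, block in zones)
savings = 1.0 - invocations / total_pixels
print(f"shader invocations: {invocations:.0f} ({savings:.0%} saved)")
```

Under these assumed fractions roughly two thirds of the fragment-shading work disappears, which is consistent with the order of magnitude of the gains reported for FFR elsewhere in this article.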


===Multi-Resolution Rendering===
===Lens-Matched Shading===
A specialized form of foveated rendering primarily for VR that accounts for the inherent distortion introduced by [[VR headset]] lenses. Since VR lenses naturally compress the pixel density towards the periphery when projecting onto the curved lens surface, lens-matched shading aligns the foveation pattern (reducing shading rate or resolution) with this optical distortion. This synergy avoids redundant rendering effort in areas already optically compressed, maximizing efficiency.<ref name="Vlachos2016">Vlachos, A. (2016). Advanced VR Rendering. Game Developers Conference 2016.</ref>


Developers often have control over the intensity of FFR, typically offering profiles (for example "Low", "Medium", "High") which adjust the size of the high-quality central region and the degree of quality reduction in the periphery.<ref name="MetaFFR"/> This allows balancing performance gains against potential visual artifacts.
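Such intensity profiles can be modeled as a simple lookup from a profile name to foveation parameters. The profile names mirror those mentioned above, but the radii and rates here are hypothetical, not taken from any runtime:

```python
# Hypothetical FFR intensity profiles: higher intensity shrinks the
# full-quality central region and coarsens the periphery more aggressively.
# The radii (fractions of the half-diagonal) and rates are illustrative.
PROFILES = {
    "Low":    {"foveal_radius": 0.60, "peripheral_rate": 0.50},
    "Medium": {"foveal_radius": 0.45, "peripheral_rate": 0.25},
    "High":   {"foveal_radius": 0.30, "peripheral_rate": 0.125},
}


def select_profile(name):
    """Look up a foveation profile, falling back to the mildest one."""
    return PROFILES.get(name, PROFILES["Low"])


# Higher intensity -> smaller full-quality region, coarser periphery.
assert (select_profile("High")["foveal_radius"]
        < select_profile("Low")["foveal_radius"])
```

An application might switch profiles at runtime, for example raising the intensity only when the GPU starts missing frame deadlines.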


==Perceptual Considerations and Artifacts==
* '''Blurring:''' A general loss of detail in peripheral vision beyond what is naturally perceived.
* '''Border Visibility:''' Noticeable transitions or lines between the different resolution/shading zones.
* '''Blockiness:''' Visible pixel blocks if very low shading rates (for example 4x4) are used.


Techniques such as [[temporal anti-aliasing]] (TAA), careful tuning of zone transitions, adaptive contrast preservation, and ensuring compatibility with [[post-processing (video)|post-processing]] effects help minimize these artifacts.<ref name="Albert2017">Albert, R., Patney, A., Luebke, D., & Kim, J. (2017). Latency requirements for foveated rendering in virtual reality. ACM Transactions on Applied Perception, 14(4), 25:1-25:13.</ref>
* '''Improved Performance:''' The primary benefit is a significant reduction in the GPU workload, particularly fragment shading costs, which are often a bottleneck in high-resolution VR/AR rendering. This leads to higher and more stable [[frame rate]]s.<ref name="QualcommFFRDef"/> Consistent frame rates are crucial for VR to prevent [[motion sickness]]. Performance gains can range from 30-60% depending on the implementation and intensity.<ref name="Kim2019Perf">Kim, J., Toma, S., & Varshney, A. (2019). Perceptual model for adaptive local shading and refresh rate. IEEE VR 2019.</ref>
* '''Increased Visual Fidelity Potential:''' By saving GPU resources in the periphery, developers can potentially increase the baseline rendering resolution or enable more demanding graphical effects in the central, high-quality region.
* '''Reduced Power Consumption & Heat:''' Less GPU computation translates directly to lower power draw and reduced thermal load. This is especially important for battery-powered standalone VR headsets (for example [[Meta Quest]] series) and AR glasses, extending usage time and improving user comfort.<ref name="MetaFFR"/>
* '''Simpler Implementation (vs. DFR):''' FFR does not require eye-tracking hardware or the complex software integration associated with it (like low-latency gaze prediction). This makes it easier and cheaper to implement on a wider range of devices.
* '''Compatibility:''' Can often be integrated into existing [[rendering pipeline]]s without complete rewrites, especially with modern API support for VRS.
| '''Complexity''' || Lower (implementation & runtime) || Higher (hardware integration, calibration, gaze prediction, software stack)
|-
| '''Potential Gains''' || Good (for example 30-60% workload reduction)<ref name="Kim2019Perf"/> || Potentially higher (for example up to 70-80% reduction)<ref name="Kim2021Compare">Kim, A., Reipschläger, P., & Dachselt, R. (2021). Comparative evaluation of foveated rendering techniques in virtual reality. IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 263-272.</ref> (Allows more aggressive periphery reduction)
|-
| '''User Experience''' || Can perceive lower quality if looking off-center || High quality generally always where user looks (if tracking is accurate & low latency)
When implementing FFR, developers should consider several factors to maximize benefits and minimize drawbacks:
* '''[[Field of View]] (FoV) Coverage:''' Adjust foveation zone sizes and intensity based on the target headset's FoV. Wider FoV headsets might require different tuning than narrower ones.
* '''Zone Boundaries:''' Utilize smooth transitions (for example using VRS features that allow gradual changes) between quality zones to reduce the visibility of borders.
* '''Content Awareness:''' Adapt the technique based on scene characteristics. For example, scenes with less peripheral detail might tolerate more aggressive FFR.
* '''[[Post-processing (video)|Post-Processing]] Compatibility:''' Ensure FFR integrates well with other rendering techniques, particularly [[temporal anti-aliasing]] (TAA) and other post-effects, which can sometimes interact poorly with sharp resolution changes.
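The "smooth transitions" recommendation above can be illustrated by blending the shading rate across zone boundaries with a smoothstep falloff instead of a hard cutoff. The radii below are assumed for demonstration, not taken from any runtime:

```python
import math


def smoothstep(edge0, edge1, x):
    """Cubic Hermite interpolation, clamped to [0, 1]."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)


def blended_rate(x, y, inner=0.35, outer=0.85, min_rate=0.25):
    """Shading rate that falls smoothly from 1x at the screen center to
    min_rate in the periphery, avoiding a visible zone border.
    Radii are illustrative, in normalized screen units where the
    corners lie at distance 1 from the center."""
    r = math.hypot(x, y)
    return 1.0 - (1.0 - min_rate) * smoothstep(inner, outer, r)


assert blended_rate(0.0, 0.0) == 1.0         # full rate at the center
assert blended_rate(1.0, 0.0) == 0.25        # minimum rate at the edge
assert 0.25 < blended_rate(0.45, 0.0) < 1.0  # smooth in between
```

In practice the continuous rate would be quantized to the nearest rate the hardware supports (1x, 1/2x, 1/4x, ...) when the VRS rate image is built, which still softens the boundary compared with a single hard ring.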
==References==
<references />
[[Category:Terms]]
[[Category:Technical Terms]]
[[Category:Rendering (computer graphics)]]
[[Category:Computer graphics techniques]]
[[Category:Virtual reality]]
[[Category:Augmented reality]]
[[Category:Performance optimization (computer graphics)]]
[[Category:Display technology]]
[[Category:Human visual system]]