{{see also|Terms|Technical Terms}}
[[Fixed foveated rendering]] ('''FFR''') is a performance optimization technique used primarily in [[Virtual Reality]] (VR) and [[Augmented Reality]] (AR) applications. It leverages the characteristics of the [[human visual system]] – specifically the high acuity in the center of vision ([[fovea]]) and lower acuity in the [[peripheral vision]] – to reduce the rendering workload on the [[Graphics Processing Unit]] (GPU) without significantly impacting the perceived visual quality for the user.<ref name="QualcommFFRDef">Qualcomm Developer Network. "Foveated Rendering". Retrieved April 25, 2025.</ref><ref name="MetaFFR">Meta Quest Developer Center. "Foveated Rendering". Retrieved April 25, 2025.</ref> Unlike [[Dynamic Foveated Rendering]] (DFR), which uses [[eye tracking]] to dynamically adjust the high-resolution area based on the user's gaze, FFR renders a ''fixed'' area, typically the center of the display, at the highest quality, while progressively reducing the rendering quality towards the edges (periphery).<ref name="Patney2016">Patney, A., Salvi, M., Kim, J., Kaplanyan, A., Wyman, C., Benty, N., Luebke, D., & Lefohn, A. (2016). Towards foveated rendering for gaze-tracked virtual reality. ACM Transactions on Graphics, 35(6), 179:1–179:12.</ref> This technique significantly reduces the [[computational load]] on the GPU by allocating rendering resources more efficiently, leading to potential improvements in performance, higher [[frame rate]]s, reduced [[latency (engineering)|latency]], and extended battery life in mobile VR/AR systems.<ref name="Bastani2017">Bastani, B., Turner, E., Vieri, C., Jiang, H., Funt, B., & Balram, N. (2017). Foveated pipeline for AR/VR head-mounted displays. Information Display, 33(6), 14-35.</ref>
==How it Works==
The human eye does not perceive detail uniformly across its entire [[field of view]]. The [[fovea]], a small pit in the [[retina]], is densely packed with [[cone cell]]s and responsible for sharp central vision (e.g., reading, focusing on details). Visual acuity is highest in this central region, covering only about 1-2 degrees of the [[visual field]], but it drops significantly in the surrounding areas. Outside this central area, the density of cones decreases, and [[peripheral vision]], while more sensitive to motion and contrast, has much lower spatial resolution (acuity).<ref name="HumanVisionBasics">Webvision. "Facts and Figures Concerning the Human Retina". University of Utah. Retrieved April 25, 2025.</ref><ref name="Strasburger2011">Strasburger, H., Rentschler, I., & Jüttner, M. (2011). Peripheral vision and pattern recognition: A review. Journal of Vision, 11(5), 13.</ref>

Fixed Foveated Rendering exploits this biological characteristic. The [[rendering pipeline]] is instructed to render the central region of the image (where the user is assumed to be looking most of the time, typically straight ahead) at full resolution or shading quality. Surrounding this central high-quality region, one or more concentric "rings" or areas are rendered at progressively lower quality.<ref name="NvidiaVRSFFR">NVIDIA Developer Blog. (2019-03-18). "Variable Rate Shading: A Volumetric Exploration". Retrieved April 25, 2025.</ref>

===Resolution Zones===
FFR typically divides the [[viewport]] into 2-4 distinct zones, rendered at different quality levels:
* '''Foveal zone:''' The central area rendered at full resolution (e.g., 1x shading rate).
* '''Mid-peripheral zone:''' A surrounding ring rendered at a reduced shading rate (e.g., one shading result per 2x2 block of pixels).
* '''Peripheral zone:''' The outermost area rendered at the lowest quality (e.g., one shading result per 4x4 block of pixels).
The boundaries between these zones and the specific resolution or shading reduction factors are carefully tuned by developers to balance performance gains against potential visual artifacts.<ref name="Meng2020">Meng, X., Du, R., Varshney, A. (2020). Eye-dominance-guided Foveated Rendering. IEEE Transactions on Visualization and Computer Graphics, 26(5), 1972-1980.</ref> The key aspect of FFR is that these regions are ''fixed'' relative to the display or viewport, assuming the user's fovea is generally aligned with the center.
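As a concrete illustration, the following minimal sketch classifies each pixel of a viewport into one of three fixed zones by its normalized radial distance from the center, returning the block size over which a single shading result would be reused. The zone radii and rates are hypothetical tuning values, not constants from any particular SDK.

<syntaxhighlight lang="cpp">
#include <cmath>
#include <cstdint>

// Hypothetical zone radii (in normalized screen units) and shading rates.
// Real implementations tune these per headset and per quality profile.
struct FoveationZones {
    float fovealRadius = 0.35f;  // full quality inside this radius
    float midRadius    = 0.65f;  // reduced quality out to this radius
    uint8_t midRate    = 2;      // one shading result per 2x2 pixel block
    uint8_t outerRate  = 4;      // one shading result per 4x4 pixel block
};

// Returns the shading-rate divisor (1, 2, or 4) for pixel (x, y) in a
// width x height viewport, measured from the viewport center.
uint8_t ShadingRateFor(int x, int y, int width, int height,
                       const FoveationZones& z) {
    float dx = (x - width  * 0.5f) / (width  * 0.5f);
    float dy = (y - height * 0.5f) / (height * 0.5f);
    float r  = std::sqrt(dx * dx + dy * dy);   // 0 at center, ~1.41 in corners
    if (r < z.fovealRadius) return 1;          // foveal zone: full rate
    if (r < z.midRadius)    return z.midRate;  // mid-peripheral ring
    return z.outerRate;                        // outer periphery
}
</syntaxhighlight>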
==Technical Implementation==
FFR is typically implemented using specific [[GPU]] features and graphics [[API]]s. Several technical approaches exist:

===[[Variable Rate Shading]] (VRS)===
This is the most common and efficient technique on current hardware. GPUs from [[NVIDIA]] (Turing architecture onwards), [[AMD]] (RDNA 2 architecture onwards), and mobile GPU vendors like [[Qualcomm]] (Adreno) support VRS.<ref name="NvidiaVRSFFR"/><ref name="AMDRDNA2">AMD. "AMD RDNA™ 2 Architecture". Retrieved April 25, 2025.</ref><ref name="QualcommFFRDef"/> VRS allows developers to specify the rate at which pixel shaders (also known as [[fragment shader]]s) are executed for different regions of the screen. In the peripheral zones defined by FFR, a single shader operation can be used to color a larger block of pixels (e.g., 2x2, 4x4), significantly reducing the computational load.<ref name="MicrosoftVRS">Microsoft DirectX Dev Blog. (2019-03-19). "DirectX 12 Variable Rate Shading". Retrieved April 25, 2025.</ref><ref name="KhronosVRS">Khronos Group. "Vulkan API Specification - VK_KHR_fragment_shading_rate". Retrieved April 25, 2025.</ref> APIs like [[DirectX 12 Ultimate]], [[Vulkan]], and [[OpenGL]] extensions provide interfaces for VRS.<ref name="MicrosoftVRS"/><ref name="KhronosVRS"/> Developers specify a "shading rate map" or use geometric primitives to define which parts of the screen receive full or reduced shading effort. FFR often uses a simple, static map favoring the center.<ref name="Vaidyanathan2014">Vaidyanathan, K., Salvi, M., Toth, R., Foley, T., Akenine-Möller, T., Nilsson, J., Munkberg, J., Hasselgren, J., Sugihara, M., Clarberg, P., & Lefohn, A. (2014). Coarse pixel shading. In High Performance Graphics, 9-18.</ref>
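To make this concrete, the sketch below builds a byte-per-tile shading rate map with a center-weighted pattern, using the 4-bit <code>(log2(width) << 2) | log2(height)</code> rate encoding shared by Direct3D 12 VRS and Vulkan's fragment shading rate attachment. The tile size and ring radii are illustrative assumptions; the actual pixels-per-texel granularity is hardware-dependent and must be queried from the API.

<syntaxhighlight lang="cpp">
#include <cmath>
#include <cstdint>
#include <vector>

// Encodes a fragment size (w x h pixels per shading result) as
// (log2(w) << 2) | log2(h): 1x1 -> 0x0, 2x2 -> 0x5, 4x4 -> 0xA.
constexpr uint8_t EncodeRate(uint32_t log2w, uint32_t log2h) {
    return static_cast<uint8_t>((log2w << 2) | log2h);
}

// Builds a one-byte-per-tile rate map for a fixed, center-weighted pattern.
// tile = pixels covered by one map texel; 16 is only an example value.
std::vector<uint8_t> BuildFfrRateMap(int screenW, int screenH, int tile = 16) {
    const int tilesX = (screenW + tile - 1) / tile;
    const int tilesY = (screenH + tile - 1) / tile;
    std::vector<uint8_t> map(static_cast<size_t>(tilesX) * tilesY);

    for (int ty = 0; ty < tilesY; ++ty) {
        for (int tx = 0; tx < tilesX; ++tx) {
            // Normalized distance of the tile center from the screen center.
            float nx = (tx + 0.5f) / tilesX * 2.0f - 1.0f;
            float ny = (ty + 0.5f) / tilesY * 2.0f - 1.0f;
            float r  = std::sqrt(nx * nx + ny * ny);

            uint8_t rate;
            if (r < 0.4f)      rate = EncodeRate(0, 0);  // foveal: 1x1
            else if (r < 0.8f) rate = EncodeRate(1, 1);  // mid ring: 2x2
            else               rate = EncodeRate(2, 2);  // periphery: 4x4
            map[static_cast<size_t>(ty) * tilesX + tx] = rate;
        }
    }
    return map;
}
</syntaxhighlight>

The resulting buffer would be uploaded as the shading-rate image or attachment that the API samples during rasterization.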
===Multi-Resolution Rendering===
This technique divides the display into regions that are rendered to separate intermediate [[render target]]s at different resolutions; the lower-resolution peripheral targets are then upscaled and composited with the high-resolution center render. This approach reduces both shading and geometry processing load and can offer significant performance benefits, but it may exhibit more noticeable boundaries or transitions between zones if not carefully implemented.<ref name="Swafford2016">Swafford, N. T., Iglesias-Guitian, J. A., Koniaris, C., Moon, B., Cosker, D., & Mitchell, K. (2016). User, metric, and computational evaluation of foveated rendering methods. Proceedings of the ACM Symposium on Applied Perception, 7-14.</ref>
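As a rough CPU-side sketch of the compositing step (real implementations do this on the GPU with filtered sampling, and all names here are illustrative), the periphery is upscaled from a half-resolution buffer with nearest-neighbor sampling and the full-resolution center is written on top:

<syntaxhighlight lang="cpp">
#include <algorithm>
#include <cstdint>
#include <vector>

struct Image {
    int w = 0, h = 0;
    std::vector<uint32_t> px;  // packed RGBA pixels
    Image(int w_, int h_) : w(w_), h(h_), px(static_cast<size_t>(w_) * h_) {}
};

// Composites a half-resolution peripheral render under a full-resolution
// central region whose top-left corner is (cx0, cy0) in output coordinates.
Image CompositeFoveated(const Image& periphery,  // ~(outW/2) x (outH/2)
                        const Image& center,     // full-res central region
                        int outW, int outH, int cx0, int cy0) {
    Image out(outW, outH);
    // Nearest-neighbor upscale of the half-resolution periphery.
    for (int y = 0; y < outH; ++y) {
        for (int x = 0; x < outW; ++x) {
            int sx = std::min(x / 2, periphery.w - 1);
            int sy = std::min(y / 2, periphery.h - 1);
            out.px[static_cast<size_t>(y) * outW + x] =
                periphery.px[static_cast<size_t>(sy) * periphery.w + sx];
        }
    }
    // Overwrite the center with the full-resolution render.
    for (int y = 0; y < center.h; ++y)
        for (int x = 0; x < center.w; ++x)
            out.px[static_cast<size_t>(cy0 + y) * outW + (cx0 + x)] =
                center.px[static_cast<size_t>(y) * center.w + x];
    return out;
}
</syntaxhighlight>

In practice the upscale uses filtered sampling and blends across zone borders to soften the transitions mentioned above.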
===Lens-Matched Shading===
A specialized form of foveated rendering primarily for VR that accounts for the inherent distortion introduced by [[VR headset]] lenses. Since VR lenses naturally compress the pixel density towards the periphery when projecting onto the curved lens surface, lens-matched shading aligns the foveation pattern (reducing shading rate or resolution) with this optical distortion. This synergy avoids redundant rendering effort in areas already optically compressed, maximizing efficiency.<ref name="Vlachos2016">Vlachos, A. (2016). Advanced VR Rendering. Game Developers Conference 2016.</ref>

Developers often have control over the intensity of FFR, typically offering profiles (e.g., "Low", "Medium", "High") which adjust the size of the high-quality central region and the degree of quality reduction in the periphery.<ref name="MetaFFR"/> This allows balancing performance gains against potential visual artifacts.
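Such a quality profile often reduces to a small set of zone parameters; the presets below are purely illustrative values, not those of any shipping runtime:

<syntaxhighlight lang="cpp">
// Hypothetical intensity presets: higher levels shrink the full-quality
// center and coarsen the periphery. All values are illustrative only.
struct FfrPreset {
    float fovealRadius;  // normalized radius of the full-quality zone
    float midRadius;     // normalized outer edge of the middle ring
    int midRate;         // pixels per shading result (per axis) in the ring
    int outerRate;       // pixels per shading result (per axis) outside it
};

constexpr FfrPreset kFfrLow    {0.50f, 0.80f, 2, 2};
constexpr FfrPreset kFfrMedium {0.40f, 0.70f, 2, 4};
constexpr FfrPreset kFfrHigh   {0.30f, 0.60f, 4, 4};
</syntaxhighlight>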
==Perceptual Considerations and Artifacts==

===Visual Acuity Distribution===
As mentioned, human visual acuity varies dramatically across the visual field. The fovea covers only about 1-2 degrees but contains the highest concentration of cone cells. Acuity drops significantly within a few degrees from the center and continues to decrease toward the periphery.<ref name="Strasburger2011"/>
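One simple first-order model used in the foveated rendering literature expresses the minimum angle of resolution (MAR) as a linear function of eccentricity <math>e</math>:

<math>\omega(e) = \omega_0 \left(1 + \frac{e}{e_2}\right)</math>

where <math>\omega_0</math> is the foveal MAR (on the order of one arcminute for normal vision) and <math>e_2</math> is the eccentricity at which the MAR doubles (a few degrees; exact constants vary across studies). Under such a model, FFR zone boundaries can be chosen so that the effective pixel size in each ring stays below <math>\omega(e)</math> at that ring's eccentricity, though a fixed pattern must remain conservative because the true eccentricity depends on where the user is actually looking.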
===Contrast Sensitivity and Motion Detection===
While resolution sensitivity decreases in peripheral vision, other forms of visual sensitivity, particularly to motion and high contrast, remain relatively effective in the periphery. Effective FFR implementations must account for these factors to avoid creating easily noticeable artifacts, especially during movement.<ref name="Tursun2019">Tursun, O. T., Arabadzhiyska-Koleva, E., Wernikowski, M., Mantiuk, R., Seidel, H. P., Myszkowski, K., & Didyk, P. (2019). Luminance-contrast-aware foveated rendering. ACM Transactions on Graphics, 38(4), 98:1-98:14.</ref>

===Potential Artifacts===
Depending on the aggressiveness of the FFR settings and the implementation technique, users might perceive artifacts, especially if looking off-center. Common artifacts include:
* '''Shimmering:''' Temporal instability or aliasing in the lower-resolution regions.
* '''Blurring:''' Noticeable loss of detail in the peripheral regions, particularly when the user glances away from the center.
* '''Visible borders:''' Perceptible transitions or seams between quality zones.
Techniques such as [[temporal anti-aliasing]] (TAA), careful tuning of zone transitions, adaptive contrast preservation, and ensuring compatibility with [[post-processing (video)|post-processing]] effects help minimize these artifacts.<ref name="Albert2017">Albert, R., Patney, A., Luebke, D., & Kim, J. (2017). Latency requirements for foveated rendering in virtual reality. ACM Transactions on Applied Perception, 14(4), 25:1-25:13.</ref>

==Advantages and Disadvantages==

===Advantages===
* '''Improved Performance:''' The primary benefit is a significant reduction in the GPU workload, particularly fragment shading costs, which are often a bottleneck in high-resolution VR/AR rendering. This leads to higher and more stable [[frame rate]]s.<ref name="QualcommFFRDef"/> Consistent frame rates are crucial for VR to prevent [[motion sickness]]. Performance gains can range from 30-60% depending on the implementation and intensity.<ref name="Kim2019Perf">Kim, J., Toma, S., & Varshney, A. (2019). Perceptual model for adaptive local shading and refresh rate. IEEE VR 2019.</ref>
* '''Increased Visual Fidelity Potential:''' By saving GPU resources in the periphery, developers can potentially increase the baseline rendering resolution or enable more demanding graphical effects in the central, high-quality region.
* '''Reduced Power Consumption:''' Lower GPU load translates into longer battery life and less heat on standalone and mobile devices.
* '''No Special Hardware Required:''' Unlike DFR, FFR works on any headset without eye-tracking sensors, keeping device cost and complexity down.
* '''Compatibility:''' Can often be integrated into existing [[rendering pipeline]]s without complete rewrites, especially with modern API support for VRS.

===Disadvantages and Limitations===
* '''Fixed High-Quality Region:''' The main limitation is that the high-resolution area doesn't follow the user's actual gaze. If the user looks towards the periphery using their eyes (without turning their head), they may notice the lower resolution or shading quality, which can be distracting or break immersion.<ref name="UploadVRFFRvsDFR">UploadVR. (2022-07-15). "What Is Foveated Rendering For VR & How Does It Work?". Retrieved April 25, 2025.</ref>
* '''Potential Artifacts:''' As mentioned previously, aggressive FFR settings can lead to noticeable visual artifacts (blurring, shimmering, borders), especially in the transition zones or during fast motion.
* '''Conservative Optimization:''' Because the high-quality zone must be large enough to cover likely gaze positions, the periphery cannot be degraded as aggressively as with eye-tracked DFR, limiting the maximum performance gain.
* '''Tuning Required:''' The optimal FFR settings (zone sizes, quality levels) can vary significantly between applications, headsets (due to FoV and lens differences), and user sensitivity. Developers need to carefully tune these parameters.

==Comparison to Dynamic Foveated Rendering==
[[Dynamic Foveated Rendering]] (DFR), also known as Gaze-Contingent Rendering, uses integrated eye-tracking sensors within the VR/AR device to precisely determine where the user is looking in real-time. The high-quality rendering region is then dynamically moved to match the user's gaze position frame by frame.<ref name="TobiiDFR">Tobii. "Foveated rendering". Retrieved April 25, 2025.</ref> FFR is often compared to DFR:
{| class="wikitable"
! Aspect !! Fixed Foveated Rendering (FFR) !! Dynamic Foveated Rendering (DFR)
|-
| High-quality region || Fixed at the center of the display || Follows the user's gaze
|-
| Eye-tracking hardware || Not required || Required
|-
| Peripheral quality reduction || Conservative, to avoid visible artifacts when glancing off-center || Can be more aggressive
|-
| Implementation cost and complexity || Lower || Higher (sensors, calibration, latency compensation)
|}
DFR offers the potential for greater performance gains and a more consistently high-quality visual experience because the peripheral region can often be reduced in quality more aggressively. However, it necessitates the added cost and complexity of eye-tracking hardware and the associated software stack, including latency compensation. FFR provides a significant portion of the benefits with much lower implementation overhead and cost.

==Hardware, Software Support, and Applications==
FFR is widely supported and utilized across various VR/AR platforms and applications, particularly where performance optimization is critical:

===Standalone VR Headsets===
Devices like the [[Meta Quest]], Quest 2, [[Meta Quest 3]], [[Pico Neo 3 Link|Pico Neo 3]], [[Pico 4]], and [[HTC Vive Focus]] series heavily rely on FFR (often using VRS via their mobile chipsets like the [[Snapdragon XR2]]) to achieve acceptable performance and battery life.<ref name="MetaFFR"/><ref name="Carmack2018">Carmack, J. (2018). Oculus Connect 5 Keynote. Oculus Connect Conference.</ref> Oculus/Meta introduced FFR support starting with the [[Oculus Go]].<ref name="James2018Go">James, Paul. (2018-03-21). "Oculus Go to Offer Fixed Foveated Rendering and Up to 72Hz Refresh Rate". Road to VR. Retrieved April 25, 2025.</ref>

===Console and PC VR===
While high-end PCs have more rendering power, FFR can still be beneficial for pushing higher resolutions or graphical settings. Support is often provided through [[GPU]] drivers and VR runtimes like [[SteamVR]] and the [[Oculus PC SDK]]. Some console VR implementations, like select titles on [[PlayStation VR]] (especially with [[PlayStation 4 Pro]]), utilized forms of FFR.<ref name="Devine2018PSVR">Devine, R. (2018). PlayStation VR Performance Enhancements. SIGGRAPH 2018.</ref> Modern PC GPUs supporting VRS enable efficient FFR implementations.

===Mobile VR/AR Platforms===
Mobile platforms like the former [[Google Daydream]] utilized FFR.<ref name="GoogleDaydream2018">Google VR (2018). Daydream View Technical Documentation.</ref> Mobile AR platforms and SDKs such as [[ARCore]] (Google) and [[ARKit]] (Apple) incorporate optimization techniques that can include or are analogous to FFR to manage performance and power consumption on smartphones and tablets.<ref name="AppleARKit2019">Apple Developer. (2019). Metal Performance Shaders for Augmented Reality. WWDC 2019.</ref>

===Game Engines===
Major engines like [[Unreal Engine]] and [[Unity (game engine)|Unity]] provide built-in support or plugins for implementing FFR, often leveraging underlying VRS capabilities from graphics APIs.<ref name="UnrealVRS">Unreal Engine Documentation. "Variable Rate Shading (VRS)". Retrieved April 25, 2025.</ref><ref name="UnityVRS">Unity Documentation. "Unity XR - Foveated rendering". Retrieved April 25, 2025.</ref>

===Graphics APIs===
Modern APIs are crucial enablers. [[DirectX 12 Ultimate]] (Variable Rate Shading API), [[Vulkan]] (VK_KHR_fragment_shading_rate and VK_NV_shading_rate_image extensions), [[Metal (API)|Metal]] (rasterization rate maps), and [[OpenGL]] (via extensions) provide the necessary functionality for efficient FFR implementation.<ref name="MicrosoftVRS"/><ref name="KhronosVRS"/><ref name="KhronosNVExt">Khronos Group. (2020). Vulkan Specification: VK_NV_shading_rate_image extension.</ref>
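As a minimal example of the Vulkan path, a renderer using VK_KHR_fragment_shading_rate can set a coarse per-draw rate for peripheral geometry directly on a command buffer. The sketch assumes the extension is enabled, the entry point has been resolved by the loader, and the command buffer is in the recording state; a full FFR setup would instead bind a rate attachment (such as the map built earlier) so the rate varies across the screen.

<syntaxhighlight lang="cpp">
#include <vulkan/vulkan.h>

// Requests one shading result per 4x4 pixel block for subsequent draws.
// Assumes VK_KHR_fragment_shading_rate is enabled on the device and that
// `cmd` is currently being recorded.
void SetCoarsePeripheralRate(VkCommandBuffer cmd) {
    VkExtent2D fragmentSize{4, 4};
    // Combiner ops control how pipeline, primitive, and attachment rates
    // merge; KEEP leaves the result of the previous stage unchanged.
    VkFragmentShadingRateCombinerOpKHR ops[2] = {
        VK_FRAGMENT_SHADING_RATE_COMBINER_OP_KEEP_KHR,
        VK_FRAGMENT_SHADING_RATE_COMBINER_OP_KEEP_KHR,
    };
    vkCmdSetFragmentShadingRateKHR(cmd, &fragmentSize, ops);
}
</syntaxhighlight>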
===Primary Uses===
FFR is primarily used in [[Rendering (computer graphics)|rendering]] real-time [[Computer graphics|graphics]] for:
* [[Virtual Reality]] games and applications
* [[Augmented Reality]] experiences on headsets and mobile devices
* [[3D rendering]] applications where performance on constrained hardware (like mobile devices) is critical.

==Future Developments==
Research and development continue to refine foveated rendering techniques:
* '''Hybrid Approaches:''' Systems are being explored that combine fixed and dynamic foveated rendering, potentially using eye tracking when available but falling back to fixed methods when tracking is lost or unavailable, or using fixed foveation as a baseline further enhanced by eye tracking.<ref name="Langbehn2020">Langbehn, E., Rausch, D., Steinicke, F., & Bruder, G. (2020). Hybrid foveated rendering: combining progressive meshes and fixed foveated rendering for VR. ACM SIGGRAPH 2020 Posters, 1-2.</ref>
* '''Content-Aware Foveation:''' Future FFR might adapt the foveation pattern based on real-time content analysis, potentially emphasizing important visual elements (like text or faces) even if they are slightly off-center, without needing full eye tracking.<ref name="Lungaro2018">Lungaro, P., Sjöberg, R., Valero, A. J. F., Mittal, A., & Tollmar, K. (2018). Gaze-aware streaming solutions for the next generation of mobile VR experiences. IEEE Transactions on Visualization and Computer Graphics, 24(4), 1535-1544.</ref>

==Implementation Considerations==
When implementing FFR, developers should consider several factors to maximize benefits and minimize drawbacks:
* '''[[Field of View]] (FoV) Coverage:''' Adjust foveation zone sizes and intensity based on the target headset's FoV. Wider FoV headsets might require different tuning than narrower ones (see the sketch after this list).
* '''User Comfort and Testing:''' Balance performance gains against potential user discomfort, eyestrain, or noticeable artifacts. User testing is crucial to find acceptable trade-offs.<ref name="Bailey2009">Bailey, R., McNamara, A., Sudarsanam, N., & Grimm, C. (2009). Subtle gaze direction. ACM Transactions on Graphics, 28(4), 100:1-100:14.</ref>
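For the FoV tuning point above, one simple approach is to specify the foveal zone in degrees of visual angle and convert it to a normalized viewport radius per device. The helper below assumes an idealized symmetric pinhole projection; real HMD lenses distort this mapping, so the result is only a starting point for per-device tuning.

<syntaxhighlight lang="cpp">
#include <cmath>

// Maps a foveal zone half-angle (degrees from the view center) to a
// normalized viewport radius (0 = center, 1 = horizontal edge), assuming
// an idealized symmetric pinhole projection.
float FovealRadiusNormalized(float zoneHalfAngleDeg, float horizontalFovDeg) {
    const float kDegToRad = 0.01745329252f;
    return std::tan(zoneHalfAngleDeg * kDegToRad) /
           std::tan(horizontalFovDeg * 0.5f * kDegToRad);
}
// Example: a 15 degree half-angle on a 100 degree FoV headset gives
// tan(15)/tan(50), roughly 0.27 / 1.19, i.e. about 0.22 of the half-width.
</syntaxhighlight>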
==See Also==
* [[Foveated rendering]]
* [[Dynamic Foveated Rendering]]
* [[Post-processing (video)|Post-processing]]

==References==
<references />