{{see also|Terms|Technical Terms}}
'''Fixed foveated rendering''' ('''FFR''') is a performance optimization technique used primarily in [[Virtual Reality]] (VR) and [[Augmented Reality]] (AR) applications. It exploits the characteristics of the [[human visual system]], namely the high acuity in the center of vision (the [[fovea]]) and the much lower acuity in [[peripheral vision]], to reduce the rendering workload on the [[Graphics Processing Unit]] (GPU) without significantly impacting perceived visual quality.<ref name="QualcommFFRDef">{{cite web |title=The Evolution of High-Performance Foveated Rendering on Adreno |url=https://www.qualcomm.com/developer/blog/2021/07/evolution-high-performance-foveated-rendering-adreno |website=Qualcomm Developer Network |date=7 July 2021 |access-date=25 April 2025}}</ref><ref name="MetaFFR">{{cite web |title=Save GPU with Eye-Tracked Foveated Rendering |url=https://developers.meta.com/horizon/blog/save-gpu-with-eye-tracked-foveated-rendering/ |website=Meta Quest Developer Blog |date=19 October 2022 |access-date=25 April 2025}}</ref> Unlike [[Dynamic Foveated Rendering]] (DFR), which uses [[eye tracking]] to move the high-quality region with the user's gaze, FFR renders a ''fixed'' area, typically the center of the display, at the highest quality and progressively reduces rendering quality towards the edges.<ref name="Patney2016">{{cite journal |last1=Patney |first1=A. |last2=Salvi |first2=M. |last3=Kim |first3=J. |last4=Kaplanyan |first4=A. |last5=Wyman |first5=C. |last6=Benty |first6=N. |last7=Luebke |first7=D. |last8=Lefohn |first8=A. |title=Towards foveated rendering for gaze-tracked virtual reality |journal=ACM Transactions on Graphics |volume=35 |issue=6 |pages=179:1–179:12 |year=2016 |doi=10.1145/2980179.2980246}}</ref> By allocating rendering resources more efficiently, FFR reduces the [[computational load]] on the GPU, enabling higher [[frame rate]]s, reduced [[latency (engineering)|latency]], and longer battery life in mobile VR/AR systems.<ref name="Bastani2017">{{cite journal |last1=Bastani |first1=B. |last2=Turner |first2=E. |last3=Vieri |first3=C. |last4=Jiang |first4=H. |last5=Funt |first5=B. |last6=Balram |first6=N. |title=Foveated pipeline for AR/VR head-mounted displays |journal=Information Display |volume=33 |issue=6 |pages=14–35 |year=2017}}</ref>
== How it works ==
The human eye does not perceive detail uniformly across its [[field of view]]. The [[fovea]], a small pit in the [[retina]] densely packed with [[cone cell]]s, is responsible for sharp central vision (e.g., reading or focusing on fine detail). Visual acuity is highest in this central region, which covers only about 1–2 degrees of the [[visual field]], and drops significantly in the surrounding areas. As cone density falls towards the edges, [[peripheral vision]], while more sensitive to motion and contrast, has much lower spatial resolution.<ref name="HumanVisionBasics">{{cite web |title=Facts and Figures Concerning the Human Retina |website=Webvision, University of Utah |access-date=25 April 2025}}</ref><ref name="Strasburger2011">{{cite journal |last1=Strasburger |first1=H. |last2=Rentschler |first2=I. |last3=Jüttner |first3=M. |title=Peripheral vision and pattern recognition: a review |journal=Journal of Vision |volume=11 |issue=5 |pages=13 |year=2011 |doi=10.1167/11.5.13}}</ref>

Fixed foveated rendering exploits this biological characteristic. The [[rendering pipeline]] renders the central region of the image, where the user is assumed to be looking most of the time (typically straight ahead), at full resolution and shading quality, while one or more concentric "rings" around it are rendered at progressively lower quality.<ref name="NvidiaVRS">{{cite web |title=Variable Rate Shading (VRS) |url=https://developer.nvidia.com/vrworks/graphics/variablerateshading |website=NVIDIA Developer |access-date=25 April 2025}}</ref> Because fragment shading is often the dominant cost in high-resolution VR rendering, reallocating it in this way can yield fragment-shader savings of roughly 34–65% under typical loads.<ref name="Jindal2021">{{cite journal |last1=Jindal |first1=A. |title=Perceptual model for adaptive local shading and refresh rate |journal=ACM Transactions on Graphics |volume=40 |issue=6 |pages=280:1–280:18 |year=2021 |doi=10.1145/3478513.3480514}}</ref>
=== Resolution zones ===
FFR typically divides the [[viewport]] into two to four distinct zones, rendered at different quality levels:
* '''Foveal zone:''' the central area, rendered at full resolution (1× shading rate).
* '''Parafoveal zone:''' an intermediate area surrounding the center, rendered with a moderate resolution or shading reduction (e.g., 1/2× or 1/4× rate).
* '''Peripheral zone:''' the outer area, rendered with a significant resolution or shading reduction (e.g., 1/8× or 1/16× rate).
* '''Far-peripheral zone:''' (sometimes implemented) the extreme edges, rendered at the lowest quality.
The boundaries between these zones and the specific reduction factors are carefully tuned per title and per headset to balance performance gains against potential visual artifacts.<ref name="Meng2020">{{cite journal |last1=Meng |first1=X. |last2=Du |first2=R. |last3=Varshney |first3=A. |title=Eye-dominance-guided Foveated Rendering |journal=IEEE Transactions on Visualization and Computer Graphics |volume=26 |issue=5 |pages=1972–1980 |year=2020 |doi=10.1109/TVCG.2020.2973053}}</ref> The key aspect of FFR is that these regions are ''fixed'' relative to the display or viewport, on the assumption that the user's fovea is generally aligned with the center.
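
In code, such a zone layout reduces to a simple radial lookup. The following C++ sketch is purely illustrative; the four-zone split and the boundary values are assumptions chosen for this example, not parameters from any particular SDK:

<syntaxhighlight lang="cpp">
#include <cmath>

// Illustrative FFR zone classification. `r` is the normalized distance of a
// pixel (or tile) from the view center: 0 at the center, 1 at the screen
// edge. The boundary values are hypothetical tuning parameters.
enum class Zone { Foveal, Parafoveal, Peripheral, FarPeripheral };

Zone classifyZone(float r) {
    if (r < 0.35f) return Zone::Foveal;      // full quality
    if (r < 0.60f) return Zone::Parafoveal;  // e.g., 1/2x shading rate
    if (r < 0.85f) return Zone::Peripheral;  // e.g., 1/8x shading rate
    return Zone::FarPeripheral;              // lowest quality
}

float normalizedRadius(int x, int y, int width, int height) {
    // Map pixel coordinates to [-1, 1] per axis, then take the Euclidean
    // distance from the center of the viewport.
    float dx = (x + 0.5f) / width  * 2.0f - 1.0f;
    float dy = (y + 0.5f) / height * 2.0f - 1.0f;
    return std::sqrt(dx * dx + dy * dy);
}
</syntaxhighlight>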
== Technical implementation ==
FFR is typically implemented using specific [[GPU]] features and graphics [[API]]s. Several technical approaches exist.
=== Variable Rate Shading (VRS) ===
This is the most common and efficient modern technique. VRS allows developers to specify the rate at which pixel shaders (also known as [[fragment shader]]s) are executed for different regions of the screen: in the peripheral zones defined by FFR, a single shader invocation can color a larger block of pixels (e.g., 2×2 or 4×4), significantly reducing the computational load.<ref name="MicrosoftVRS">{{cite web |title=Variable Rate Shading: a scalpel in a world of sledgehammers |url=https://devblogs.microsoft.com/directx/variable-rate-shading-a-scalpel-in-a-world-of-sledgehammers/ |website=Microsoft DirectX Dev Blog |date=18 March 2019 |access-date=25 April 2025}}</ref> Hardware support includes:
* [[NVIDIA]] GPUs from the Turing architecture onwards.<ref name="NvidiaVRS"/>
* [[AMD]] GPUs from the RDNA 2 architecture (RX 6000 / Radeon PRO W6000 series) onwards.<ref name="AMD_RDNA2">{{cite web |title=A Foundation for High-Performing Graphics – AMD RDNA 2 Architecture |url=https://www.amd.com/content/dam/amd/en/documents/products/graphics/workstation/rdna2-explained-radeon-pro-W6000.pdf |website=AMD |year=2021 |access-date=25 April 2025}}</ref>
* Mobile [[Qualcomm]] Adreno GPUs, via Vulkan extensions.<ref name="QualcommFFRDef"/>
API support is available in [[DirectX 12 Ultimate]],<ref name="MicrosoftVRS"/> [[Vulkan]] (the ''VK_KHR_fragment_shading_rate'' extension),<ref name="KhronosVRS">{{cite web |title=VK_KHR_fragment_shading_rate |url=https://registry.khronos.org/vulkan/specs/latest/man/html/VK_KHR_fragment_shading_rate.html |website=Khronos Registry |access-date=25 April 2025}}</ref> Apple's [[Metal (API)|Metal]] (rasterization rate maps), and [[OpenGL]] extensions. Developers specify a "shading rate map" or use geometric primitives to define which parts of the screen receive full or reduced shading effort; FFR typically uses a simple, static map favouring the center.<ref name="Vaidyanathan2014">{{cite conference |last1=Vaidyanathan |first1=K. |last2=Salvi |first2=M. |last3=Toth |first3=R. |last4=Foley |first4=T. |last5=Akenine-Möller |first5=T. |last6=Nilsson |first6=J. |last7=Munkberg |first7=J. |last8=Hasselgren |first8=J. |title=Coarse pixel shading |book-title=High Performance Graphics |pages=9–18 |year=2014}}</ref>
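
As a concrete illustration, such a static map might be filled once on the CPU at startup. The sketch below assumes the 8-bit rate encoding <code>(log2(width) << 2) | log2(height)</code> shared by D3D12 VRS tier 2 and the Vulkan fragment-shading-rate attachment, plus hypothetical zone radii; a real implementation would query the tile size and the supported rates from the API:

<syntaxhighlight lang="cpp">
#include <cmath>
#include <cstdint>
#include <vector>

// Builds a fixed foveated shading-rate map. Each byte covers one hardware
// tile (e.g., 8x8 or 16x16 pixels) and encodes the fragment size as
// (log2(width) << 2) | log2(height), the layout used by both D3D12 VRS
// tier 2 and VK_KHR_fragment_shading_rate attachments.
constexpr uint8_t kRate1x1 = 0x0; // full shading rate
constexpr uint8_t kRate2x2 = 0x5; // one invocation per 2x2 pixel block
constexpr uint8_t kRate4x4 = 0xA; // one invocation per 4x4 pixel block

std::vector<uint8_t> buildFfrRateMap(int tilesX, int tilesY,
                                     float fovealRadius,       // e.g., 0.4f
                                     float parafovealRadius) { // e.g., 0.7f
    std::vector<uint8_t> map(static_cast<size_t>(tilesX) * tilesY, kRate1x1);
    for (int y = 0; y < tilesY; ++y) {
        for (int x = 0; x < tilesX; ++x) {
            // Normalized distance of this tile from the image center.
            float dx = (x + 0.5f) / tilesX * 2.0f - 1.0f;
            float dy = (y + 0.5f) / tilesY * 2.0f - 1.0f;
            float r  = std::sqrt(dx * dx + dy * dy);
            if (r > parafovealRadius)  map[y * tilesX + x] = kRate4x4;
            else if (r > fovealRadius) map[y * tilesX + x] = kRate2x2;
        }
    }
    return map; // uploaded once as the shading-rate image; fixed thereafter
}
</syntaxhighlight>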
=== Multi-resolution rendering ===
This technique renders peripheral areas to lower-resolution intermediate [[render target]]s, then upscales and composites them with the high-resolution center render. It reduces both shading and geometry-processing load and can offer significant performance benefits, but may exhibit more noticeable boundaries between zones if not carefully implemented.<ref name="Swafford2016">{{cite conference |last1=Swafford |first1=N. T. |last2=Iglesias-Guitian |first2=J. A. |last3=Koniaris |first3=C. |last4=Moon |first4=B. |last5=Cosker |first5=D. |last6=Mitchell |first6=K. |title=User, metric, and computational evaluation of foveated rendering methods |book-title=Proceedings of the ACM Symposium on Applied Perception |pages=7–14 |year=2016}}</ref> NVIDIA shipped a vendor implementation of this approach as ''Multi-Res Shading''.<ref name="NvidiaMRS">{{cite web |title=Multi-Res Shading |url=https://developer.nvidia.com/vrworks/graphics/multiresshading |website=NVIDIA Developer |access-date=25 April 2025}}</ref>
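
The composite step can be illustrated in isolation. The following self-contained sketch (buffer layout and function name are inventions for this example, not an engine API) upscales a half-resolution peripheral image and copies a full-resolution center inset over it; the two input images would come from separate render passes:

<syntaxhighlight lang="cpp">
#include <cstdint>
#include <vector>

using Image = std::vector<uint32_t>; // RGBA8 pixels, row-major

// Composites a half-resolution periphery with a full-resolution center
// inset. The caller guarantees that the inset fits within the full frame.
Image compositeMultiRes(const Image& periphery, int halfW, int halfH,
                        const Image& center, int insetW, int insetH) {
    const int fullW = halfW * 2, fullH = halfH * 2;
    Image out(static_cast<size_t>(fullW) * fullH);

    // Nearest-neighbour 2x upscale of the peripheral layer.
    for (int y = 0; y < fullH; ++y)
        for (int x = 0; x < fullW; ++x)
            out[y * fullW + x] = periphery[(y / 2) * halfW + (x / 2)];

    // Overlay the full-resolution inset, centered in the frame. A real
    // implementation would blend across the seam to hide the boundary.
    const int ox = (fullW - insetW) / 2, oy = (fullH - insetH) / 2;
    for (int y = 0; y < insetH; ++y)
        for (int x = 0; x < insetW; ++x)
            out[(oy + y) * fullW + (ox + x)] = center[y * insetW + x];

    return out;
}
</syntaxhighlight>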
=== Lens-matched shading ===
A specialized form of foveated rendering, primarily for VR, that accounts for the distortion introduced by [[VR headset]] lenses. Because the lens projection naturally compresses pixel density towards the periphery, lens-matched shading aligns the foveation pattern (reduced shading rate or resolution) with this optical distortion, avoiding redundant rendering effort in areas that are already optically compressed.<ref name="Vlachos2016">{{cite web |last=Vlachos |first=Alex |title=Advanced VR Rendering Performance |url=https://media.steampowered.com/apps/valve/2016/Alex_Vlachos_Advanced_VR_Rendering_Performance_GDC2016.pdf |website=Game Developers Conference 2016 |access-date=25 April 2025}}</ref>
Developers often have control over the intensity of FFR, typically through profiles (e.g., "Low", "Medium", "High") that adjust the size of the high-quality central region and the degree of quality reduction in the periphery, allowing performance gains to be balanced against potential visual artifacts.<ref name="MetaFFR"/>
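
In practice, such profiles often reduce to a small set of tunable parameters. The values below are invented for illustration and do not correspond to any vendor's presets:

<syntaxhighlight lang="cpp">
// Hypothetical FFR intensity presets. Radii are normalized distances from
// the view center; the rate is a per-axis shading divisor (2 = 2x2 blocks).
struct FfrProfile {
    float fovealRadius;      // edge of the full-quality zone
    float parafovealRadius;  // edge of the intermediate zone
    int   peripheralRate;    // coarse rate used beyond parafovealRadius
};

constexpr FfrProfile kFfrLow    {0.60f, 0.85f, 2}; // mild savings, few artifacts
constexpr FfrProfile kFfrMedium {0.45f, 0.75f, 4};
constexpr FfrProfile kFfrHigh   {0.30f, 0.65f, 4}; // max savings, most visible
</syntaxhighlight>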
== Perceptual considerations and artifacts ==
=== Visual acuity distribution ===
As noted above, human visual acuity varies dramatically across the visual field: it is concentrated in the central 1–2 degrees, where cone density is highest, and declines steeply towards the periphery.<ref name="Strasburger2011"/> FFR's quality zones are, in effect, a coarse approximation of this acuity falloff.
=== Contrast sensitivity and motion detection ===
While resolution sensitivity decreases in peripheral vision, sensitivity to motion and high contrast remains relatively strong there. Effective FFR implementations must account for these factors to avoid creating easily noticeable artifacts, especially during movement; overly aggressive settings can introduce shimmering or visible zone boundaries.<ref name="Tursun2019">{{cite journal |last1=Tursun |first1=O. T. |last2=Arabadzhiyska-Koleva |first2=E. |last3=Wernikowski |first3=M. |last4=Mantiuk |first4=R. |last5=Seidel |first5=H.-P. |last6=Myszkowski |first6=K. |last7=Didyk |first7=P. |title=Luminance-contrast-aware foveated rendering |journal=ACM Transactions on Graphics |volume=38 |issue=4 |pages=98:1–98:14 |year=2019}}</ref>
=== Potential artifacts ===
Depending on the aggressiveness of the FFR settings and the implementation technique, users might perceive artifacts, especially if looking off-center. Common artifacts include:
* '''Shimmering:''' temporal instability or aliasing in the lower-resolution regions.
* '''[[Aliasing]]:''' jagged edges, particularly noticeable in high-contrast areas of the periphery.
* '''Blurring:''' a loss of peripheral detail beyond what is naturally perceived.
* '''Border visibility:''' noticeable transitions or lines between the different resolution/shading zones.
* '''Blockiness:''' visible pixel blocks when very low shading rates (e.g., 4×4) are used.
Techniques such as [[temporal anti-aliasing]] (TAA), careful tuning of zone transitions, adaptive contrast preservation, and ensuring compatibility with [[post-processing (video)|post-processing]] effects help minimize these artifacts.<ref name="Albert2017">{{cite journal |last1=Albert |first1=R. |last2=Patney |first2=A. |last3=Luebke |first3=D. |last4=Kim |first4=J. |title=Latency requirements for foveated rendering in virtual reality |journal=ACM Transactions on Applied Perception |volume=14 |issue=4 |pages=25:1–25:13 |year=2017}}</ref>
== Advantages and disadvantages ==
=== Advantages ===
* '''Improved performance:''' The primary benefit is a significant reduction in GPU workload, particularly fragment-shading cost, which is often the bottleneck in high-resolution VR/AR rendering. This yields higher and more stable [[frame rate]]s, which are crucial in VR to prevent [[motion sickness]].<ref name="QualcommFFRDef"/> Measured fragment-shader savings of roughly 34–65% have been reported under typical loads.<ref name="Jindal2021"/>
* '''Increased visual-fidelity potential:''' GPU resources saved in the periphery can be spent on a higher baseline rendering resolution or more demanding graphical effects in the central, high-quality region.
* '''Reduced power consumption and heat:''' Less GPU computation translates directly into lower power draw and thermal load, which is especially important for battery-powered standalone headsets (e.g., the [[Meta Quest]] series) and AR glasses, extending usage time and improving comfort.<ref name="MetaFFR"/>
* '''Simpler implementation (vs. DFR):''' FFR requires neither eye-tracking hardware nor the complex software integration that comes with it (such as low-latency gaze prediction), making it easier and cheaper to deploy on a wide range of devices.
* '''Compatibility:''' FFR can often be integrated into existing [[rendering pipeline]]s without complete rewrites, especially with modern API support for VRS.
=== Disadvantages and limitations ===
* '''Fixed high-quality region:''' The main limitation is that the high-resolution area does not follow the user's actual gaze. If the user looks towards the periphery with their eyes alone (without turning their head), they may notice the lower resolution or shading quality, which can be distracting or break immersion.<ref name="UploadVRFFR">{{cite web |title=What Is Foveated Rendering For VR & How Does It Work? |url=https://uploadvr.com/foveated-rendering-explainer/ |website=UploadVR |date=15 July 2022 |access-date=25 April 2025}}</ref>
* '''Potential artifacts:''' Aggressive FFR settings can produce noticeable blurring, shimmering, or visible borders, especially in the transition zones or during fast motion.
* '''Suboptimal efficiency compared to DFR:''' Because the high-resolution area does not track the gaze, resources may still be spent rendering peripheral content at higher quality than necessary, or rendering the center at high quality while the user looks away. DFR can use a smaller high-quality region more effectively.
* '''Content-dependent effectiveness:''' FFR works best for experiences that naturally direct attention towards the center of the screen; it may be less suitable, or require careful tuning, for applications that place critical information in the periphery.
* '''Tuning required:''' Optimal settings (zone sizes, quality levels) vary significantly between applications, headsets (due to FoV and lens differences), and users, so developers must tune these parameters carefully.
== Comparison with Dynamic Foveated Rendering ==
[[Dynamic Foveated Rendering]] (DFR), also known as gaze-contingent rendering, uses eye-tracking sensors integrated into the VR/AR device to determine precisely where the user is looking in real time, and moves the high-quality region to match the gaze position frame by frame.<ref name="TobiiDFR">{{cite web |title=Foveated rendering |website=Tobii |access-date=25 April 2025}}</ref> The two techniques compare as follows:
{| class="wikitable"
|+ Comparison between fixed and dynamic foveated rendering
! Feature !! Fixed foveated rendering (FFR) !! Dynamic foveated rendering (DFR)
|-
| '''Mechanism''' || High quality in a fixed central region || High quality follows the user's gaze
|-
| '''[[Eye tracking]]''' || Not required || Required
|-
| '''Hardware requirements''' || Standard VR/AR display, capable GPU || Eye-tracking sensors, capable GPU, low-latency processing
|-
| '''Complexity''' || Lower (implementation and runtime) || Higher (hardware integration, calibration, gaze prediction, software stack)
|-
| '''Potential gains''' || Good (roughly 34–65% fragment-shader savings)<ref name="Jindal2021"/> || Potentially higher (up to ≈70%), as the periphery can be reduced more aggressively<ref name="Patney2016"/>
|-
| '''User experience''' || Lower quality may be noticed when looking off-center || High quality generally wherever the user looks (if tracking is accurate and low-latency)
|-
| '''Device cost''' || Lower impact || Higher (sensors, processing)
|-
| '''Adoption''' || Widespread (especially mobile/standalone) || Growing, primarily in higher-end headsets
|}
DFR offers the potential for greater performance gains and a more consistently high-quality visual experience, because the peripheral region can often be reduced in quality more aggressively. However, it necessitates the added cost and complexity of eye-tracking hardware and the associated software stack, including latency compensation. FFR provides a significant portion of the benefits with much lower implementation overhead and cost.
== Hardware, software support, and applications ==
FFR is widely supported across VR/AR platforms and applications, particularly where performance optimization is critical.
=== Standalone VR headsets ===
Devices such as the [[Meta Quest]], Quest 2, [[Meta Quest 3]], [[Pico Neo 3 Link|Pico Neo 3]], [[Pico 4]], and the [[HTC Vive Focus]] series rely heavily on FFR (often via VRS on mobile chipsets such as the [[Snapdragon XR2]]) to achieve acceptable performance and battery life, typically exposing user-selectable FFR levels.<ref name="MetaFFR"/><ref name="Carmack2018">{{cite conference |last=Carmack |first=John |title=Oculus Connect 5 Keynote |book-title=Oculus Connect |year=2018}}</ref> Oculus/Meta introduced FFR support with the [[Oculus Go]].<ref name="RoadToVRGo">{{cite web |last=James |first=Paul |title=Oculus Go to Offer Fixed Foveated Rendering and up to 72Hz Refresh Rate |url=https://www.roadtovr.com/gdc-2018-oculus-go-to-support-fixed-foveation-and-up-to-72hz-refresh/ |website=Road to VR |date=21 March 2018 |access-date=25 April 2025}}</ref>
=== Console and PC VR ===
While high-end PCs have more rendering headroom, FFR can still help push higher resolutions or graphical settings, with support typically provided through GPU drivers and VR runtimes such as [[SteamVR]] and the [[Oculus PC SDK]]. The original [[PlayStation VR]] offered no system-level FFR, but some titles (e.g., ''Farpoint'') implemented custom multi-resolution shading.<ref name="FarpointTech">{{cite web |title=A Look Inside Farpoint's Rendering Techniques for VR |url=https://www.impulsegear.com/a-look-inside-farpoints-rendering-techniques-for-vr |website=Impulse Gear Dev Blog |date=4 June 2018 |access-date=25 April 2025}}</ref> Modern PC GPUs supporting VRS enable efficient FFR implementations.
=== Mobile VR/AR platforms ===
Mobile platforms such as the former [[Google Daydream]] also utilized FFR.<ref name="GoogleDaydream2018">{{cite web |title=Daydream View Technical Documentation |website=Google VR |year=2018}}</ref> Mobile AR SDKs such as [[ARCore]] (Google) and [[ARKit]] (Apple) incorporate optimization techniques that include, or are analogous to, FFR to manage performance and power consumption on smartphones and tablets.<ref name="AppleARKit2019">{{cite conference |title=Metal Performance Shaders for Augmented Reality |book-title=WWDC 2019 |publisher=Apple Developer |year=2019}}</ref>
=== Game engines ===
Major engines provide built-in support or plugins for FFR, often leveraging the VRS capabilities of the underlying graphics APIs. [[Unity (game engine)|Unity]] exposes foveated-rendering settings in its XR stack,<ref name="UnityFoveated">{{cite web |title=Foveated rendering |url=https://docs.unity3d.com/6000.0/Documentation/Manual/xr-foveated-rendering.html |website=Unity Manual |access-date=25 April 2025}}</ref> including the OpenXR plug-in,<ref name="UnityOpenXR">{{cite web |title=Foveated rendering in OpenXR |url=https://docs.unity3d.com/Packages/com.unity.xr.openxr@1.13/manual/features/foveatedrendering.html |website=Unity Documentation |access-date=25 April 2025}}</ref> and [[Unreal Engine]] supports it from version 4.27 onwards.<ref name="UnrealVRS">{{cite web |title=VR Performance Features |url=https://docs.unrealengine.com/4.27/en-US/SharingAndReleasing/XRDevelopment/VR/VRPerformanceAndProfiling/PerformanceFeatures |website=Unreal Engine 4.27 Documentation |access-date=25 April 2025}}</ref>
=== Graphics APIs ===
Modern APIs are crucial enablers: [[DirectX 12 Ultimate]] (the Variable Rate Shading API), [[Vulkan]] (the ''VK_KHR_fragment_shading_rate'' and ''VK_NV_shading_rate_image'' extensions), [[Metal (API)|Metal]] (rasterization rate maps), and [[OpenGL]] (via extensions) all provide the functionality needed for efficient FFR implementations.<ref name="MicrosoftVRS"/><ref name="KhronosVRS"/><ref name="KhronosNVExt">{{cite web |title=VK_NV_shading_rate_image |website=Khronos Group Vulkan Specification |year=2020}}</ref>
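
Under Vulkan, for example, the feature must be requested at device creation. A minimal sketch follows; error handling and prior feature queries are omitted, and the queue setup is assumed to exist elsewhere:

<syntaxhighlight lang="cpp">
#include <vulkan/vulkan.h>

// Requests VK_KHR_fragment_shading_rate with attachment-based (per-region)
// rates, the mode used for FFR-style shading-rate maps.
VkDevice createDeviceWithShadingRate(VkPhysicalDevice physicalDevice,
                                     const VkDeviceQueueCreateInfo& queueInfo) {
    const char* extensions[] = { VK_KHR_FRAGMENT_SHADING_RATE_EXTENSION_NAME };

    VkPhysicalDeviceFragmentShadingRateFeaturesKHR shadingRateFeatures{};
    shadingRateFeatures.sType =
        VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FRAGMENT_SHADING_RATE_FEATURES_KHR;
    shadingRateFeatures.attachmentFragmentShadingRate = VK_TRUE;

    VkDeviceCreateInfo info{};
    info.sType                   = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
    info.pNext                   = &shadingRateFeatures;
    info.queueCreateInfoCount    = 1;
    info.pQueueCreateInfos       = &queueInfo;
    info.enabledExtensionCount   = 1;
    info.ppEnabledExtensionNames = extensions;

    VkDevice device = VK_NULL_HANDLE;
    vkCreateDevice(physicalDevice, &info, nullptr, &device); // check the result
    return device;
}
</syntaxhighlight>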
=== Primary uses ===
FFR is primarily used in real-time [[Rendering (computer graphics)|rendering]] of [[Computer graphics|graphics]] for:
* [[Virtual Reality]] games and applications
* [[Augmented Reality]] overlays and experiences
* [[3D rendering]] applications where performance on constrained hardware (such as mobile devices) is critical
== Future developments ==
Research and development continue to refine foveated rendering techniques:
* '''Hybrid approaches:''' systems that combine fixed and dynamic foveation, using eye tracking when available and falling back to fixed maps when tracking is lost or unreliable, or using fixed foveation as a baseline that eye tracking further enhances.<ref name="Langbehn2020">{{cite conference |last1=Langbehn |first1=E. |last2=Rausch |first2=D. |last3=Steinicke |first3=F. |last4=Bruder |first4=G. |title=Hybrid Foveated Rendering: Combining Progressive Meshes and Fixed Foveated Rendering for VR |book-title=ACM SIGGRAPH 2020 Posters |pages=1–2 |year=2020}}</ref>
* '''[[Machine learning]] optimization:''' machine-learning and [[deep learning]] models are being investigated to:
** optimize foveation patterns based on scene content or saliency;
** predict gaze direction or regions of interest without dedicated eye tracking;
** improve image reconstruction and artifact reduction in the lower-resolution periphery (e.g., DeepFovea).<ref name="Kaplanyan2019">{{cite journal |last1=Kaplanyan |first1=A. S. |last2=Sochenov |first2=A. |last3=Leimkühler |first3=T. |last4=Okunev |first4=M. |last5=Goodall |first5=T. |last6=Rufo |first6=G. |title=DeepFovea: Neural reconstruction for foveated rendering and video compression using learned statistics of natural videos |journal=ACM Transactions on Graphics |volume=38 |issue=6 |pages=212:1–212:13 |year=2019 |doi=10.1145/3355089.3356557}}</ref>
* '''Content-aware foveation:''' future FFR might adapt the foveation pattern based on real-time content analysis, emphasizing important visual elements (such as text or faces) even when they are slightly off-center, without needing full eye tracking.<ref name="Lungaro2018">{{cite journal |last1=Lungaro |first1=P. |last2=Sjöberg |first2=R. |last3=Valero |first3=A. J. F. |last4=Mittal |first4=A. |last5=Tollmar |first5=K. |title=Gaze-aware streaming solutions for the next generation of mobile VR experiences |journal=IEEE Transactions on Visualization and Computer Graphics |volume=24 |issue=4 |pages=1535–1544 |year=2018}}</ref>
== Implementation considerations ==
When implementing FFR, developers should consider several factors to maximize benefits and minimize drawbacks:
* '''[[Field of view]] (FoV) coverage:''' adjust foveation zone sizes and intensity to the target headset's FoV, as wide-FoV headsets may require different tuning than narrow ones (see the sketch after this list).
* '''Zone boundaries:''' use smooth transitions between quality zones (e.g., VRS features that allow gradual rate changes) to reduce the visibility of borders.
* '''Content awareness:''' adapt the technique to scene characteristics; scenes with little peripheral detail tolerate more aggressive FFR.
* '''[[Post-processing (video)|Post-processing]] compatibility:''' ensure FFR integrates well with other rendering techniques, particularly [[temporal anti-aliasing]] (TAA) and other post-effects, which can interact poorly with sharp resolution changes.
* '''User comfort and testing:''' balance performance gains against potential discomfort, eyestrain, or noticeable artifacts; user testing is crucial to find acceptable trade-offs.<ref name="Bailey2009">{{cite journal |last1=Bailey |first1=R. |last2=McNamara |first2=A. |last3=Sudarsanam |first3=N. |last4=Grimm |first4=C. |title=Subtle gaze direction |journal=ACM Transactions on Graphics |volume=28 |issue=4 |pages=100:1–100:14 |year=2009}}</ref>
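
For the FoV consideration above, a foveal zone specified in ''angular'' terms must be converted to the normalized image-space radius used by a rate map. The sketch below assumes a symmetric rectilinear (pinhole) projection, which is an approximation; real headsets also apply lens distortion:

<syntaxhighlight lang="cpp">
#include <cmath>

// Converts a desired foveal angular radius (degrees) into a normalized
// image-space radius for a given horizontal field of view (degrees),
// assuming a symmetric rectilinear projection.
float fovealRadiusNormalized(float fovealAngleDeg, float fovDeg) {
    const float deg2rad = 3.14159265f / 180.0f;
    return std::tan(fovealAngleDeg * deg2rad) /
           std::tan(0.5f * fovDeg * deg2rad);
}

// Example: a 20-degree foveal radius on a 100-degree-FoV headset gives
// tan(20 deg) / tan(50 deg) = 0.36 / 1.19, roughly 0.31, i.e. the full-rate
// zone should span about the central 31% of the image's half-width.
</syntaxhighlight>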
== See also ==
* [[Foveated rendering]]
* [[Dynamic Foveated Rendering]]
* [[Variable Rate Shading]] (VRS)
* [[Eye tracking]]
* [[Human visual system]]
* [[Fovea]]
* [[Peripheral vision]]
* [[Virtual Reality]] (VR)
* [[Augmented Reality]] (AR)
* [[Graphics Processing Unit]] (GPU)
* [[Frame rate]]
* [[Latency (engineering)|Latency]]
* [[Rendering (computer graphics)|Rendering]]
* [[VR headset]]
* [[Virtual reality optimization techniques]]
* [[Computer graphics]]
* [[Computational load]]
* [[Retina]]
* [[Cone cell]]
* [[Viewport]]
* [[Render target]]
* [[Temporal anti-aliasing]]
* [[Post-processing (video)|Post-processing]]
== References ==
<references />