{{short description|Rendering technique that reduces computational workload by decreasing image quality in the periphery}}
{{about|the rendering technique|the imaging technique|Foveated imaging}}
{{use dmy dates|date=October 2025}}
{{Infobox technology
| name        = Foveated rendering
| related      = [[Eye tracking]], [[Variable rate shading]], [[Level of detail]]
}}
 
[[File:foveated rendering1.jpeg|350px|right]]
'''Foveated rendering''' is a [[computer graphics]] performance optimization technique that leverages the known properties of the [[human visual system]] (HVS) to reduce the computational workload on a [[Graphics processing unit|GPU]].<ref name="WikipediaFR">{{cite web |url=https://en.wikipedia.org/wiki/Foveated_rendering |title=Foveated rendering - Wikipedia}}</ref><ref name="UnityGlossary">{{cite web |url=https://unity.com/glossary/foveated-rendering |title=What is Foveated Rendering - Unity}}</ref> The technique is based on the biological fact that human [[visual acuity]] is not uniform across the [[visual field]]; it is highest in the very center of the gaze, a region known as the [[fovea]], and drops off sharply in the [[peripheral vision]].<ref name="UnityDocs">{{cite web |url=https://docs.unity3d.com/6000.2/Documentation/Manual/xr-foveated-rendering.html |title=Foveated rendering - Unity Manual}}</ref><ref name="UnityGlossaryDeep">{{cite web |url=https://unity.com/glossary/foveated-rendering |title=Foveated Rendering - Unity}}</ref>

By rendering the area of the image that falls on the user's fovea at the highest [[resolution]] and progressively reducing the quality of the image in the periphery, foveated rendering can achieve significant performance gains with little to no perceptible loss in visual quality.<ref name="IntegrativeView">{{cite web |url=https://www.researchgate.net/publication/355503409_An_integrative_view_of_foveated_rendering |title=An integrative view of foveated rendering}}</ref><ref name="VarjoWhatIs">{{cite web |url=https://support.varjo.com/hc/en-us/what-is-foveated-rendering |title=What is foveated rendering? - Varjo Support}}</ref> This makes it a critical enabling technology for [[virtual reality]] (VR) and [[augmented reality]] (AR) [[head-mounted display]]s (HMDs), which must render high-resolution, stereoscopic images at very high [[frame rate]]s to provide a comfortable and immersive experience.<ref name="HVS_VR_Context">{{cite web |url=https://pmc.ncbi.nlm.nih.gov/articles/PMC10449001/ |title=Eye tracking in virtual reality: a comprehensive overview of the human visual system, eye movement types, and technical considerations}}</ref>

Implementations of foveated rendering are broadly categorized into two types: '''fixed foveated rendering''' (FFR), which assumes the user is always looking at the center of the screen, and '''dynamic (or eye-tracked) foveated rendering''' (ETFR or DFR), which uses integrated [[eye tracking]] hardware to update the high-quality region in real-time to match the user's gaze.<ref name="MetaETFRvsFFR">{{cite web |url=https://developers.meta.com/horizon/blog/save-gpu-with-eye-tracked-foveated-rendering/ |title=Save GPU with Eye Tracked Foveated Rendering}}</ref>

== Biological Foundation: The Human Visual System ==

# '''Display Update:''' The final, composited multi-resolution image is presented to the user.

This entire loop must be completed within the frame budget (e.g., under 11.1 ms for a 90 Hz display) to ensure a smooth experience.
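
The frame-budget constraint can be illustrated with a minimal sketch of such a gaze-driven loop. The types and function names below (<code>PollGaze</code>, <code>RenderFoveated</code>, <code>CompositeAndPresent</code>) are hypothetical placeholders rather than any specific VR runtime or graphics API; the budget is simply the reciprocal of the display refresh rate.

<syntaxhighlight lang="cpp">
#include <chrono>

// Illustrative placeholders only; not part of any real VR runtime or graphics API.
struct GazePoint { float x = 0.5f, y = 0.5f; };           // normalized screen coordinates
GazePoint PollGaze()           { return {}; }             // eye-tracker sample; (0.5, 0.5) = screen center for FFR
void RenderFoveated(GazePoint) { /* high-resolution fovea + reduced-quality periphery */ }
void CompositeAndPresent()     { /* merge the regions and scan out to the display */ }

void RunFrameLoop(double refresh_hz, int frames_to_run) {
    const double frame_budget_ms = 1000.0 / refresh_hz;   // e.g. 1000 / 90 Hz ≈ 11.1 ms
    for (int i = 0; i < frames_to_run; ++i) {
        const auto start = std::chrono::steady_clock::now();

        GazePoint gaze = PollGaze();    // latest gaze sample (or the fixed center for FFR)
        RenderFoveated(gaze);           // multi-resolution rendering pass(es)
        CompositeAndPresent();          // display update

        const double elapsed_ms = std::chrono::duration<double, std::milli>(
            std::chrono::steady_clock::now() - start).count();
        if (elapsed_ms > frame_budget_ms) {
            // Over budget: a real system would enlarge the low-quality periphery,
            // coarsen the peripheral shading rate, or reproject the previous frame.
        }
    }
}
</syntaxhighlight>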


=== Methods of Quality Reduction ===

The term "reducing quality" encompasses several distinct techniques that can be applied to the peripheral regions to save computational power. These methods can be used individually or in combination:<ref name="IntegrativeView" />

* '''Resolution Scaling / Subsampling:''' This is the most common and intuitive method. The peripheral regions are rendered into a smaller off-screen buffer (e.g., at half or quarter resolution) and then upscaled to fit the final display. This directly reduces the number of pixels that need to be processed and shaded (a sketch of this approach follows the list).<ref name="GazeContingent2D">{{cite web |url=http://stanford.edu/class/ee367/Winter2017/mehra_sankar_ee367_win17_report.pdf |title=Gaze Contingent Foveated Rendering for 2D Displays}}</ref>
* '''Shading Rate Reduction:''' This method focuses on reducing the workload of the [[pixel shader]] (also known as a fragment shader). Instead of executing a complex shading program for every single pixel in the periphery, a single shader result can be applied to a block of multiple pixels. This is the core mechanism behind [[Variable Rate Shading]] (VRS).<ref name="TOYF_Paper" /><ref name="AutoVRSE">{{cite web |url=https://www.autovrse.com/foveated-rendering |title=What is Foveated Rendering? - autovrse}}</ref>
* '''Geometric Simplification:''' The geometric complexity of the scene can be reduced in the periphery. This involves using lower-polygon [[level of detail]] models for objects that are outside the user's direct gaze.
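
As a concrete example of the resolution-scaling approach referenced above, the following C++ sketch shows how a reduced-size off-screen target for the periphery might be dimensioned before being upscaled at composition time. The <code>RenderTarget</code> type and the scale factors are illustrative assumptions, not any particular engine's API.

<syntaxhighlight lang="cpp">
#include <cstdint>

// Illustrative abstraction of an off-screen render target; not a real graphics-API type.
struct RenderTarget { uint32_t width, height; };

// The periphery is rendered into a smaller target and later upscaled to the display.
// scale = 0.5f shades one quarter of the pixels; scale = 0.25f shades one sixteenth.
RenderTarget MakePeripheralTarget(uint32_t display_w, uint32_t display_h, float scale) {
    return RenderTarget{
        static_cast<uint32_t>(display_w * scale),
        static_cast<uint32_t>(display_h * scale)
    };
}

// Example: for a 2000x2000-per-eye display, a half-resolution periphery is a
// 1000x1000 target (1,000,000 shaded pixels instead of 4,000,000) that is then
// stretched back to full display resolution outside the foveal region.
</syntaxhighlight>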

==== Variable Rate Shading (VRS) ====

[[Variable Rate Shading]] (VRS) is a hardware feature available on modern GPUs (e.g., [[NVIDIA]] Turing architecture and newer, [[AMD]] RDNA 2 and newer, [[Intel]] Gen11+) that provides fine-grained control over the pixel shading rate.<ref name="TOYF_Paper" /><ref name="OpenXRToolkit">{{cite web |url=https://mbucchia.github.io/OpenXR-Toolkit/fr.html |title=Foveated Rendering - OpenXR Toolkit}}</ref><ref name="microsoft2019">{{cite web |url=https://devblogs.microsoft.com/directx/variable-rate-shading-a-scalpel-in-a-world-of-sledgehammers/ |title=Variable Rate Shading: a scalpel in a world of sledgehammers |publisher=Microsoft DirectX Blog |year=2019}}</ref> It allows a single pixel shader operation to compute the color for a block of pixels, such as a 2×2 or 4×4 block, instead of just a single pixel.<ref name="VarjoAPI">{{cite web |url=https://developer.varjo.com/docs/native/foveated-rendering-api |title=Foveated Rendering - Varjo for Developers}}</ref><ref name="PicoUnrealOpenXR">{{cite web |url=https://developer.picoxr.com/document/unreal-openxr/fixed-foveated-rendering/ |title=Foveated rendering - PICO Unreal OpenXR Plugin}}</ref> The technique supports shading rates from 1×1 (full quality) to 4×4 (coarse, one shading result per 16 pixels).
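
The following C++ sketch shows, in simplified form, how a per-tile shading-rate map for VRS might be generated on the CPU from a gaze point in normalized screen coordinates. The ring thresholds (0.15 and 0.35) and the tile granularity are illustrative assumptions and do not correspond to any particular vendor's API; a real implementation would pass an equivalent mask or attachment to the graphics API.

<syntaxhighlight lang="cpp">
#include <cmath>
#include <cstdint>
#include <vector>

// Shading rates loosely corresponding to the 1x1 ... 4x4 pixel blocks supported by VRS hardware.
enum class ShadingRate : uint8_t { Rate1x1, Rate2x2, Rate4x4 };

// Build a per-tile shading-rate image: tiles near the gaze point keep full 1x1 shading,
// tiles farther out are shaded once per 2x2 or 4x4 pixel block.
// The thresholds (0.15, 0.35 in normalized screen units) are illustrative only.
std::vector<ShadingRate> BuildFoveationMask(int tiles_x, int tiles_y,
                                            float gaze_x, float gaze_y) {
    std::vector<ShadingRate> mask(static_cast<size_t>(tiles_x) * tiles_y);
    for (int ty = 0; ty < tiles_y; ++ty) {
        for (int tx = 0; tx < tiles_x; ++tx) {
            // Tile center in normalized [0,1] screen coordinates.
            float cx = (tx + 0.5f) / tiles_x;
            float cy = (ty + 0.5f) / tiles_y;
            float dist = std::sqrt((cx - gaze_x) * (cx - gaze_x) +
                                   (cy - gaze_y) * (cy - gaze_y));
            ShadingRate rate = ShadingRate::Rate1x1;            // foveal region: full quality
            if (dist > 0.35f)      rate = ShadingRate::Rate4x4; // far periphery
            else if (dist > 0.15f) rate = ShadingRate::Rate2x2; // mid periphery
            mask[static_cast<size_t>(ty) * tiles_x + tx] = rate;
        }
    }
    return mask;
}
</syntaxhighlight>

In this sketch, fixed foveated rendering would simply use the center of each eye's viewport as the gaze point, whereas an eye-tracked implementation would refresh it every frame from the eye tracker.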


==== Multi-View Rendering & Quad Views ====

=== Fixed Foveated Rendering (FFR) ===

Fixed Foveated Rendering is the most basic implementation of the concept. It operates without any eye-tracking hardware and instead relies on the assumption that a user will predominantly look towards the center of the screen.<ref name="WikipediaFR" /><ref name="JigSpace">{{cite web |url=https://www.jig.com/spatial-computing/foveated-rendering |title=What Is Foveated Rendering? - JigSpace}}</ref> Consequently, FFR systems render a static, high-resolution region in the center of each eye's display, while the quality degrades in fixed concentric rings towards the edges.<ref name="MetaFFRvsETFR">{{cite web |url=https://developers.meta.com/horizon/blog/save-gpu-with-eye-tracked-foveated-rendering/ |title=Save GPU with Eye Tracked Foveated Rendering}}</ref>


'''Advantages:'''

=== Dynamic (Eye-Tracked) Foveated Rendering (ETFR / DFR) ===

Dynamic Foveated Rendering represents the full realization of the concept. It requires a [[head-mounted display]] with integrated eye-tracking cameras to determine the user's precise point of gaze in real-time.<ref name="VarjoWhatIs" /><ref name="WikipediaFR" /> The high-resolution foveal region is then dynamically moved to match this gaze point on a frame-by-frame basis, ensuring that the user is always looking at a fully rendered part of the scene.<ref name="TobiiDFR">{{cite web |url=https://www.tobii.com/resource-center/reports-and-papers/eye-tracking-and-dynamic-foveated-rendering |title=Eye tracking and dynamic foveated rendering - Tobii}}</ref>


'''Advantages:'''
* '''Maximum Performance:''' ETFR allows for much more aggressive foveation (a smaller foveal region and a greater quality reduction in the periphery), resulting in substantially greater performance and power savings.<ref name="VRX_FR_Types" /><ref name="PicoUnrealOpenXR" />
* '''Perceptually Seamless:''' When implemented with low latency, the effect is imperceptible to the user.<ref name="VarjoWhatIs" />


|-
! Performance Savings
| Moderate. Reduces GPU load by rendering fewer pixels in the periphery (e.g., 26-43% savings reported for Meta Quest).<ref name="QuestProPerformance">{{cite web |url=https://www.uploadvr.com/quest-pro-foveated-rendering-performance/ |title=Quest Pro Foveated Rendering GPU Savings Detailed}}</ref>
| Significant. Allows for more aggressive degradation, leading to greater GPU savings (e.g., 33-52% savings reported for Meta Quest Pro).<ref name="VRX_FR_Types" /><ref name="QuestProPerformance" />
|-
! Ideal Use Cases

As the technology matures, research is exploring more advanced forms of foveation that incorporate predictive and cognitive models.

* '''Predictive Foveation:''' Some systems attempt to predict the landing point of a saccade based on its initial trajectory and velocity. This allows the rendering system to begin shifting the foveal region to the target destination before the eye movement is complete (a simple illustration follows this list).<ref name="FoveatedRenderingExplainedReddit" /><ref name="VRX_FR_Types" />
* '''Attention-Aware Foveation:''' This is a cutting-edge research area that aims to model the user's cognitive state of attention. Peripheral visual sensitivity decreases when foveal attention is high, which allows foveation to be applied more aggressively during demanding foveal tasks.<ref name="AttentionAware" /><ref name="AttentionAwareSIGGRAPH" />
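
As a toy illustration of the predictive idea mentioned above, the following C++ sketch extrapolates a saccade's landing point from its current direction and angular velocity under a constant-deceleration assumption. The deceleration constant and the velocity threshold are arbitrary placeholders; practical predictors are fitted to measured saccade dynamics rather than this simple ballistic model.

<syntaxhighlight lang="cpp">
#include <cmath>

// Gaze direction expressed in degrees of visual angle.
struct Gaze2D { float x_deg, y_deg; };

// Toy saccade-landing predictor: assume the eye decelerates uniformly from its
// current angular speed to rest, so the remaining travel is v^2 / (2a) along the
// current direction of motion. The deceleration value is an arbitrary placeholder.
Gaze2D PredictSaccadeLanding(Gaze2D position, Gaze2D velocity_deg_per_s,
                             float deceleration_deg_per_s2 = 16000.0f) {
    float speed = std::sqrt(velocity_deg_per_s.x_deg * velocity_deg_per_s.x_deg +
                            velocity_deg_per_s.y_deg * velocity_deg_per_s.y_deg);
    if (speed < 30.0f) return position;   // below typical saccade speeds: treat as fixation
    float remaining_deg = (speed * speed) / (2.0f * deceleration_deg_per_s2);
    return Gaze2D{ position.x_deg + velocity_deg_per_s.x_deg / speed * remaining_deg,
                   position.y_deg + velocity_deg_per_s.y_deg / speed * remaining_deg };
}
</syntaxhighlight>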


== Performance, Efficacy, and Benchmarks ==
| [[Meta Quest Pro]] || Default res || 100% || 26-43% || 33-52% || ETFR provides 7-9% additional benefit<ref name="uploadvr2022">{{cite web |url=https://www.uploadvr.com/quest-pro-foveated-rendering-performance/ |title=Here's The Exact Performance Benefit Of Foveated Rendering On Quest Pro |publisher=UploadVR |date=October 2022}}</ref>
|-
| Meta Quest Pro || Red Matter 2 || Default density || N/A || +33% pixels || 77% more total pixels in optical center<ref name="MetaETFR_Blog" />
|-
| [[PlayStation VR2]] || Unity Demo || 33.2ms || 14.3ms (2.5×) || 9.2ms (3.6×) || Eye tracking provides dramatic improvement<ref name="unity2023">{{cite web |url=https://www.uploadvr.com/psvr-2-eye-tracking-foveated-rendering-gdc/ |title=PSVR 2 Foveated Rendering Provides 3.6x Faster Performance - Unity |publisher=UploadVR |date=March 2023}}</ref>
|-
| [[Pimax Crystal]] || VRS Method || 100% || N/A || 10-40% || 120Hz Tobii eye tracking<ref name="PimaxDFR" />
|-
| Pimax Crystal || Quad Views || 100% || N/A || 50-100% || More aggressive peripheral reduction<ref name="PimaxDFR" />
|-
| ARM Mali GPU || CircuitVR || 488M cycles || 397M cycles || N/A || 18.6% cycle reduction<ref name="arm2020" />
|-
| NVIDIA GTX 1080 || Shadow Warrior 2 || 60 FPS || N/A || 78 FPS || 30% performance gain<ref name="nvidia2017" />
|-
| PowerGS || 3D Gaussian Splatting || 100% power || N/A || 37% power || 63% power reduction<ref name="vrsplatting2024">{{cite web |url=https://dl.acm.org/doi/10.1145/3728302 |title=VR-Splatting: Foveated Radiance Field Rendering via 3D Gaussian Splatting and Neural Points |publisher=ACM |year=2024}}</ref>
|}


| 2019 (December) || SDK support || [[Meta Quest|Oculus Quest]] || Fixed Foveated Rendering exposed in SDK, marking first large-scale commercial deployment.<ref name="venturebeat2019">{{cite web |url=https://venturebeat.com/2019/12/22/oculus-quest-gets-dynamic-fixed-foveated-rendering/ |title=Oculus Quest gets dynamic fixed foveated rendering |publisher=VentureBeat |date=2019-12-22}}</ref>
|-
| 2020 || Neural reconstruction || [[Facebook Reality Labs]] || [[DeepFovea]] demonstrated AI-based foveated reconstruction with up to 10-14× pixel count reduction.<ref name="deepfovea2019">{{cite web |url=https://dl.acm.org/doi/10.1145/3306307.3328186 |title=DeepFovea: Neural Reconstruction for Foveated Rendering |publisher=ACM SIGGRAPH |year=2019}}</ref>
|-
| 2021 || Chipset integration || Qualcomm XR2 || Built-in support for foveated rendering and eye tracking in standalone VR chipset.<ref name="WikipediaFR" />
|-
| 2022 || Consumer ETFR || [[Meta Quest Pro]] || First mainstream standalone headset with Eye-Tracked Foveated Rendering, achieving 33-52% performance gains.<ref name="uploadvr2022" /><ref name="WikipediaFR" />

=== Hardware Limitations ===

* '''Mobile vs Desktop Performance:''' Mobile GPU architectures see smaller benefits than console and desktop GPUs; for example, the Quest Pro achieves 33-45% savings while the PSVR2 reaches 72%.<ref name="uploadvr2022" /><ref name="unity2023" />
* '''Cost and Complexity:''' Eye-tracking hardware increases headset cost, weight, and power consumption.
* '''Calibration Requirements:''' Individual calibration is typically required for each user to map eye movements accurately.