
Occlusion culling

From VR & AR Wiki


Occlusion culling is a rendering optimization technique that prevents graphics systems from processing and drawing geometry hidden behind other objects in a scene. Unlike view frustum culling, which only excludes objects outside the camera's field of view, occlusion culling identifies and skips objects within the frustum but obscured by foreground geometry.[1][2] This technique represents one of the most critical optimizations for virtual reality and augmented reality applications, where maintaining 90-120 frames per second is essential to prevent motion sickness and preserve immersion.[3]

Overview

Occlusion culling is a form of hidden surface determination (also known as hidden-surface removal) and is closely related to other culling methods like back-face culling and view-frustum culling.[4] In 3D computer graphics, if one object completely hides another from the camera's perspective, the hidden object is "culled" (not rendered) to save processing time. By default, game engines already perform frustum culling and back-face culling, but they might still issue draw calls for objects that are within the view frustum even if those objects are entirely behind other geometry. This causes unnecessary overdraw and wasted GPU work as hidden pixels get drawn over by nearer pixels.[5]

For VR/AR specifically, occlusion culling addresses the unique challenge of stereoscopic rendering, which requires rendering each scene twice—once for each eye—effectively doubling the rendering workload. John Carmack at Oculus established that motion-to-photons latency in VR must stay below 20 milliseconds to remain imperceptible to humans, making every optimization critical.[6] Modern VR-specific innovations like Umbra Software's "Stereo Camera" feature perform a single occlusion query for a spherical volume encompassing both eyes, effectively halving the required processing time compared to traditional per-eye approaches.[7]

The technique operates by performing visibility tests before geometry enters the rendering pipeline, rather than processing all objects and discarding occluded fragments during rasterization. The optimal occlusion culling algorithm would select only visible objects for rendering, but practical implementations balance accuracy, performance overhead, and implementation complexity.[8] Modern approaches range from CPU-based precomputed visibility systems that trade memory for runtime performance, to fully GPU-driven pipelines using compute shaders and mesh shaders that eliminate CPU-GPU synchronization entirely.

Comparison with Other Culling Methods

To understand occlusion culling in context, it helps to compare it with other common culling techniques in 3D rendering:

Culling Technique | Purpose | When Applied | Cost
Back-face culling | Skips polygon faces that face away from the camera, on the assumption that they are hidden by the object itself | During rasterization (hardware) | Negligible (fixed-function GPU)
Frustum culling | Skips objects entirely outside the camera's viewing frustum (field of view), so off-screen objects are not processed | Before rendering (CPU/GPU) | Very low (simple geometric tests)
Occlusion culling | Skips objects within the camera's view that are completely occluded (blocked from sight) by nearer objects | Before rendering (CPU/GPU) | Low to medium (depends on technique)

By reducing the number of invisible objects being processed, occlusion culling can significantly boost rendering performance and lower resource usage, which is especially valuable on battery-powered AR/VR devices that must sustain stringent frame-rate targets (often 90 FPS or higher for VR headsets).[9]

History

Early Hidden Surface Removal (1960s-1970s)

The foundation of occlusion culling emerged from early hidden surface removal research in the late 1960s and 1970s. Edwin Catmull developed the Z-buffer algorithm at the University of Utah, creating the depth-buffer method that became foundational for all graphics hardware.[10] John Warnock introduced his recursive subdivision algorithm in 1969, while Ivan Sutherland, Robert Sproull, and Robert Schumacker published their influential 1974 survey characterizing ten hidden surface algorithms, establishing the theoretical framework that would guide decades of research.[11]

Early algorithms, such as the Painter's algorithm, sorted polygons by depth and rendered them from back to front, but suffered from artifacts with intersecting surfaces. Z-buffering (or depth buffering), described by Wolfgang Straßer in 1974, was popularized in the 1980s through hardware implementations; this pixel-level technique became the de facto standard due to its simplicity and hardware acceleration.

Hierarchical Approaches (1990s)

The modern era of occlusion culling began with Ned Greene, Michael Kass, and Gavin Miller's seminal 1993 SIGGRAPH paper "Hierarchical Z-buffer Visibility."[12] This groundbreaking work introduced the hierarchical visibility algorithm using octrees for scene organization and the Z-pyramid concept—a hierarchical image pyramid of depth values enabling efficient culling of occluded regions. Their technique generated roughly one hundred times fewer depth comparisons than conventional Z-buffering, establishing hierarchical approaches as the path forward.

Parallel developments included Hansong Zhang's 1997 Hierarchical Occlusion Maps (HOM) algorithm, which extended depth-buffer methods for systems without hardware Z-pyramid support by using opacity thresholds for approximate visibility culling.[13]

Concurrently, object-space methods like Binary Space Partitioning (BSP trees) were developed for scene subdivision, aiding visibility computations in games and simulations. The late 1990s saw widespread adoption of portal-based culling and Potentially Visible Sets (PVS) for indoor environments, where scenes could be divided into cells (such as rooms) with pre-computed visibility relationships.[14] These precomputation-based methods were famously used in early 3D games like Quake (1996): the level is divided into volumes (leaf nodes of a BSP tree), and for each such region a list of potentially visible regions is stored.[15]

Hardware Acceleration Era (2000s)

The early 2000s marked the transition from software to hardware acceleration. NVIDIA's GeForce3 GPU in 2001 introduced the NV_occlusion_query extension, providing the first widespread hardware support for occlusion queries.[16] This was formalized into the OpenGL ARB_occlusion_query standard and DirectX 9's IDirect3DQuery9 interface in 2002, making occlusion culling accessible to mainstream game developers. NVIDIA's GPU Gems publications in 2004-2005 provided practical implementation guides, with Chapter 29 by Dean Sekulic of Croteam and Chapter 6 by Michael Wimmer and Jiří Bittner establishing best practices for hardware occlusion queries that remain relevant today.[17][18]

Middleware and VR Specialization (2010s)

The middleware era began in 2007 with the founding of Umbra Software in Helsinki, which commercialized advanced occlusion culling technology used in major titles including Destiny, The Witcher 3, and the Mass Effect series.[19] Unity Technologies integrated Umbra into Unity 3 in 2010, bringing occlusion culling to millions of developers. At GDC 2014, Umbra announced Umbra VR with the innovative Stereo Camera feature specifically designed for virtual reality, demonstrating the industry's recognition that VR demanded specialized optimization approaches.[20] Amazon acquired Umbra in 2021, underscoring the strategic importance of visibility optimization technology.[21]

In the 2010s, as real-time graphics gained traction with VR devices like the Oculus Rift and HTC Vive, the need for efficient culling was amplified due to stereoscopic rendering demands. Similarly, AR platforms like Microsoft HoloLens introduced challenges with real-world integration, spurring hybrid software-hardware solutions.[22]

Modern GPU-Driven Pipelines (2015-Present)

Recent developments since 2015 have focused on GPU-driven rendering pipelines. Jon Hasselgren, Magnus Andersson, and Tomas Akenine-Möller's 2016 "Masked Software Occlusion Culling" paper introduced SIMD-optimized software rasterization achieving high performance on CPUs.[23] The emergence of mesh shaders in NVIDIA's Turing architecture (2018) and AMD's RDNA2 enabled per-meshlet culling at unprecedented granularity, with implementations in Unreal Engine 5's Nanite and Alan Wake II showing 40-48% performance improvements.[24]

Modern engines now employ two-phase hierarchical depth buffer (HiZ) culling: rendering objects visible in the previous frame, building a depth pyramid, then testing newly visible objects—eliminating CPU-GPU synchronization while maintaining efficiency.[25] By the 2020s, mobile VR/AR devices (e.g., Meta Quest series) necessitated custom, lightweight implementations, blending traditional methods with AI-accelerated depth estimation for dynamic scenes.[26]

Principles and Techniques

Occlusion culling operates on the principle that not all scene geometry contributes to the final image, as closer opaque objects can fully obscure distant ones. It is distinct from but complementary to other culling methods. Techniques are broadly classified as image-space (pixel-level, e.g., Z-buffering) or object-space (geometry-level, e.g., PVS), with hybrids common in practice.[27]

Algorithms for occlusion culling can be categorized based on when the visibility calculations are performed: during a pre-processing step or on-the-fly at runtime.[28]

  • Pre-processing (or Baked) Culling: Visibility information is calculated offline, before the application is run. This data is then stored and queried quickly at runtime. This approach is very fast during gameplay but is generally limited to static geometry, as the visibility data becomes invalid if occluders move.
  • Runtime (or Dynamic) Culling: Visibility is determined "on-the-fly" for every frame. This is more computationally intensive at runtime but has the significant advantage of supporting fully dynamic scenes where both occluders and occludees can move freely.

Hardware Occlusion Queries

Hardware occlusion queries leverage the GPU's depth testing capabilities to determine object visibility. The technique issues visibility checks directly to the GPU, rendering bounding volumes with color and depth writes disabled, then retrieving the count of fragments that passed the depth test.[29] If the query returns zero visible pixels, the enclosed object is occluded and can be skipped. This approach became standard after GPU support emerged in 2001 and remains the default in Unreal Engine for dynamic scenes.[30]
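
The basic flow can be illustrated with the OpenGL query API. The following is a minimal sketch, not engine code: it assumes a current GL context, a query object already created with glGenQueries, and a hypothetical drawBoundingBox() helper, and it reads the result immediately, which a real renderer would avoid for the latency reasons discussed below.

```cpp
// Minimal sketch of a hardware occlusion query in OpenGL (C++).
// Assumes a current GL context; drawBoundingBox() is a hypothetical helper.
#include <GL/glew.h>

bool isObjectVisible(GLuint queryId /*, const Object& obj */) {
    // Render only the bounding volume: no color writes, no depth writes,
    // but depth *testing* stays on so occluded fragments are rejected.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);

    glBeginQuery(GL_SAMPLES_PASSED, queryId);
    // drawBoundingBox(obj);   // a few triangles covering the object's AABB
    glEndQuery(GL_SAMPLES_PASSED);

    // Restore normal render state.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);

    // Naive, stalling readback -- real engines read the result a frame or
    // more later (or use GL_ANY_SAMPLES_PASSED) to avoid a CPU/GPU sync.
    GLuint samplesPassed = 0;
    glGetQueryObjectuiv(queryId, GL_QUERY_RESULT, &samplesPassed);
    return samplesPassed > 0;   // zero fragments passed => fully occluded
}
```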

The fundamental challenge is pipeline latency—queries travel from CPU to GPU queue to rasterization and back, typically requiring 1-3 frames for results. Naïve implementations that wait for query results cause CPU stalls that starve the GPU, negating any performance benefit. The solution is the coherent hierarchical culling algorithm developed by Bittner and Wimmer, which exploits temporal coherence by assuming objects visible in the previous frame remain visible, rendering them immediately while asynchronously querying previously occluded objects organized in a spatial hierarchy like an octree or k-d tree.[31]

To avoid testing every object individually (which could itself be expensive if there are thousands of objects), engines commonly organize the scene graph or space partitioning structure in a hierarchy. Occlusion culling can then operate on groups of objects: if an entire group (node) is found to be occluded, all of its children (sub-objects) can be skipped without further checks. This hierarchical approach makes occlusion tests much more scalable to large scenes by quickly discarding large unseen sections.[32]
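
A minimal sketch of this hierarchical traversal is shown below; the Node type and its isOccluded() test are hypothetical stand-ins for an engine's spatial hierarchy and whatever per-node visibility test (hardware query, HiZ lookup, or software test) is in use.

```cpp
#include <vector>

// Hypothetical scene-hierarchy node: a bounding volume, optional geometry,
// and child nodes (e.g. octree or BVH cells).
struct Node {
    std::vector<Node> children;
    bool hasGeometry = false;
    bool occludedFlag = false;   // placeholder; a real test would query depth data
    bool isOccluded() const { return occludedFlag; }
    void draw() const { /* issue draw calls for this node's geometry */ }
};

// Cull and draw a hierarchy: if a node's bounding volume is occluded,
// the entire subtree is skipped without any further tests.
void cullAndDraw(const Node& node) {
    if (node.isOccluded())
        return;                      // whole subtree rejected with one test
    if (node.hasGeometry)
        node.draw();
    for (const Node& child : node.children)
        cullAndDraw(child);          // refine only within visible subtrees
}
```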

These occlusion queries (sometimes called "Hierarchical Z-buffer" or "Z-culling" when done at a coarse level) allow dynamic, on-the-fly culling of arbitrary objects without precomputed data. To mitigate latency, engines often use techniques like temporal reprojection or multi-frame queries (e.g., issuing queries for many objects and using last frame's results to decide what to draw in the current frame, sometimes known as "round-robin" occlusion culling). This reduces stalls by giving the GPU more time to produce query results in parallel. Unreal Engine, for example, can use asynchronous occlusion queries and even has a "Round Robin Occlusion" mode optimized for VR to distribute query workload across frames.[33]

Hierarchical Z-Buffering

Hierarchical Z-buffering (HZB or Hi-Z) builds a mipmap chain of the depth buffer where each texel stores the maximum (farthest) depth of the corresponding texels in the finer level below it. Testing object visibility requires projecting the bounding box to screen space, determining which mipmap level covers that projection, and comparing the object's nearest depth against the HiZ buffer value.[34] If the object is farther than all stored depths, it's completely occluded. Conservative testing at coarser levels provides fast rejection, with recursive refinement at finer levels for uncertain cases.
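
The test can be sketched on the CPU as follows, assuming a hypothetical DepthPyramid layout in which larger values are farther from the camera and each coarser level stores the maximum depth of the 2×2 texels beneath it. Choosing the mip level so the projected rectangle covers only a few texels keeps the per-object cost bounded, mirroring the fixed-sample-count approach described below.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical HiZ pyramid: level 0 is full resolution, each coarser level
// stores the MAX (farthest) depth of the 2x2 texels below it.
struct DepthPyramid {
    std::vector<std::vector<float>> levels; // levels[l][y * width(l) + x]
    int baseWidth = 0, baseHeight = 0;
    int width(int l) const  { return std::max(1, baseWidth  >> l); }
    int height(int l) const { return std::max(1, baseHeight >> l); }
    float fetch(int l, int x, int y) const {
        x = std::clamp(x, 0, width(l) - 1);
        y = std::clamp(y, 0, height(l) - 1);
        return levels[l][y * width(l) + x];
    }
};

// Conservative visibility test for a screen-space bounding rectangle
// (pixel coordinates at level 0) whose nearest depth is 'minDepth'.
bool isPotentiallyVisible(const DepthPyramid& hiz,
                          int x0, int y0, int x1, int y1, float minDepth) {
    // Pick the mip level where the rectangle covers at most about 2x2 texels.
    int longestEdge = std::max(x1 - x0, y1 - y0);
    int level = std::min<int>(std::ceil(std::log2(std::max(longestEdge, 1))),
                              (int)hiz.levels.size() - 1);

    // Farthest stored depth over the covered texels at that level.
    float maxStored = 0.0f;
    for (int y = y0 >> level; y <= (y1 >> level); ++y)
        for (int x = x0 >> level; x <= (x1 >> level); ++x)
            maxStored = std::max(maxStored, hiz.fetch(level, x, y));

    // If the object's nearest point is farther than everything already
    // drawn in this region, it is fully occluded.
    return minDepth <= maxStored;
}
```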

The original 1993 technique required custom hardware, but modern GPU implementations leverage existing capabilities. The Splinter Cell: Conviction approach renders occluders to a 512×256 depth-only target, generates the HiZ pyramid using compute shaders or quad passes, then tests objects in vertex shaders by sampling four corners of their screen-space bounding box at the appropriate mip level.[35] This bounded-cost approach uses a fixed number of samples per object regardless of size, providing same-frame results with no latency.

The main challenge with HZB is that it requires a depth buffer to be generated first. Rendering the entire scene just to create the depth buffer would defeat the purpose of culling. This leads to two common solutions:

  1. Depth Pre-pass: A small set of large, artist-selected occluders are rendered first in a simplified pass that only writes to the depth buffer. The HZB is then built from this conservative depth buffer.
  2. Depth Buffer Reprojection: The depth buffer from the previous frame is used to perform culling on the current frame. This works well when camera movement is smooth but can cause artifacts like object "popping" with rapid motion.

Two-Pass HZB Occlusion

This modern approach combines the benefits of the above solutions without requiring manual authoring of occluders.[36] It works in two main passes:

  1. First Pass: The engine renders only the objects that were determined to be visible in the previous frame. This is a small subset of the total scene and quickly generates a conservative, but mostly accurate, depth buffer for the current frame. An HZB is then built from this depth buffer.
  2. Second Pass: The engine then tests all the other objects in the scene (those that were not visible last frame) against the newly created HZB. Any of these objects found to be visible are then rendered, and the depth buffer is updated.

This method avoids the need for artists to manually select occluders and works well in fully dynamic scenes, with its main limitation being potential artifacts during very rapid camera movements or scene changes.[37]
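
A frame of this two-pass scheme can be sketched as follows; the drawObjects, buildHiZFromDepthBuffer, and testAgainstHiZ calls are hypothetical stand-ins for engine-specific draw and compute passes.

```cpp
#include <vector>

struct Object { bool visibleLastFrame = false; /* mesh, transform, bounds ... */ };

// Hypothetical engine hooks (trivial stubs here).
void drawObjects(const std::vector<Object*>&) { /* submit draw calls */ }
void buildHiZFromDepthBuffer() { /* dispatch depth-pyramid build pass */ }
bool testAgainstHiZ(const Object&) { return true; /* conservative stub */ }

// One frame of two-pass hierarchical-Z occlusion culling.
void renderFrameTwoPhase(std::vector<Object>& scene) {
    // Phase 1: draw everything that was visible last frame. This yields a
    // conservative depth buffer for the current frame almost for free.
    std::vector<Object*> firstPass;
    for (Object& o : scene)
        if (o.visibleLastFrame) firstPass.push_back(&o);
    drawObjects(firstPass);
    buildHiZFromDepthBuffer();

    // Phase 2: test the remaining (previously hidden) objects against the
    // new HiZ; draw the ones that became visible and record visibility bits.
    std::vector<Object*> secondPass;
    for (Object& o : scene) {
        if (o.visibleLastFrame) continue;
        o.visibleLastFrame = testAgainstHiZ(o);       // remembered for next frame
        if (o.visibleLastFrame) secondPass.push_back(&o);
    }
    drawObjects(secondPass);

    // Objects drawn in phase 1 also get re-evaluated so they can eventually
    // be dropped once they become occluded.
    for (Object* o : firstPass)
        o->visibleLastFrame = testAgainstHiZ(*o);
}
```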

Software Occlusion Culling

Software occlusion culling implements a complete depth buffer rasterizer on the CPU, providing immediate visibility results without GPU latency. Frostbite's implementation for Battlefield parallelizes the work across multiple cores: triangle setup jobs clip occluder meshes against the frustum and append triangles to a shared array, rasterization jobs process triangle batches into private low-resolution Z-buffers, and a merge phase combines results using MAX operations per pixel.[38]

Some occlusion culling systems use a software renderer or depth buffer on the CPU to simulate the view and determine visibility. For example, a system might rasterize the scene's biggest occluders to a low-resolution depth buffer, then test each object's bounding box against this buffer to see if it is behind a filled depth pixel. This is essentially performing a custom visibility test in software without involving GPU queries (avoiding GPU pipeline stalls).[39]
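
The per-object test in such a system reduces to comparing a screen-space bounding rectangle against the small CPU depth buffer, as in this sketch (a hypothetical buffer layout, with larger depth meaning farther away and rectangle bounds assumed to be clipped to the buffer).

```cpp
#include <vector>

// Low-resolution CPU depth buffer filled by software-rasterizing big occluders.
struct SoftwareDepthBuffer {
    int width = 0, height = 0;
    std::vector<float> depth;                  // depth[y * width + x], larger = farther
    float at(int x, int y) const { return depth[y * width + x]; }
};

// Returns true if the object's screen-space bounding rectangle might be visible.
// (x0,y0)-(x1,y1) are pixel bounds in the low-res buffer; nearDepth is the
// object's closest depth value.
bool aabbPotentiallyVisible(const SoftwareDepthBuffer& buf,
                            int x0, int y0, int x1, int y1, float nearDepth) {
    for (int y = y0; y <= y1; ++y) {
        for (int x = x0; x <= x1; ++x) {
            // One pixel where the stored occluder depth is farther than the
            // object's nearest point is enough to keep the object.
            if (nearDepth <= buf.at(x, y))
                return true;
        }
    }
    return false;                              // every covered pixel is nearer: culled
}
```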

The primary advantage is zero-frame latency—visibility determination happens entirely before rendering begins, preventing the pop-in artifacts common with asynchronous GPU queries. This makes it particularly valuable for VR where even single-frame delays create noticeable temporal artifacts.[40] The technique also frees GPU resources for rendering rather than visibility testing, crucial on bandwidth-constrained mobile VR platforms.

Masked Software Occlusion Culling

Masked Software Occlusion Culling is a highly optimized algorithm designed for modern multi-core CPUs with SIMD (Single Instruction, Multiple Data) capabilities.[41] Instead of creating a full-resolution depth buffer, it operates directly on a hierarchical depth buffer (similar to an HZB). It uses SIMD instructions to process and test multiple triangles or bounding boxes in parallel, achieving performance that can be competitive with GPU-based approaches but with lower latency and greater predictability.[42]

Software occlusion is particularly valuable on platforms with diverse or limited GPU capabilities, such as mobile devices. It provides a consistent and predictable performance profile, avoiding issues related to GPU driver variations. On mobile, software occlusion often has a lower latency (typically one frame) compared to hardware queries (which can be two or more frames), resulting in fewer visual artifacts.[43]

Portal Culling and Potentially Visible Sets

Portal-based occlusion culling subdivides worlds into convex cells (rooms) connected by portals (doorways, windows), progressively clipping the view frustum as it passes through portal chains.[44] It is well suited to scenes made of distinct rooms or areas connected by narrow openings (doors, windows, tunnels, etc.): the 3D world is partitioned into cells, linked by portals that represent the openings between them.[45]

The rendering engine performs frustum culling on a per-cell basis: it first renders the current cell, then for any portal in view, it calculates a reduced viewing frustum through that portal and determines which adjacent cells are visible, and so on. Any objects in cells that are not reachable via visible portals are not drawn at all. Each portal creates additional frustum planes that constrain visibility into the next cell, with recursion continuing until the frustum becomes too small or reaches maximum depth.[46]
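
The recursion can be sketched as follows with hypothetical Cell, Portal, and Frustum types; a real implementation would clip the frustum against the portal polygon rather than returning it unchanged as the placeholder here does.

```cpp
#include <vector>

struct Portal;   // forward declarations for the hypothetical types
struct Cell;

struct Frustum {
    // Placeholder tests; a real implementation clips against the portal polygon.
    bool intersects(const Portal&) const { return true; }
    Frustum clippedThrough(const Portal&) const { return *this; }
};

struct Portal { Cell* target = nullptr; /* portal polygon, plane, ... */ };

struct Cell {
    std::vector<Portal> portals;
    void drawContents(const Frustum&) const { /* frustum-cull and draw objects */ }
};

// Render the cell containing the camera, then recurse through visible portals
// with a progressively narrowed frustum. Cells never reached are never drawn.
void renderCell(const Cell& cell, const Frustum& frustum, int depth = 0) {
    if (depth > 16) return;                            // guard against portal cycles
    cell.drawContents(frustum);
    for (const Portal& portal : cell.portals) {
        if (!portal.target) continue;
        if (!frustum.intersects(portal)) continue;     // portal not on screen
        renderCell(*portal.target, frustum.clippedThrough(portal), depth + 1);
    }
}
```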

Portal occlusion is very effective for indoor environments where large solid walls naturally occlude entire sections of a level. It is less effective in wide-open scenes (where everything is potentially visible). Many game engines (including Unity and Unreal Engine) support portal culling via "occlusion portal" or "room and portal" tools in their level design systems.

Potentially Visible Sets (PVS) take this concept further by precomputing during offline processing which geometry is potentially visible from each spatial region.[47] For static scenes (where neither the environment nor the camera paths change significantly), developers can precompute which areas or objects are visible from each region of space. This results in a Potentially Visible Set for each region or cell of the world. At runtime, the engine simply looks up which objects to draw for the player's current location, greatly reducing overdraw.[48]

The PVS approach requires the world geometry to be static (unchanged) and can take significant preprocessing time and memory, but it yields very fast lookups at runtime since visibility is determined in advance. At runtime, the system performs an O(1) lookup into the PVS for the current region and then only needs to render the objects in that cell's pre-computed PVS.
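
The runtime side of a PVS system is little more than an array lookup, as in this sketch (the cell indexing and data layout are hypothetical).

```cpp
#include <cstdint>
#include <vector>

// Hypothetical baked PVS data: for each cell, the indices of the objects that
// may be visible from anywhere inside that cell.
struct PrecomputedVisibility {
    std::vector<std::vector<uint32_t>> visibleObjectsPerCell;
};

struct Renderer { void drawObject(uint32_t objectIndex) { /* issue draw call */ } };

// O(1) lookup per frame: no per-object occlusion tests at runtime.
void renderWithPVS(const PrecomputedVisibility& pvs, int cameraCellIndex,
                   Renderer& renderer) {
    for (uint32_t objectIndex : pvs.visibleObjectsPerCell[cameraCellIndex])
        renderer.drawObject(objectIndex);    // per-object frustum culling can still apply
}
```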

Portal Culling vs. Potentially Visible Sets
Feature | Portal Culling | Potentially Visible Sets
Preprocessing | Minimal (portal definition) | Extensive (hours to days)
Runtime Cost | Medium (recursive frustum clipping) | Very low (lookup + frustum test)
Memory Overhead | Low | High (1-50 MB typical)
Dynamic Geometry | Supports dynamic portals | Static geometry only
Best Use Case | Indoor environments | Mobile VR with static content

PVS algorithms can be classified by their accuracy:

  • Conservative: Overestimates visibility, ensuring no visible object is ever culled, but may render some occluded objects. This prevents visual artifacts at the cost of some performance.
  • Aggressive: Underestimates visibility, ensuring no occluded object is rendered, but may accidentally cull a visible object, causing artifacts.
  • Exact: Perfectly calculates visibility with no errors or redundancy, but is computationally very expensive and complex to implement.

Hierarchical Occlusion Maps

Hierarchical Occlusion Maps (HOM), introduced by Hansong Zhang in 1997, separate visibility testing into a 2D overlap test and 1D depth test. The system renders occluders as white-on-black images without shading, builds an opacity pyramid through averaging, and tests objects by checking opacity values at appropriate hierarchy levels.[49] An opacity threshold provides controllable approximate culling—accepting slight inaccuracies for performance.

The depth component uses a coarse grid (64×64 typical) storing the furthest Z-value per region estimated from occluder bounding box vertices. This two-stage approach reduces memory compared to full-resolution depth buffers while enabling automatic occluder fusion through the opacity pyramid. The technique's advantage over pure HZB is flexibility in accuracy—developers tune the threshold to balance culling aggressiveness against potential false positives. Techniques like Hierarchical Occlusion Maps use multi-resolution buffers to quickly cull objects by testing against progressively lower-detail occluder representations.[50]

Modern GPU-Driven Approaches

The cutting edge of occlusion culling as of 2024-2025 centers on fully GPU-driven rendering pipelines that eliminate CPU involvement entirely. Mesh shaders, available since NVIDIA Turing (2018) and AMD RDNA2, enable programmable geometry processing with per-meshlet culling granularity.[51] Meshlets—small clusters of 32-256 triangles—undergo individual frustum, backface cone, and occlusion tests on the GPU before rasterization.

Two-phase HiZ culling represents the industry standard modern technique. Phase one renders objects visible in the previous frame, exploiting temporal coherence. Phase two builds a hierarchical depth buffer using compute shaders, tests newly visible objects against this HiZ, and updates visibility bits for the next frame.[52] This approach provides one-frame latency without CPU-GPU synchronization, maintaining efficiency while handling dynamic scenes. Unity 6's GPU Resident Drawer and Unreal Engine 5's Nanite exemplify this architecture, using current and previous frame depth textures to avoid missing newly visible objects.[53]

In practice, game engines often use a combination of techniques. For example, an engine might use precomputed PVS or portals for static geometry, and rely on hardware occlusion queries or hierarchical tests for dynamic objects or for large occluders that move.

Comparative Analysis of Techniques

The choice of an occlusion culling algorithm involves significant trade-offs between pre-computation time, runtime performance, memory usage, and support for dynamic scenes.

Comparison of Occlusion Culling Algorithms
Algorithm | Primary Method | Dynamism Support | Memory Cost | Primary Load | Ideal Scene Type
Potentially Visible Set (PVS) | Pre-computed, cell-based | Low (static occluders only) | High | CPU (runtime lookup) | Static architectural models, indoor scenes with fixed geometry
Portal Culling | Pre-computed, portal-based | Low (static portals) | Medium | CPU (runtime traversal) | Indoor levels with distinct rooms and corridors, mazes
Hierarchical Z-Buffer (HZB) | Dynamic, image-space | High (fully dynamic) | Low | GPU | Complex, high-detail, dynamic scenes with many objects
Hardware Occlusion Queries | Dynamic, object-space | High (fully dynamic) | Very low | GPU & CPU | Large, dynamic scenes with a mix of large and small occluders
Software Occlusion | Dynamic, CPU-based | High (fully dynamic) | Low | CPU | Mobile VR, platforms with limited GPU or latency concerns

Occlusion Culling in Virtual Reality (VR)

Maintaining a high frame rate is especially critical in virtual reality applications to ensure user comfort and avoid simulator sickness. VR rendering is also more demanding because the scene typically must be rendered twice (once per eye for stereoscopic vision), effectively doubling the rendering workload. Occlusion culling is therefore a vital optimization in VR to reduce the number of drawn objects and pixels per frame, helping meet the stringent performance targets (often 90 FPS or higher for VR headsets).[54] By not rendering hidden objects, the GPU has fewer draw calls and pixel operations to perform, and the CPU spends less time issuing draw calls or performing culling on a per-object basis.

Stereoscopic Challenges

VR presents some unique challenges for occlusion culling. One issue is that with stereoscopic rendering, an object that is occluded in one eye's view might still be visible in the other eye's view (due to the slightly different camera positions). This means occlusion culling in VR needs to consider both eye cameras. Engines often perform occlusion culling separately for each eye's frustum.

Traditional occlusion culling systems issue separate visibility queries for the left and right eye frustums, duplicating the culling overhead. Umbra's Stereo Camera innovation addresses this by performing a single occlusion query for a spherical volume encompassing both eyes, positioned exactly between them.[55] This guarantees correct results for both viewpoints while halving the required processing time compared to per-eye queries.
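
The general idea of culling once from a volume that conservatively covers both eyes can be sketched as below; this is an illustrative construction, not Umbra's actual implementation.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// A sphere that conservatively contains both eye positions. Running a single
// visibility/occlusion query from this volume (instead of one per eye) will
// not cull anything visible to either eye.
struct CullingSphere { Vec3 center; float radius; };

CullingSphere makeStereoCullingVolume(const Vec3& leftEye, const Vec3& rightEye,
                                      float nearClipRadius) {
    Vec3 center = { (leftEye.x + rightEye.x) * 0.5f,
                    (leftEye.y + rightEye.y) * 0.5f,
                    (leftEye.z + rightEye.z) * 0.5f };
    float dx = rightEye.x - leftEye.x;
    float dy = rightEye.y - leftEye.y;
    float dz = rightEye.z - leftEye.z;
    float halfIpd = 0.5f * std::sqrt(dx * dx + dy * dy + dz * dz);
    // Pad by the near-clip radius so geometry right at either eye's near
    // plane is still treated as potentially visible.
    return { center, halfIpd + nearClipRadius };
}
```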

Because of the need to keep frames in sync and avoid latency, some VR engines use conservative approaches (ensuring an object is definitely occluded in both eyes before culling it) or time-shifted approaches (using last frame's data, as mentioned with round-robin queries) to maintain smooth performance.

Latency Requirements

The stringent latency requirements compound these challenges. John Carmack established that VR needs below 20 milliseconds motion-to-photons latency to avoid perceptible lag.[56] With rendering limited to approximately 3 milliseconds on the CPU thread before vertical sync on console VR, visibility culling becomes a critical bottleneck. Hardware occlusion queries with their 1-3 frame delay cause unacceptable pop-in artifacts—objects suddenly appearing or disappearing, which proves dramatically more noticeable in VR than flat-screen gaming and particularly jarring if occurring in only one eye.

Another consideration is the cost of culling itself. Occlusion culling computations (whether running queries on GPU or doing software tests on CPU) take time, and in VR the time budget per frame is very low. If the scene is simple, the overhead of occlusion culling might outweigh its benefits. Therefore, developers must profile VR scenes to ensure that enabling occlusion culling is actually yielding a net gain. In many cases, VR titles with complex environments do see major performance gains from occlusion culling.

Mobile VR Optimization

Mobile VR platforms like Meta Quest compound these challenges with limited processing power, thermal constraints, and battery life considerations. The development of the game Republique VR for Oculus Go and Gear VR provides an insightful case study.[57]

Pre-calculating PVS for Fixed Cameras

The team found that Unity's general-purpose occlusion culling system was not performant enough for their needs on mobile VR. Since the game primarily used fixed camera locations, they developed a custom PVS system. For each camera point, they performed an offline bake by rendering a 360-degree cubemap of the scene. In this special render, every object was assigned a unique solid color. By reading the colors present in the final cubemap, they could build a highly accurate PVS containing only the objects that were truly visible from that specific point.[58]

Handling Transparency

A significant problem with their initial approach was that transparent objects, like glass windows, were rendered with a solid color and were incorrectly occluding objects behind them. To solve this, they implemented an iterative baking process:

  1. Render the scene with all objects, including transparent ones.
  2. Add all visible objects to the PVS.
  3. Identify which of the visible objects were transparent, disable them, and remove them from a master list of transparent objects.
  4. Re-render the scene. Any newly visible objects must have been occluded only by the transparent objects that were just disabled. Add these new objects to the PVS.
  5. Repeat this process until a render pass reveals no new transparent objects.

This ensures that transparent objects act as occludees (can be hidden) but never as occluders.[59]
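
The iterative bake can be expressed as a simple loop; in the sketch below the callbacks are hypothetical hooks for the unique-color cubemap render, the transparency check, and the step that excludes an object from later bake renders.

```cpp
#include <cstdint>
#include <functional>
#include <set>

// Offline bake for one camera point. The callbacks are hypothetical hooks:
//   renderAndReadIds - render the unique-color cubemap, return visible object ids
//   isTransparent    - whether an object is transparent (glass, etc.)
//   disableObject    - exclude the object from subsequent bake renders
std::set<uint32_t> bakePVSForCameraPoint(
    const std::function<std::set<uint32_t>()>& renderAndReadIds,
    const std::function<bool(uint32_t)>& isTransparent,
    const std::function<void(uint32_t)>& disableObject) {
    std::set<uint32_t> pvs;
    for (;;) {
        bool disabledAnyTransparent = false;
        for (uint32_t id : renderAndReadIds()) {
            pvs.insert(id);                       // everything seen joins the PVS
            if (isTransparent(id)) {
                disableObject(id);                // it may no longer hide what is behind it
                disabledAnyTransparent = true;
            }
        }
        if (!disabledAnyTransparent)
            break;   // nothing newly revealed by transparent occluders: done
    }
    return pvs;
}
```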

Extending the System for Moving Cameras

For parts of the game with limited camera movement (e.g., along a fixed rail), the system was extended. Instead of baking a full PVS at many points along the path, which would be memory-intensive, they baked a PVS only at key points.[60] For the space between two key points, they stored only the difference—a small list of objects to enable or disable when transitioning from one PVS to the next. This "difference list" approach dramatically reduced the memory footprint and the computational cost of updating visibility as the camera moved.
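
Computing such a difference list is a plain set operation; the sketch below assumes each PVS is stored as a sorted list of object indices.

```cpp
#include <algorithm>
#include <cstdint>
#include <iterator>
#include <vector>

// Objects to toggle when the camera moves from the cell with PVS 'from'
// to the cell with PVS 'to'. Both inputs must be sorted.
struct PvsDelta {
    std::vector<uint32_t> toDisable;   // visible before, not visible after
    std::vector<uint32_t> toEnable;    // not visible before, visible after
};

PvsDelta computeDelta(const std::vector<uint32_t>& from,
                      const std::vector<uint32_t>& to) {
    PvsDelta delta;
    std::set_difference(from.begin(), from.end(), to.begin(), to.end(),
                        std::back_inserter(delta.toDisable));
    std::set_difference(to.begin(), to.end(), from.begin(), from.end(),
                        std::back_inserter(delta.toEnable));
    return delta;
}
```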

This solution combined portal-style room-based culling with the Dead Secret method: baking potentially visible sets by rendering 385 colorized cubemaps at 6×512×512 resolution, storing hand-authored visibility lists per camera position, and maintaining difference lists between adjacent cells for moving cameras. This achieved a 95% draw call reduction (1,400 to 60), enabling a AAA-quality experience on mobile hardware.[61]

In Unity's profiler, they observed the CPU time spent on culling drop significantly, and the GPU time waiting on drawing also dropped, indicating the occlusion culling was effectively reducing rendering workload.[62]

VR-Specific Engine Features

VR engines and hardware platforms continue to introduce specialized methods to improve occlusion culling. Unreal Engine's Round Robin Occlusion technique specifically targets VR by alternating occlusion queries between the left and right eyes across frames rather than querying both each frame.[63] Combined with Super-Frustum culling, which merges both eye frustums to save approximately 50% of culling time, these VR-specific optimizations acknowledge the unique constraints of stereoscopic rendering.

Some VR engines also allow simpler methods like field of view-based culling for objects very close to the periphery (assuming the VR user won't notice if they pop in a frame later), or rely on multi-resolution shading and foveated rendering techniques in conjunction with occlusion culling to minimize work where the user isn't looking. Overall, occlusion culling remains one of the standard optimizations recommended for VR development, alongside techniques like level-of-detail (LOD) and baking, to help maintain the high throughput required for immersive, jitter-free VR experiences.[64]

Limitations and Trade-offs in VR

The development of custom solutions like the one for Republique highlights the severe trade-offs that VR development can force:

  • Memory Consumption: While the custom PVS system provided excellent runtime performance, the stored visibility data consumed significant memory, a scarce resource on mobile VR devices.[65]
  • Incompatibility with Real-time Shadows: A major consequence of aggressively culling objects is that they cannot cast dynamic, real-time shadows, because the shadow rendering pass needs access to the object's geometry. This forced the team to rely on baked lighting and shadowmaps.[66]
  • Workflow Implications: Because the PVS data was pre-computed, any change to the level's static geometry, no matter how small, required a full, time-consuming rebake of the visibility data, which significantly slowed down the development and iteration process.[67]

Occlusion Culling in Augmented Reality (AR)

In augmented reality, the term "occlusion" often refers not only to performance optimization but also to the visual effect of real-world objects blocking virtual objects from view. For AR applications, achieving realistic occlusion is key to blending digital content into the user's real environment. If a virtual object is meant to be positioned behind a real-world object (like a character standing behind a real table), the AR system should hide (not render) the parts of the virtual object that should be obscured by the real object. Otherwise, the virtual content will unrealistically appear "ghost-like" by showing up through solid objects, breaking the immersion. Occlusion handling in AR therefore has two aspects: visual realism and performance.

Visual Realism Through Depth Sensing

Augmented reality presents an entirely different challenge: achieving realistic occlusion of virtual content behind real-world objects. This requires real-time three-dimensional sensing and reconstruction of the physical environment to generate occlusion geometry or depth maps. For visual realism, modern AR frameworks have introduced depth sensing and occlusion capabilities. For example, Apple's ARKit and Google's ARCore provide depth APIs that let the device understand the geometry of the real world to some extent.

Google's ARCore Depth API, released from beta in June 2020, democratized this capability by using monocular depth estimation—capturing multiple images as the device moves and calculating depth through motion parallax triangulation.[68] The resulting depth map of the environment (which can also draw on structured light, time-of-flight, or stereo vision where such sensors exist) lets an AR application allow real objects to occlude virtual objects.[69] The approach works on standard mobile cameras across 200+ million devices without requiring specialized Time-of-Flight sensors. In practice, AR developers can enable "environment occlusion" so that, when the device knows the distance to real surfaces (walls, furniture, people, etc.), it will not draw virtual content that is behind those surfaces relative to the camera. Google emphasizes that "occlusion – accurately rendering a virtual object behind real-world objects – is paramount to an immersive AR experience".[70]

Apple's ARKit provides a feature called people occlusion, which uses real-time person segmentation and depth data to allow people in the camera view to properly occlude AR objects. For instance, if a virtual creature runs behind a person in the view, ARKit can use the person's depth silhouette to hide the creature when it passes behind them. ARKit's documentation notes that a person will occlude a virtual object only when the person is closer to the camera than that object, ensuring correct depth ordering.[71] These depth-based occlusion techniques greatly increase realism, but they rely on hardware capabilities (like LiDAR scanners or dual cameras) or advanced computer vision, and they can be computationally expensive.
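
Conceptually, depth-based AR occlusion reduces to a per-pixel comparison between the virtual fragment's depth and the sensed real-world depth at the same pixel. The sketch below illustrates the comparison on the CPU with a hypothetical depth-map layout; in practice this runs in a fragment shader against the environment depth texture.

```cpp
// Per-pixel occlusion decision for AR compositing: a virtual fragment is
// hidden whenever the sensed real-world surface at that pixel is closer
// to the camera. Depths are metric distances from the camera in meters.
struct EnvironmentDepthMap {
    int width = 0, height = 0;
    const float* depthMeters = nullptr;          // row-major, width*height samples
    float at(int x, int y) const { return depthMeters[y * width + x]; }
};

bool virtualFragmentVisible(const EnvironmentDepthMap& env,
                            int px, int py, float virtualDepthMeters) {
    float realDepth = env.at(px, py);
    // A small bias hides depth-sensor noise at near-coincident surfaces.
    const float bias = 0.02f;                     // 2 cm, an illustrative value
    return virtualDepthMeters <= realDepth + bias;
}
```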

Microsoft HoloLens 2 employs a hybrid approach combining pre-scanned 3D mesh representations for indoor environments with real-time scanning using its onboard ToF camera.[72] The main limitation is the ToF camera's range of approximately 5 meters, which requires pre-built CityGML LOD1 building models for outdoor augmented reality applications. Implementation uses custom depth buffer shaders that write to the Z-buffer without color output, rendering at queue priority Geometry-1 to establish occlusion before virtual content renders.

Performance Optimization in AR

From a performance standpoint, occlusion culling in AR has a slightly different focus. Many handheld AR applications involve relatively few virtual objects (compared to full VR or gaming scenes), so the classic performance gains of occlusion culling (reducing draw calls for hundreds of off-screen models) might be less dramatic. However, as AR use cases grow (for example, outdoor AR gaming or AR cloud experiences with many virtual elements), culling hidden objects is still important to keep frame rates high on mobile devices.

AR developers are encouraged to use the same optimizations as VR: level-of-detail, batching, and occlusion culling for virtual content that might be off-screen or behind other virtual objects. In fact, Unity's AR Foundation toolkit integrates occlusion culling just as in VR — for example, Unity will not render virtual objects that are outside the camera view or completely behind other virtual objects in the scene. This helps save processing power on phones or AR glasses.[73]

Challenges and Limitations

One unique challenge in AR occlusion culling is that the "occluders" can include real-world geometry which is not fully known to the system. Unlike a virtual environment where the engine has full knowledge of all objects, an AR system might only have a partial or sensed understanding of the real world (via a depth map or meshed representation). If the depth sensing is low resolution or has errors, the occlusion mask might have holes or jitter, causing virtual objects to peek through incorrectly.

The fundamental AR challenge remains depth sensor limitations: short range (typically 4-5 meters), outdoor sunlight interference with IR-based sensors, low resolution compared to RGB cameras causing jagged occlusion edges, and noise creating holes in depth data.[74] Software-based depth estimation addresses some constraints but requires significant processing power, impacting battery life and thermal management on mobile devices.

Developers often have to balance quality and performance: for instance, Meta's recent Depth API for the Meta Quest 3 mixed reality headset offers two modes – "hard occlusion" and "soft occlusion". Hard occlusion uses a coarse depth mask that is cheaper to compute but produces jagged edges in the composite, whereas soft occlusion smooths the mask for more realistic blending at the cost of extra GPU processing.[75] The Quest 3 can use the depth sensor data to occlude virtual objects with real world depth, bringing AR-like occlusion into a VR/MR headset experience. Initial reports indicate that even the softer occlusion had minimal performance impact on Quest 3, but developers are advised to profile their apps and enable or disable these features depending on the target device's capability.[76]

Techniques to gather environmental depth include structured light, time-of-flight cameras, and stereo camera vision, each with limitations in range, lighting, and resolution. When high-quality depth data is available (e.g., LiDAR on high-end devices), AR apps can pre-scan the environment and even use a generated mesh as an occluder for virtual content, achieving very convincing occlusion effects. Developers often combine depth-based occlusion with other tricks (like shader-based depth masks or manual placement of invisible occlusion geometry in known locations) to handle occlusion in specific scenarios.[77]

In summary, occlusion culling in AR serves both to avoid rendering unseen virtual content (improving performance) and to correctly hide virtual objects behind real objects (improving realism). As AR hardware advances, the fidelity of depth sensing and environmental understanding is improving, which makes occlusion more accurate. Nonetheless, it remains a challenging problem: as one AR developer noted, "the hardest challenge for creating an occlusion mask is reconstructing a good enough model of the real world" in real time.

As AR continues to evolve toward mixed reality (MR) with devices like HoloLens and Meta Quest, the line between virtual and real occlusion blurs. A fully spatially aware device can occlusion-cull virtual objects against both virtual and real geometry seamlessly. Ultimately, solving occlusion in AR boosts both performance (by not rendering what isn't visible) and immersion (by making virtual content obey real-world physics of line-of-sight). Both aspects are essential for convincing and comfortable AR experiences.

Performance Impact

Measured Performance Gains

Occlusion culling delivers measurable performance improvements when properly applied, but the magnitude varies dramatically based on scene characteristics and implementation approach. Intel's comprehensive study using their software occlusion culling library on a castle scene with 1,628 occluders and 26,897 occludees demonstrated that combining multithreading, frustum culling, and occlusion culling achieved approximately 8x total performance gain compared to unoptimized rendering.[78] Crucially, frustum culling alone provided roughly 3x improvement, with occlusion adding multiplicative benefits.

Kitware's WebGPU implementation testing the massive 624-million-triangle Bistro scene showed that occlusion culling reduced visible meshes from 1,591 to 585, with frame time improving from 149 ms to 131 ms using occlusion alone.[79] Combining frustum and occlusion culling achieved 5-6x speedup compared to no culling on RTX A6000 hardware. Real-world VR scenarios demonstrate even more dramatic gains: Prographers reported frame rates rising from 30 to 72 FPS (a 2.4x improvement) on Meta Quest 2 for indoor multi-floor environments.[80]

Performance Improvements by Scene Type
Scene Type | Typical Improvement | Best Case | Notes
Indoor/Dense Urban | 3-8x | Up to 10x | High occlusion ratio
Outdoor/Open World | 1.2-1.5x | 2x | Low occlusion ratio
Mobile VR Indoor | 2-5x | 8x | Draw call reduction critical
Desktop VR | 2-4x | 6x | Stereoscopic doubling effect
Simple Geometry (<1000 tri) | 0.9-1.1x | Negative | Overhead exceeds benefit

GPU and CPU Overhead

GPU savings manifest through multiple mechanisms. Draw call reduction proves critical as each eliminated draw call saves CPU-side validation, state changes, and GPU command processing—Republique VR's reduction from 1,400 to 60 draw calls freed enormous overhead.[81] Overdraw reduction directly saves fragment shading work; in indoor levels with high depth complexity, 50-90% of GPU fragment processing can be eliminated. Early-Z rejection operates at the hardware level but only helps when objects render front-to-back, whereas occlusion culling prevents occluded objects from entering the pipeline entirely.

The CPU overhead varies by technique:

  • Baked occlusion systems incur minimal runtime cost of 0.1-0.5 milliseconds per frame for visibility lookups, though require memory for precomputed data (1-50 MB typical) and offline baking time ranging from seconds to hours.
  • Software occlusion culling consumes 1-2 milliseconds but eliminates GPU query overhead and provides zero-frame-latency results.[82]
  • Hardware occlusion queries impose minimal CPU cost when properly pipelined but introduce 1-3 frame latency.
  • GPU-driven compute-based culling represents the current optimal approach with negligible CPU overhead and sub-millisecond GPU cost.[83]

Comparison with Other Culling Techniques

Frustum culling removes objects outside the camera's field of view through simple geometric tests at very low cost—it should always run first and always be enabled. Occlusion culling then operates on the frustum-culled set, eliminating hidden objects within the view. The techniques multiply: approximately 3x from frustum times 3x from occlusion yields 8-9x combined as Intel demonstrated.[84]

Level of Detail (LOD) serves an entirely different purpose: reducing triangle counts for visible distant objects rather than eliminating objects entirely. LOD makes rendering cheaper; occlusion culling renders fewer things. The ideal pipeline runs frustum culling, occlusion culling, LOD selection based on distance or screen size, then backface culling during rasterization (hardware-accelerated, essentially free). Stacking these optimizations provides 10x or greater total gains.[85]

Backface culling operates differently still—a hardware-accelerated fixed-function GPU unit that eliminates triangles facing away from the camera during rasterization using simple dot product tests. This provides approximately 50% fragment shading reduction for closed opaque meshes at zero CPU cost. It runs automatically after all visibility culling and should remain enabled for solid geometry.

Implementation in Game Engines

Major game engines like Unity and Unreal Engine provide robust occlusion culling systems, but they embody fundamentally different design philosophies. Unity's system is primarily an explicit, opt-in, static-first framework, whereas Unreal Engine's is an implicit, always-on, dynamic-first framework.

Unity

Unity Technologies integrates occlusion culling through the Umbra library for CPU-based precomputed visibility. Unity's occlusion culling (since Unity 3) is powered by the Umbra middleware, which builds an internal representation of the scene and computes occlusion data in a preprocess (baking) step.[86]

Setup: Static Occluders and Occludees

The workflow requires marking objects as Occluder Static and Occludee Static, configuring parameters in the Occlusion Culling window, then baking the occlusion data.[87] To use the system, developers must explicitly mark objects in their scene with static flags. An object that should hide other objects is marked as Occluder Static, and an object that can be hidden is marked as Occludee Static. An object, like a large wall, can be both. Dynamic (non-static) GameObjects can be occluded by static occluders, but they cannot themselves act as occluders in this system.[88]

The Baking Process and Data Generation

After tagging objects, the developer must initiate a "bake" process from the Occlusion Culling window. During this process, Unity uses a virtual camera to analyze the scene and build the PVS data. It divides the scene into a hierarchy of cells and computes the visibility from each cell to all others. This data is then stored and used at runtime. This bake must be re-run any time the static geometry of the level changes, which can impact iteration times during development.[89]

Key parameters include Smallest Occluder (minimum object size to act as occluder, typically 5-10 units) and Smallest Hole (minimum gap size to recognize, typically 0.25-1.0 units). The system divides the scene into cells during baking, generates visibility data between adjacent cells, and performs queries against this baked data per camera at runtime.

Occlusion Areas and Occlusion Portals

Unity provides additional components for more fine-grained control:

  • Occlusion Areas are volumes that can be placed in a scene to apply occlusion culling to dynamic objects. This is useful for confining culling calculations to specific regions where they will be most effective.[90]
  • Occlusion Portals are components that can act as dynamic occluders. They can be programmatically opened and closed via script, making them ideal for representing doors, windows, or destructible walls that can change the visibility within a scene at runtime.[91]

Visualizing and Debugging Occlusion Data

The Unity Editor includes a powerful visualization mode that allows developers to see the generated cell structure and view the scene from the camera's perspective, highlighting which objects are being culled. This is essential for debugging common artifacts, such as objects incorrectly popping into view, which may require adjusting the bake settings or level geometry.[92]

GPU Occlusion Culling

Unity 6 introduces GPU Occlusion Culling through the GPU Resident Drawer architecture, available in HDRP and URP. This modern system generates depth textures from both camera and light perspectives, uses depth from current and previous frames, and renders only objects unoccluded in either frame.[93] The dual-frame approach prevents missing newly visible objects while exploiting temporal coherence. Configuration requires enabling GPU Resident Drawer in Project Settings and compute shader support on target hardware.

Unreal Engine

Unreal Engine employs hardware occlusion queries as the default dynamic occlusion method. Unreal Engine's approach to occlusion culling is dynamic by default, designed to work out-of-the-box with minimal setup, while providing more static, pre-computed options for targeted optimization.[94]

Default Culling Methods (View Frustum & HOQ)

By default, every Unreal Engine project uses a combination of View Frustum Culling and dynamic Hardware Occlusion Queries (HOQ). The system issues visibility checks to the GPU each frame per-Actor, using the scene depth buffer for queries.[95] This means that occlusion culling is always active and works for both static and dynamic objects without requiring any manual baking process. The engine automatically issues queries to the GPU for objects within the view frustum to determine if they are occluded by other objects closer to the camera.

This enables longer view distances compared to max draw distance settings and works for movable and non-movable Actors supporting opaque and masked blend modes. The inherent one-frame latency can cause "popping" with rapid camera movement—objects suddenly appearing as visibility predictions lag actual view changes.

Dynamic Occlusion System

Unreal's dynamic system relies on the scene's depth buffer to resolve visibility. It offers several methods:

  • Hardware Occlusion Queries: The default method, which queries the GPU for pixel visibility.
  • Hierarchical Z-Buffer (HZB) Occlusion: An alternative that can be enabled with the console command `r.HZBOcclusion 1`. HZB is often faster but more conservative, meaning it may cull slightly fewer objects than standard HOQs.[96]

Precomputed Visibility Volumes

For platforms where dynamic queries are too performance-intensive, such as mobile or the Nintendo Switch, developers can opt-in to a baked solution by placing Precomputed Visibility Volumes in the level.[97] Within the bounds of these volumes, visibility is pre-calculated during the lighting build. At runtime, the engine uses this static data instead of issuing dynamic queries, trading runtime GPU cost for increased memory usage and pre-computation time.[98]

Cull Distance Volumes

These volumes provide a simple form of contribution culling. Designers can place them in a level and define rules based on object size, such that objects smaller than a certain threshold are culled when the camera is beyond a specified distance.[99] This is useful for optimizing large, detailed environments by removing insignificant clutter at a distance.

Software Occlusion Queries for Mobile

Unreal Engine provides a dedicated software occlusion system for mobile platforms: Software Occlusion Queries rasterize the scene on the CPU using designated Actor LODs as occluders.[100] Developers can enable this feature and then, for specific Static Meshes, designate a low-poly LOD to serve as a CPU-side occluder mesh. This approach delivers single-frame latency versus two frames for hardware queries and works conservatively on any mobile device. Setup involves selecting which LOD of each Static Mesh serves as the occluder mesh through the Static Mesh Editor, with medium-to-large meshes recommended.

Godot Engine

Godot Engine implements occlusion culling in Godot 4.x using Embree, a software ray tracing library from Intel, to perform CPU-based occlusion tests. The system bakes simplified representations of static geometry using OccluderInstance3D nodes and tests AABBs against these occluder shapes at runtime. Setup involves enabling Occlusion Culling in the Project Settings Rendering section, creating occluders through automatic baking or manual authoring, and baking the occlusion data. The technique proves most effective for indoor scenes with many small rooms and particularly benefits the Mobile renderer, which lacks a depth prepass.

Non-Rendering Applications

While occlusion culling is primarily a rendering optimization, the visibility information it generates is a fundamental piece of world-state data that can be leveraged by other game systems to create a holistic, cross-system optimization strategy.

Influencing AI Behavior

Visibility is a cornerstone of Artificial Intelligence (AI) in games. The information from the culling system can be used to determine if an AI agent is visible to the player. Based on this, the AI can make tactical decisions, such as taking cover when visible or performing less computationally expensive behaviors when hidden.[101]

Optimizing Physics and Animation

If an object is determined to be fully occluded for a period of time, its complex physics simulations can be simplified or put into a "sleep" state, saving a great deal of CPU processing power. Similarly, detailed animations, such as cloth simulation or complex character rigs, can be disabled or switched to a much simpler state when the object is not in view.[102]

Reducing Network Traffic

In multiplayer games, visibility data can be a powerful tool for optimizing network bandwidth. There is often no need to send frequent, high-precision position and state updates for a player or object that is completely occluded from another player's view. By reducing the update frequency for occluded entities, the overall network traffic can be significantly lowered, leading to a smoother online experience.[103]
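
One simple way to apply visibility data here is to scale an entity's replication rate by its occlusion state, as in the sketch below; the tick intervals are illustrative values, not recommendations.

```cpp
// Choose how often (in network ticks) to replicate an entity to a given
// client, based on its visibility from that client's viewpoint.
enum class Visibility { Visible, Occluded, OutsideFrustum };

int replicationIntervalTicks(Visibility vis) {
    switch (vis) {
        case Visibility::Visible:        return 1;   // full rate: every tick
        case Visibility::Occluded:       return 5;   // reduced rate while hidden
        case Visibility::OutsideFrustum: return 10;  // lowest rate when far off-screen
    }
    return 1;
}

bool shouldReplicateThisTick(Visibility vis, int tickIndex) {
    return tickIndex % replicationIntervalTicks(vis) == 0;
}
```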

Best Practices

When to Use Occlusion Culling

The conditions where occlusion culling proves most beneficial include:

  • Indoor environments with clear room structures
  • Dense urban areas with buildings occluding each other
  • Complex geometry exceeding 1,000 triangles per object
  • High depth complexity with many overlapping layers
  • GPU-bound applications suffering from overdraw[104]

Conversely, situations where occlusion culling should be avoided:

  • Open worlds with low occlusion see minimal gains
  • Simple geometry under 200-300 triangles incurs more overhead than savings
  • Procedurally generated or highly dynamic scenes cannot leverage precomputed approaches
  • CPU-bound scenarios with fast GPUs may see performance decrease from adding culling overhead[105]

Implementation Guidelines

Profile first to confirm GPU-bound scenarios before implementing occlusion culling. Always combine with frustum culling for multiplicative gains. Use hierarchical testing by grouping objects and testing large nodes first to reduce query counts. Exploit temporal coherence by reusing previous frame results—objects visible last frame likely remain visible.[106]

Pipeline the work by computing visibility for the next frame while the current frame renders, eliminating stalls. Conservative testing prevents missing visible objects: false negatives cause obvious artifacts, while false positives merely reduce efficiency.

For Unity, mark static objects appropriately and tune cell sizes to balance accuracy versus memory. For Unreal, monitor the `stat initviews` statistics and combine hardware queries with distance culling and precomputed visibility as appropriate.[107]

For VR specifically, prioritize zero-latency techniques like software occlusion or precomputed visibility over hardware queries to avoid pop-in artifacts that prove particularly jarring in stereoscopic immersive experiences.[108]

Future Directions

The landscape of occlusion culling continues evolving rapidly as of 2024-2025, with several clear trends reshaping implementation.

GPU-Driven Rendering

The industry migration from CPU to GPU-driven culling represents the most significant shift, with compute shaders and mesh shaders moving visibility determination entirely to the GPU.[109] This eliminates CPU-GPU synchronization overhead, leverages massively parallel GPU architectures, and frees limited CPU resources for gameplay logic and physics.

Mesh Shader Integration

Mesh shaders enable unprecedented culling granularity. Rather than culling entire meshes or objects, per-meshlet culling at a granularity of 32-256 triangles provides fine-grained efficiency.[110] The performance benefits are substantial, with 40-48% improvements documented across multiple shipping titles, but hardware requirements remain a constraint. As of 2025, mesh shaders require NVIDIA Turing/RTX 2000-series or later (2018), AMD RDNA2 or later (2020), or Intel Arc, limiting deployment to relatively recent hardware.
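As a simplified illustration of per-meshlet work, the sketch below performs the normal-cone back-face test often paired with meshlet bounds culling; it treats the meshlet as a point at its center and uses an invented data layout, so it is an approximation of the idea rather than a production test:

```python
import math

def meshlet_may_be_visible(meshlet, camera_pos):
    """Return False only when every triangle in the meshlet is guaranteed to
    face away from the camera (point-at-center approximation)."""
    cx, cy, cz = meshlet["center"]
    ax, ay, az = meshlet["cone_axis"]     # unit average normal of the meshlet's triangles
    cutoff = meshlet["cone_cutoff"]       # sin of the normal cone's half-angle

    # Unit direction from the meshlet toward the camera.
    dx, dy, dz = camera_pos[0] - cx, camera_pos[1] - cy, camera_pos[2] - cz
    length = math.sqrt(dx * dx + dy * dy + dz * dz) or 1.0
    dx, dy, dz = dx / length, dy / length, dz / length

    # All triangle normals lie within the cone; if even the normal closest to
    # the view direction points away from the camera, the whole meshlet is
    # back-facing and its 32-256 triangles can be skipped before rasterization.
    return ax * dx + ay * dy + az * dz > -cutoff
```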

AR Depth Sensing Democratization

For AR specifically, the democratization of depth sensing has been the major development. Google's ARCore Depth API, which left beta in June 2020, brought real-world occlusion to more than 200 million devices using monocular depth estimation: the device captures multiple images as it moves and computes depth from motion parallax, without requiring Time-of-Flight (ToF) sensors.[111] This software-based approach works with standard mobile cameras, though the API fuses ToF data when it is available on premium devices.
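Conceptually, depth-based AR occlusion comes down to a per-pixel comparison between the virtual scene's depth and the estimated real-world depth. The sketch below shows that comparison using nested lists in place of images and depth maps; it is a conceptual illustration, not ARCore or ARKit API usage:

```python
def composite_with_occlusion(camera_feed, virtual_color, virtual_depth, real_depth):
    """Show a virtual pixel only where it is nearer than the real surface.

    All inputs are equally sized 2D lists; depths are in meters, and a very
    large virtual_depth value marks pixels with no virtual content.
    """
    output = [row[:] for row in camera_feed]      # start from the camera image
    for y, row in enumerate(real_depth):
        for x, real_d in enumerate(row):
            if virtual_depth[y][x] < real_d:
                # Virtual content is in front of the real surface: draw it.
                output[y][x] = virtual_color[y][x]
    return output
```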

Neural Networks and Machine Learning

The longer-term trajectory involves neural networks and machine learning for improved depth estimation and visibility prediction. Research demonstrates deep learning models predicting depth from single images with increasing accuracy, potentially replacing or augmenting physical depth sensors. However, real-time performance for complex dynamic scenes remains a research challenge as of 2025: practical deployment requires models that run in milliseconds on mobile GPUs, a constraint current approaches struggle to meet.

See Also

References

  1. Unity Technologies. "Occlusion Culling - Unity Manual". https://docs.unity3d.com/Manual/OcclusionCulling.html
  2. Umbra 3D. "Introduction to Occlusion Culling". Medium. https://medium.com/@Umbra3D/introduction-to-occlusion-culling-3d6cfb195c79
  3. Road to VR. "Umbra Positioning Occlusion Culling Tech for 120 FPS VR Gaming". https://www.roadtovr.com/umbra-software-occlusion-culling-120-fpt-virtual-reality-gaming/
  4. BytePlus. "Real-time rendering in AR for VR: Techniques & insights". https://www.byteplus.com/en/topic/240382
  5. Unity Technologies. "Optimizing your VR/AR Experiences – Unity Learn Tutorial". https://learn.unity.com/tutorial/optimizing-your-vr-ar-experiences
  6. Meta for Developers. "Occlusion Culling for Mobile VR - Part 1: Developing a Custom Solution". https://developers.meta.com/horizon/blog/occlusion-culling-for-mobile-vr-developing-a-custom-solution/
  7. Road to VR. "Umbra Positioning Occlusion Culling Tech for 120 FPS VR Gaming". https://www.roadtovr.com/umbra-software-occlusion-culling-120-fpt-virtual-reality-gaming/
  8. Game Developer. "Occlusion Culling Algorithms". https://www.gamedeveloper.com/programming/occlusion-culling-algorithms
  9. Unity Technologies. "Optimizing your VR/AR Experiences – Unity Learn Tutorial". https://learn.unity.com/tutorial/optimizing-your-vr-ar-experiences
  10. IT Carlow. "Visibility Processing". https://glasnost.itcarlow.ie/~powerk/GeneralGraphicsNotes/Visibility/VisibilityProcessing.htm.old
  11. IT Carlow. "Visibility Processing". https://glasnost.itcarlow.ie/~powerk/GeneralGraphicsNotes/Visibility/VisibilityProcessing.htm.old
  12. NVIDIA. "Chapter 29. Efficient Occlusion Culling". GPU Gems. https://developer.nvidia.com/gpugems/gpugems/part-v-performance-and-practicalities/chapter-29-efficient-occlusion-culling
  13. NVIDIA. "Chapter 29. Efficient Occlusion Culling". GPU Gems. https://developer.nvidia.com/gpugems/gpugems/part-v-performance-and-practicalities/chapter-29-efficient-occlusion-culling
  14. Wikipedia. "Potentially visible set". https://en.wikipedia.org/wiki/Potentially_visible_set
  15. VisCircle 3D. "What you should know about Visible Surface Determination in Quake". https://viscircle.com/what-you-should-know-about-visible-surface-determination-in-quake/
  16. NVIDIA. "Chapter 6. Hardware Occlusion Queries Made Useful". GPU Gems 2. https://developer.nvidia.com/gpugems/gpugems2/part-i-geometric-complexity/chapter-6-hardware-occlusion-queries-made-useful
  17. NVIDIA. "Chapter 29. Efficient Occlusion Culling". GPU Gems. https://developer.nvidia.com/gpugems/gpugems/part-v-performance-and-practicalities/chapter-29-efficient-occlusion-culling
  18. NVIDIA. "Chapter 6. Hardware Occlusion Queries Made Useful". GPU Gems 2. https://developer.nvidia.com/gpugems/gpugems2/part-i-geometric-complexity/chapter-6-hardware-occlusion-queries-made-useful
  19. Wikipedia. "Umbra (3D technology company)". https://en.wikipedia.org/wiki/Umbra_(3D_technology_company)
  20. Road to VR. "Umbra Positioning Occlusion Culling Tech for 120 FPS VR Gaming". https://www.roadtovr.com/umbra-software-occlusion-culling-120-fpt-virtual-reality-gaming/
  21. Wikipedia. "Umbra (3D technology company)". https://en.wikipedia.org/wiki/Umbra_(3D_technology_company)
  22. Game Developer. "Occlusion Culling Algorithms". https://www.gamedeveloper.com/programming/occlusion-culling-algorithms
  23. Intel. "Software Occlusion Culling". https://www.intel.com/content/www/us/en/developer/articles/technical/software-occlusion-culling.html
  24. NVIDIA. "Introduction to Turing Mesh Shaders". https://developer.nvidia.com/blog/introduction-turing-mesh-shaders/
  25. Vulkan Guide. "Compute based Culling". https://vkguide.dev/docs/gpudriven/compute_culling/
  26. Meta for Developers. "Occlusion Culling for Mobile VR - Part 1: Developing a Custom Solution". https://developers.meta.com/horizon/blog/occlusion-culling-for-mobile-vr-developing-a-custom-solution/
  27. Wikipedia. "Occlusion culling". https://en.wikipedia.org/wiki/Occlusion_culling
  28. Eurographics. "Occlusion Culling Methods". https://diglib.eg.org/bitstream/handle/10.2312/egst20011049/oc-star.pdf
  29. NVIDIA. "Chapter 6. Hardware Occlusion Queries Made Useful". GPU Gems 2. https://developer.nvidia.com/gpugems/gpugems2/part-i-geometric-complexity/chapter-6-hardware-occlusion-queries-made-useful
  30. Epic Games. "Visibility and Occlusion Culling in Unreal Engine". https://dev.epicgames.com/documentation/en-us/unreal-engine/visibility-and-occlusion-culling-in-unreal-engine
  31. NVIDIA. "Chapter 6. Hardware Occlusion Queries Made Useful". GPU Gems 2. https://developer.nvidia.com/gpugems/gpugems2/part-i-geometric-complexity/chapter-6-hardware-occlusion-queries-made-useful
  32. NVIDIA. "Chapter 29. Efficient Occlusion Culling". GPU Gems. https://developer.nvidia.com/gpugems/gpugems/part-v-performance-and-practicalities/chapter-29-efficient-occlusion-culling
  33. Epic Games. "Visibility and Occlusion Culling in Unreal Engine". https://dev.epicgames.com/documentation/en-us/unreal-engine/visibility-and-occlusion-culling-in-unreal-engine
  34. Self Shadow. "Practical, Dynamic Visibility for Games". https://blog.selfshadow.com/publications/practical-visibility/
  35. Self Shadow. "Practical, Dynamic Visibility for Games". https://blog.selfshadow.com/publications/practical-visibility/
  36. Medium. "Two-Pass Occlusion Culling". https://medium.com/@mil_kru/two-pass-occlusion-culling-4100edcad501
  37. Medium. "Two-Pass Occlusion Culling". https://medium.com/@mil_kru/two-pass-occlusion-culling-4100edcad501
  38. Intel. "Software Occlusion Culling". https://www.intel.com/content/www/us/en/developer/articles/technical/software-occlusion-culling.html
  39. Unity Technologies. "Optimizing your VR/AR Experiences – Unity Learn Tutorial". https://learn.unity.com/tutorial/optimizing-your-vr-ar-experiences
  40. Meta for Developers. "Occlusion Culling for Mobile VR - Part 1: Developing a Custom Solution". https://developers.meta.com/horizon/blog/occlusion-culling-for-mobile-vr-developing-a-custom-solution/
  41. Intel. "Masked Software Occlusion Culling". https://www.intel.com/content/dam/develop/external/us/en/documents/masked-software-occlusion-culling-779241.pdf
  42. Intel. "Masked Software Occlusion Culling". https://www.intel.com/content/dam/develop/external/us/en/documents/masked-software-occlusion-culling-779241.pdf
  43. Epic Games. "Software Occlusion Queries for Mobile". https://dev.epicgames.com/documentation/en-us/unreal-engine/software-occlusion-queries-for-mobile?application_version=4.27
  44. Wikipedia. "Potentially visible set". https://en.wikipedia.org/wiki/Potentially_visible_set
  45. Panda3D Manual. "Portal Culling". https://docs.panda3d.org/1.10/python/programming/render-attributes/occlusion-culling/portal-culling
  46. Panda3D Manual. "Portal Culling". https://docs.panda3d.org/1.10/python/programming/render-attributes/occlusion-culling/portal-culling
  47. Wikipedia. "Potentially visible set". https://en.wikipedia.org/wiki/Potentially_visible_set
  48. VisCircle 3D. "What you should know about Visible Surface Determination in Quake". https://viscircle.com/what-you-should-know-about-visible-surface-determination-in-quake/
  49. NVIDIA. "Chapter 29. Efficient Occlusion Culling". GPU Gems. https://developer.nvidia.com/gpugems/gpugems/part-v-performance-and-practicalities/chapter-29-efficient-occlusion-culling
  50. NVIDIA. "Chapter 29. Efficient Occlusion Culling". GPU Gems. https://developer.nvidia.com/gpugems/gpugems/part-v-performance-and-practicalities/chapter-29-efficient-occlusion-culling
  51. NVIDIA. "Introduction to Turing Mesh Shaders". https://developer.nvidia.com/blog/introduction-turing-mesh-shaders/
  52. Vulkan Guide. "Compute based Culling". https://vkguide.dev/docs/gpudriven/compute_culling/
  53. Unity. "Use GPU occlusion culling". https://docs.unity3d.com/Packages/[email protected]/manual/gpu-culling.html
  54. Unity Technologies. "Optimizing your VR/AR Experiences – Unity Learn Tutorial". https://learn.unity.com/tutorial/optimizing-your-vr-ar-experiences
  55. Road to VR. "Umbra Positioning Occlusion Culling Tech for 120 FPS VR Gaming". https://www.roadtovr.com/umbra-software-occlusion-culling-120-fpt-virtual-reality-gaming/
  56. Meta for Developers. "Occlusion Culling for Mobile VR - Part 1: Developing a Custom Solution". https://developers.meta.com/horizon/blog/occlusion-culling-for-mobile-vr-developing-a-custom-solution/
  57. Meta for Developers. "Occlusion Culling for Mobile VR - Part 1: Developing a Custom Solution". https://developers.meta.com/horizon/blog/occlusion-culling-for-mobile-vr-developing-a-custom-solution/
  58. Meta for Developers. "Occlusion Culling for Mobile VR - Part 1: Developing a Custom Solution". https://developers.meta.com/horizon/blog/occlusion-culling-for-mobile-vr-developing-a-custom-solution/
  59. Meta for Developers. "Occlusion Culling for Mobile VR - Part 1: Developing a Custom Solution". https://developers.meta.com/horizon/blog/occlusion-culling-for-mobile-vr-developing-a-custom-solution/
  60. Meta for Developers. "Occlusion Culling for Mobile VR - Part 2: Moving Cameras and Other Insights". https://developers.meta.com/horizon/blog/occlusion-culling-for-mobile-vr-part-2-moving-cameras-and-other-insights/
  61. Meta for Developers. "Occlusion Culling for Mobile VR - Part 1: Developing a Custom Solution". https://developers.meta.com/horizon/blog/occlusion-culling-for-mobile-vr-developing-a-custom-solution/
  62. Meta for Developers. "Occlusion Culling for Mobile VR - Part 2: Moving Cameras and Other Insights". https://developers.meta.com/horizon/blog/occlusion-culling-for-mobile-vr-part-2-moving-cameras-and-other-insights/
  63. Epic Games. "Visibility and Occlusion Culling in Unreal Engine". https://dev.epicgames.com/documentation/en-us/unreal-engine/visibility-and-occlusion-culling-in-unreal-engine
  64. Unity Technologies. "Optimizing your VR/AR Experiences – Unity Learn Tutorial". https://learn.unity.com/tutorial/optimizing-your-vr-ar-experiences
  65. Meta for Developers. "Occlusion Culling for Mobile VR - Part 2: Moving Cameras and Other Insights". https://developers.meta.com/horizon/blog/occlusion-culling-for-mobile-vr-part-2-moving-cameras-and-other-insights/
  66. Meta for Developers. "Occlusion Culling for Mobile VR - Part 2: Moving Cameras and Other Insights". https://developers.meta.com/horizon/blog/occlusion-culling-for-mobile-vr-part-2-moving-cameras-and-other-insights/
  67. Meta for Developers. "Occlusion Culling for Mobile VR - Part 2: Moving Cameras and Other Insights". https://developers.meta.com/horizon/blog/occlusion-culling-for-mobile-vr-part-2-moving-cameras-and-other-insights/
  68. MDPI. "Occlusion Handling for Mobile AR Applications in Indoor and Outdoor Scenarios". https://www.mdpi.com/1424-8220/23/9/4245
  69. Google Developers. "Depth adds realism – ARCore Depth API documentation". https://developers.google.com/ar/develop/depth
  70. Google Developers. "Depth adds realism – ARCore Depth API documentation". https://developers.google.com/ar/develop/depth
  71. Apple Developer Documentation. "Occluding virtual content with people – ARKit". https://developer.apple.com/documentation/arkit/occluding-virtual-content-with-people
  72. Medium. "Occlusion Culling in Augmented Reality". https://medium.com/@ishtian_rev/occlusion-culling-in-augmented-reality-c1ee433598
  73. Unity Technologies. "Optimizing your VR/AR Experiences – Unity Learn Tutorial". https://learn.unity.com/tutorial/optimizing-your-vr-ar-experiences
  74. Medium. "Occlusion Culling in Augmented Reality". https://medium.com/@ishtian_rev/occlusion-culling-in-augmented-reality-c1ee433598
  75. Learn XR Blog. "Quest 3 Mixed Reality with Meta Depth API – New Occlusion Features!" https://blog.learnxr.io/xr-development/quest-3-mixed-reality-with-meta-depth-api-new-occlusion-features
  76. Learn XR Blog. "Quest 3 Mixed Reality with Meta Depth API – New Occlusion Features!" https://blog.learnxr.io/xr-development/quest-3-mixed-reality-with-meta-depth-api-new-occlusion-features
  77. Medium. "Occlusion Culling in Augmented Reality". https://medium.com/@ishtian_rev/occlusion-culling-in-augmented-reality-c1ee433598
  78. Intel. "Software Occlusion Culling". https://www.intel.com/content/www/us/en/developer/articles/technical/software-occlusion-culling.html
  79. Kitware. "WebGPU Occlusion Culling in VTK". https://www.kitware.com/webgpu-occlusion-culling-in-vtk/
  80. Prographers. "Precomputed Culling for Unity 3D". https://prographers.com/precomputed-culling
  81. Meta for Developers. "Occlusion Culling for Mobile VR - Part 1: Developing a Custom Solution". https://developers.meta.com/horizon/blog/occlusion-culling-for-mobile-vr-developing-a-custom-solution/
  82. Intel. "Software Occlusion Culling". https://www.intel.com/content/www/us/en/developer/articles/technical/software-occlusion-culling.html
  83. Vulkan Guide. "Compute based Culling". https://vkguide.dev/docs/gpudriven/compute_culling/
  84. Intel. "Software Occlusion Culling". https://www.intel.com/content/www/us/en/developer/articles/technical/software-occlusion-culling.html
  85. Unity Technologies. "Occlusion Culling - Unity Manual". https://docs.unity3d.com/Manual/OcclusionCulling.html
  86. Wikipedia. "Umbra (3D technology company)". https://en.wikipedia.org/wiki/Umbra_(3D_technology_company)
  87. Unity Technologies. "Occlusion Culling - Unity Manual". https://docs.unity3d.com/Manual/OcclusionCulling.html
  88. Unity Learn. "Working with Occlusion Culling". https://learn.unity.com/tutorial/working-with-occlusion-culling
  89. Unity Learn. "Working with Occlusion Culling". https://learn.unity.com/tutorial/working-with-occlusion-culling
  90. Unity Technologies. "Occlusion Culling - Unity Manual (5.5)". https://docs.unity3d.com/550/Documentation/Manual/OcclusionCulling.html
  91. YouTube. "Occlusion Portals in Unity". https://www.youtube.com/watch?v=DRsGt4OFQvU
  92. Unity Technologies. "Occlusion Culling - Unity Manual (5.5)". https://docs.unity3d.com/550/Documentation/Manual/OcclusionCulling.html
  93. Unity. "Use GPU occlusion culling". https://docs.unity3d.com/Packages/[email protected]/manual/gpu-culling.html
  94. Epic Games. "Visibility and Occlusion Culling in Unreal Engine". https://dev.epicgames.com/documentation/en-us/unreal-engine/visibility-and-occlusion-culling-in-unreal-engine
  95. Epic Games. "Visibility and Occlusion Culling in Unreal Engine". https://dev.epicgames.com/documentation/en-us/unreal-engine/visibility-and-occlusion-culling-in-unreal-engine
  96. Unreal Engine Forums. "Occlusion Culling Sync Points cause poor performance in even simple scenes". https://forums.unrealengine.com/t/occlusion-culling-sync-points-cause-poor-performance-in-even-simple-scenes/750084
  97. Epic Games. "Visibility and Occlusion Culling in Unreal Engine". https://dev.epicgames.com/documentation/en-us/unreal-engine/visibility-and-occlusion-culling-in-unreal-engine
  98. Epic Games. "Visibility and Occlusion Culling Reference". https://dev.epicgames.com/documentation/en-us/unreal-engine/visibility-and-occlusion-culling-reference-in-unreal-engine
  99. Epic Games. "Visibility and Occlusion Culling in Unreal Engine". https://dev.epicgames.com/documentation/en-us/unreal-engine/visibility-and-occlusion-culling-in-unreal-engine
  100. Epic Games. "Software Occlusion Queries for Mobile". https://dev.epicgames.com/documentation/en-us/unreal-engine/software-occlusion-queries-for-mobile?application_version=4.27
  101. Umbra 3D. "Introduction to Occlusion Culling". Medium. https://medium.com/@Umbra3D/introduction-to-occlusion-culling-3d6cfb195c79
  102. Umbra 3D. "Introduction to Occlusion Culling". Medium. https://medium.com/@Umbra3D/introduction-to-occlusion-culling-3d6cfb195c79
  103. Umbra 3D. "Introduction to Occlusion Culling". Medium. https://medium.com/@Umbra3D/introduction-to-occlusion-culling-3d6cfb195c79
  104. Wayline. "Indie Game Devs: Stop Sacrificing Performance for Visuals! (Occlusion Culling Guide)". https://www.wayline.io/blog/indie-game-devs-occlusion-culling-guide
  105. Unity Technologies. "Occlusion Culling - Unity Manual". https://docs.unity3d.com/Manual/OcclusionCulling.html
  106. NVIDIA. "Chapter 6. Hardware Occlusion Queries Made Useful". GPU Gems 2. https://developer.nvidia.com/gpugems/gpugems2/part-i-geometric-complexity/chapter-6-hardware-occlusion-queries-made-useful
  107. Epic Games. "Visibility and Occlusion Culling in Unreal Engine". https://dev.epicgames.com/documentation/en-us/unreal-engine/visibility-and-occlusion-culling-in-unreal-engine
  108. Meta for Developers. "Occlusion Culling for Mobile VR - Part 1: Developing a Custom Solution". https://developers.meta.com/horizon/blog/occlusion-culling-for-mobile-vr-developing-a-custom-solution/
  109. Vulkan Guide. "Compute based Culling". https://vkguide.dev/docs/gpudriven/compute_culling/
  110. NVIDIA. "Introduction to Turing Mesh Shaders". https://developer.nvidia.com/blog/introduction-turing-mesh-shaders/
  111. MDPI. "Occlusion Handling for Mobile AR Applications in Indoor and Outdoor Scenarios". https://www.mdpi.com/1424-8220/23/9/4245
