
Vulkan for XR tutorial using Simple Engine #335

Open

gpx1000 wants to merge 7 commits into KhronosGroup:main from gpx1000:Vulkan-OpenXR

Conversation


gpx1000 commented Mar 17, 2026

Add engine integration sections for spatial computing chapters 10-20
Add "Incorporating into the Engine" sections demonstrating practical implementation of Variable Rate Shading, Canted Displays, CAVE Architecture, Warp and Blend, LightField Theory, Plenoptic Synthesis, Scene Understanding, ML Inference, Semantic Occlusion, Platform Divergence, and Spatial Diagnostics CI/CD. Include C++ code examples for renderer_core.cpp, renderer_rendering.cpp, and engine.cpp showing feature enablement, pipeline setup, and compute passes using Vulkan 1.4 and Slang shaders.

This is a WIP currently, doing this to help test the CI scripts.

gpx1000 added 7 commits March 16, 2026 22:37
…igation

Add navigation structure for new OpenXR-Vulkan spatial computing guide covering 20 chapters: OpenXR-Vulkan handshake, runtime-owned swapchains, dynamic rendering, predictive frame loop, late latching, action spaces, Slang shaders, quad-views with foveated rendering, variable rate shading, canted displays, CAVE architecture, warp and blend, lightfield theory, plenoptic synthesis, scene understanding, ML inference, semantic occlusion, platform divergence, and spatial diagnostics. Each chapter includes introduction, technical deep-dives, and engine integration sections.
…ration

Replace openxr.hpp C++ wrapper with native openxr.h C API throughout xr_context. Add OpenXR::OpenXR alias for openxr_loader target compatibility. Fix memory_pool.cpp source path and Assets copy destination. Load Vulkan extension function pointers explicitly via xrGetInstanceProcAddr. Implement proper LUID extraction from OpenXR-selected physical device using VkPhysicalDeviceIDProperties. Replace XrGuidMSFT with XrUuidMSFT for spatial mesh structure. Update all OpenXR handle types from C++ wrappers to native C types (XrInstance, XrSession, XrSpace, XrAction, XrSwapchain). Convert all OpenXR API calls from method-style to function-style. Initialize views vector with proper XR_TYPE_VIEW structure type.
…llation

Add vcpkg PATH resolution using VCPKG_INSTALLATION_ROOT. Add error checking after simple_engine and openxr-loader installation steps. Implement vcpkg caching in GitHub Actions workflow to speed up CI builds. Consolidate vcpkg environment setup into separate step with binary cache configuration.
…Properties2/getFeatures2

Replace manual pNext chaining with type-safe templated Vulkan-Hpp methods for querying PhysicalDeviceIDProperties and PhysicalDevicePresentBarrierFeaturesNV. Update code examples in hardware alignment, CAVE architecture, and renderer_core.cpp to use getProperties2<>/getFeatures2<> with compile-time type specification and .get<>() accessor pattern.
@SaschaWillems
Collaborator

That's a pretty comprehensive PR 👍🏻

Will prob. take some time to review this, but at first glance: Can we lower the baseline from Vulkan 1.4 to 1.3? 1.4 didn't add that much, but it has very limited support on mobile.


SaschaWillems left a comment


Did a first quick review. I don't have much experience with OpenXR, but this looks good to me and was easy to follow.

Aside from the comments, I'd like to see links to existing documentation like the tutorial, the spec and/or the guide, making it easier for people to discover relevant adjacent information.

* **Dynamic Resolution**: To maintain a steady 90Hz, the engine might need to drop the resolution of peripheral views instantly.
* **No Rigid State**: By using `vkCmdBeginRendering` directly on our XR swapchain images, we avoid the heavy overhead and rigid state of legacy `VkRenderPass` and `VkFramebuffer` objects.
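To make the dynamic-resolution point concrete, a minimal CPU-side policy might look like the sketch below. This is illustrative only; `peripheralScale`, the thresholds, and the scale bounds are hypothetical and not part of the PR:

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical helper: pick a resolution scale for peripheral views so the
// frame stays inside the 90 Hz budget (~11.1 ms). If the last GPU frame ran
// long, shrink the peripheral views quickly; if there is headroom, grow back.
inline float peripheralScale(float lastGpuMs, float currentScale,
                             float refreshHz = 90.0f) {
    const float budgetMs = 1000.0f / refreshHz;       // ~11.11 ms at 90 Hz
    if (lastGpuMs > budgetMs * 0.95f)                 // over ~95% of budget
        currentScale *= 0.85f;                        // drop resolution fast
    else if (lastGpuMs < budgetMs * 0.75f)            // plenty of headroom
        currentScale *= 1.05f;                        // recover slowly
    return std::clamp(currentScale, 0.5f, 1.0f);      // never below half res
}
```

A real engine would feed this from GPU timestamp queries and round the result to swapchain-friendly dimensions.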

== Enabling Vulkan 1.4 Features in RAII

This paragraph talks about enabling Vulkan 1.4 features, but the code snippet only uses 1.2 and 1.3 feature structs.


[source,cpp]
----
XrSwapchainCreateInfo createInfo{XR_TYPE_SWAPCHAIN_CREATE_INFO};

As the existing tutorial uses designated initializers, I think these new chapters should also make use of that.
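For illustration, the swapchain creation above could be written with designated initializers along these lines. The field values and the `viewConfig` variable here are placeholders, not taken from the PR:

```cpp
const XrSwapchainCreateInfo createInfo{
    .type        = XR_TYPE_SWAPCHAIN_CREATE_INFO,
    .createFlags = 0,
    .usageFlags  = XR_SWAPCHAIN_USAGE_COLOR_ATTACHMENT_BIT |
                   XR_SWAPCHAIN_USAGE_SAMPLED_BIT,
    .format      = VK_FORMAT_R8G8B8A8_SRGB,                 // placeholder format
    .sampleCount = 1,
    .width       = viewConfig.recommendedImageRectWidth,    // hypothetical variable
    .height      = viewConfig.recommendedImageRectHeight,
    .faceCount   = 1,
    .arraySize   = 1,
    .mipCount    = 1,
};
```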

Variable Rate Shading allows us to decouple the shading rate from the pixel rate. Instead of running a fragment shader once for every pixel, we can tell the hardware to run it once for a group of pixels (e.g., a 2x2 or 4x4 tile). This "coarse shading" significantly reduces the **ALU** (Arithmetic Logic Unit) load on the GPU, which is often the primary bottleneck in complex spatial shaders.

We will focus on two primary strategies:
1. **Static Peripheral Optimization**: Reducing shading rates at the edges of the lens where optical distortion and chromatic aberration already obscure detail.

This is missing a line-break. Without that, the bullet points won't be rendered as a list.

1. **Static Peripheral Optimization**: Reducing shading rates at the edges of the lens where optical distortion and chromatic aberration already obscure detail.
2. **Dynamic Gaze-Driven Shading**: Using eye-tracking telemetry to center the high-resolution region wherever the user is currently looking.

By the end of this chapter, you will understand how to integrate the **VK_KHR_fragment_shading_rate** extension (now part of the Vulkan 1.4 core) into your spatial pipeline and how to manage shading rate maps that update in real-time.

I don't think that's correct. VK_KHR_fragment_shading_rate has never been promoted to core, see https://docs.vulkan.org/refpages/latest/refpages/source/VK_KHR_fragment_shading_rate.html


As the following chapters talk about fragment density map: Did you mean that extension instead (which still is EXT not KHR)?

----
// Updating the shading rate image based on projected gaze coordinates
void updateShadingRateMap(vk::raii::CommandBuffer& cmd, const glm::vec2& gazeCenter) {
// We use a simple compute shader to fill a small R8_UINT texture.

Where can readers find that compute shader? It feels like it contains an important part of this chapter, so maybe put the relevant code parts in here?
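For reference, the kind of logic such a rate-map fill would implement can be sketched on the CPU as below. This is illustrative only — `buildRateMap`, its thresholds, and the falloff radii are not taken from the PR's shader:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// CPU reference of a gaze-centered rate-map fill (illustrative). Each texel
// of the R8_UINT map covers a tile of the render target; texels near the
// projected gaze point keep full rate (1x1), farther ones coarsen. The rate
// encoding follows VK_KHR_fragment_shading_rate's packed attachment form:
// (log2(width) << 2) | log2(height), so 0x0 = 1x1, 0x5 = 2x2, 0xA = 4x4.
std::vector<uint8_t> buildRateMap(uint32_t w, uint32_t h,
                                  float gazeU, float gazeV) {
    std::vector<uint8_t> map(w * h);
    for (uint32_t y = 0; y < h; ++y) {
        for (uint32_t x = 0; x < w; ++x) {
            const float du = (x + 0.5f) / w - gazeU;
            const float dv = (y + 0.5f) / h - gazeV;
            const float r  = std::sqrt(du * du + dv * dv);
            uint8_t rate = 0x0;              // 1x1: full shading rate
            if (r > 0.40f)      rate = 0xA;  // 4x4 in the far periphery
            else if (r > 0.20f) rate = 0x5;  // 2x2 in the mid periphery
            map[y * w + x] = rate;
        }
    }
    return map;
}
```

The GPU version would be the same radial test per invocation, writing into the shading rate image before the main pass.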


== HDR10 and 10-bit Color: Beyond the Screen

Desktop GPUs have the bandwidth to support **High Dynamic Range (HDR)**. In XR, this is essential for physical realism. A virtual sun should be thousands of times brighter than a virtual candle. Without HDR, the engine must "tone-map" these values into the same narrow range, losing the feeling of scale.
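The loss of scale described above can be made concrete with a minimal sketch. The Reinhard operator below is illustrative, not the engine's actual tone mapper:

```cpp
#include <cassert>

// Without an HDR output path, a tone mapper must squeeze scene luminance
// into [0, 1). The simple Reinhard operator l / (1 + l) illustrates the
// problem: a "sun" a thousand times brighter than a "candle" ends up less
// than twice as bright on screen.
inline float reinhard(float luminance) {
    return luminance / (1.0f + luminance);
}
```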

Is this really something that applies to desktop (only)? From my experience, HDR support on mobile is actually better than on most desktop systems, and the packed format mentioned below shouldn't have that much of a bandwidth penalty.


== Leveraging High PCIe Bandwidth

Because we have a high-speed **PCIe 4.0** or **5.0** link between the CPU and GPU, we can push massive amounts of data per frame without bottlenecking.

Might want to rephrase this a bit. "Massive" makes it sound like PCIe has lots of bandwidth, but with high framerates, PCIe can quickly become a bottleneck. Esp. with GPUs/systems that have fewer PCIe lanes.

* **Uncompressed LightFields**: We can upload gigabytes of plenoptic data directly to the GPU without stalling the render loop.
* **Ray Tracing**: We can afford the high overhead of building **Acceleration Structures** (AS) for complex, high-poly environments every frame, enabling real-time spatial reflections and global illumination.
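To put numbers on that bandwidth, a back-of-the-envelope per-frame budget can be computed as follows. The link speed and refresh rate are illustrative assumptions, not measured figures:

```cpp
#include <cassert>

// Per-frame upload budget (illustrative): a PCIe 4.0 x16 link peaks around
// ~32 GB/s; at 90 Hz that leaves well under half a gigabyte per frame, and
// real transfers share the link with everything else, so even "massive"
// uploads need careful budgeting.
inline double bytesPerFrame(double linkGBps, double refreshHz) {
    return linkGBps * 1e9 / refreshHz;   // bytes available per frame
}
```

At 32 GB/s and 90 Hz this works out to roughly 355 MB per frame as a theoretical ceiling.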

[source,cpp]

This code block was prob. supposed to be part of the HDR chapter above.


== The Concept: Tile-Based Rendering (TBR)

Most mobile GPUs (like those from Qualcomm, Arm, or Imagination) use a **Tile-Based Rendering** architecture. Instead of rendering the whole screen at once, the GPU splits the screen into tiny tiles (e.g., 16x16 or 32x32 pixels). It then processes each tile entirely within high-speed, low-power on-chip memory (**LDS** or **SRAM**) before writing the final result back to main memory.
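The tile subdivision described above is easy to quantify with a small sketch. The resolution and tile size below are illustrative, not tied to any particular GPU:

```cpp
#include <cassert>
#include <cstdint>

// How many tiles a tiler walks for a given eye buffer (illustrative).
// Each tile is shaded entirely in on-chip memory, so traffic to main
// memory is limited to the final resolve of each tile.
inline uint32_t tileCount(uint32_t width, uint32_t height, uint32_t tile) {
    const uint32_t tx = (width  + tile - 1) / tile;   // round up partial tiles
    const uint32_t ty = (height + tile - 1) / tile;
    return tx * ty;
}
```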

Could (or should?) mention dynamic rendering local read.


SaschaWillems commented Mar 30, 2026

CMake setup for the source fails for me, but maybe I'm doing something wrong (I didn't find build instructions for the OpenXR engine variation). When running CMake from the attachments/openxr_engine folder, I get the following errors:

CMake Error at V:/Vulkan-Docs-Site/Vulkan-Tutorial-review/attachments/simple_engine/CMake/Findtinygltf.cmake:80 (file):
  file failed to create symbolic link
  'V:/Vulkan-Docs-Site/Vulkan-Tutorial-review/attachments/openxr_engine/build/_deps/tinygltf-src/nlohmann/json.hpp':
  A required privilege is not held by the client.

Call Stack (most recent call first):
  CMakeLists.txt:22 (find_package)


CMake Error at C:/Program Files/CMake/share/cmake-4.1/Modules/FindPackageHandleStandardArgs.cmake:227 (message):
  Could NOT find KTX (missing: KTX_INCLUDE_DIR KTX_LIBRARY)
Call Stack (most recent call first):
  C:/Program Files/CMake/share/cmake-4.1/Modules/FindPackageHandleStandardArgs.cmake:591 (_FPHSA_FAILURE_MESSAGE)
  V:/Vulkan-Docs-Site/Vulkan-Tutorial-review/attachments/simple_engine/CMake/FindKTX.cmake:55 (find_package_handle_standard_args)
  CMakeLists.txt:23 (find_package)

