Intel XeSS Upscaler Plugin Now Available in Unreal Engine
After announcing support for Unreal Engine in March, Intel has finally released XeSS as a plugin for Unreal Engine 4 and 5. The plugin allows developers to easily add XeSS to their Unreal Engine projects without having to manually integrate the XeSS SDK.
With XeSS on board, Unreal Engine is now compatible with all four major upscalers from Nvidia, AMD and Intel: DLSS, NIS, FSR and XeSS. Unreal Engine's own upscaling solution is also supported.
The XeSS plugin for Unreal Engine works with versions 4.26, 4.27 and 5.0. The plugin is only available on GitHub for now, but will be coming to the Unreal Engine Marketplace soon, much like the plugins for AMD's FSR and Nvidia's DLSS.
Intel’s plugin replaces Unreal Engine’s Temporal Anti-Aliasing (TAA) with XeSS, inserting the upscaler after the rasterization and lighting stages of the rendering pipeline are complete, at the start of the post-processing stage. This way, XeSS upscales only the necessary parts of the rendering pipeline, while other parts of the frame (such as the HUD) are rendered at native resolution for better image quality.
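The placement described above can be sketched abstractly. The snippet below is a minimal illustration of where a temporal upscaler sits relative to the other pipeline stages; the stage names and the 50% scale factor are my assumptions for illustration, not the plugin's actual API.

```python
def render_frame(output_res=(3840, 2160), scale=0.5):
    """Trace which resolution each pipeline stage works at (illustrative only)."""
    trace = []
    # Rasterization and lighting run at a reduced internal resolution.
    internal = (int(output_res[0] * scale), int(output_res[1] * scale))
    trace.append(("rasterization", internal))
    trace.append(("lighting", internal))
    # The upscaler slots in where TAA used to be, at the start of
    # post-processing, and outputs the full display resolution.
    trace.append(("xess_upscale", output_res))
    trace.append(("post_processing", output_res))
    # HUD/UI elements are drawn last, directly at native resolution,
    # so text and icons are never blurred by the upscale.
    trace.append(("hud", output_res))
    return trace

trace = render_frame()
```

Everything before the upscale works on a quarter of the pixels (at 50% scale), which is where the performance win comes from; everything after it, including the HUD, sees the full display resolution.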
For starters, Xe Super Sampling (XeSS) is Intel’s temporal upscaling solution and competes directly with AMD’s FidelityFX Super Resolution (FSR) and Nvidia’s Deep Learning Super Sampling (DLSS). From a design perspective, XeSS is closest to DLSS: both use an AI-trained network to upscale images. However, unlike DLSS, XeSS has different modes of operation for different GPU types.
These two modes include a “high-performance” version running on Intel’s XMX AI cores, found only in Arc Alchemist GPUs, and a DP4a fallback that runs on the shader cores of GPUs from Nvidia and AMD.
I don’t know much about these modes and their actual quality and performance differences, but I do know that the DP4a model uses a different trained network than the main version running on Intel’s XMX cores. Just because it’s different doesn’t mean it’s better or worse, but I wouldn’t be surprised if the DP4a version made some performance and visual sacrifices.
The DP4a path runs its INT8 operations on general-purpose GPU shader cores, which are much slower at this kind of work than Intel’s XMX cores. XMX hardware natively accelerates INT8 math, so those same operations are processed far more quickly.
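For readers unfamiliar with the instruction, DP4a is a dot product of two 4-element INT8 vectors accumulated into a 32-bit integer. The sketch below mirrors the semantics of that operation in plain Python; the function name and structure are mine for illustration, not Intel's kernel code.

```python
def dp4a(a, b, acc=0):
    """Dot product of four signed 8-bit pairs, accumulated into a 32-bit total.

    Mirrors the semantics of the GPU DP4a instruction: each element of
    a and b must fit in signed INT8 (-128..127); the products are summed
    and added to the running accumulator acc.
    """
    assert len(a) == len(b) == 4
    for v in list(a) + list(b):
        assert -128 <= v <= 127, "inputs must fit in signed INT8"
    return acc + sum(x * y for x, y in zip(a, b))

# Example: (1*5) + (2*6) + (3*7) + (4*8) = 70
result = dp4a([1, 2, 3, 4], [5, 6, 7, 8])
```

Packing four 8-bit multiplies into one instruction is what makes an INT8 network feasible on ordinary shader cores at all; dedicated matrix hardware like XMX simply executes many more of these multiply-accumulates per clock.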