
Re: [Public WebGL] vxgi/vxao



The WebGL WG isn't the place where new features like this will be designed, especially if they impact hardware design. Also, standardizing an algorithm like VXGI/VXAO isn't really a thing. Sparse textures/buffers are different because they expose the hardware's virtual memory features in a way that allows many different algorithms to be built on top.

Also, VXGI/VXAO is extremely expensive, and I know of only one shipping game that uses it: "The Tomorrow Children". More efficient techniques that are still good enough have been published recently. One that I like is "Real-Time Global Illumination Using Precomputed Illuminance Composition with Chrominance Compression".

On Sun, Dec 10, 2017 at 4:53 AM, Florian Bösch <pyalot@gmail.com> wrote:
Nvidia presented the concept of vx* (gi/ao) quite a while ago. In a nutshell, the idea is to voxelize the scene, build a clipmap from the voxelization, and evaluate voxel cone tracing against that clipmap.
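For concreteness, here is a minimal sketch of the cone-tracing step against such a clipmap, written as plain TypeScript rather than shader code; the VoxelClipmap interface and its sample() method are assumptions made purely for illustration, not any existing API:

// Illustrative CPU-side sketch of voxel cone tracing against a clipmap.
// In a real renderer this loop runs in a fragment/compute shader; the
// clipmap type and its sample() method below are assumptions for illustration.

type Vec3 = [number, number, number];

interface VoxelClipmap {
  levels: number;        // number of nested clip levels
  baseVoxelSize: number; // voxel edge length of the finest level, in world units
  // Returns pre-filtered radiance + opacity at the given position and level.
  sample(p: Vec3, level: number): { rgb: Vec3; a: number };
}

function add(a: Vec3, b: Vec3): Vec3 { return [a[0] + b[0], a[1] + b[1], a[2] + b[2]]; }
function scale(a: Vec3, s: number): Vec3 { return [a[0] * s, a[1] * s, a[2] * s]; }

// March one cone from `origin` along the unit direction `dir` with the given
// half-angle, accumulating radiance and occlusion front to back.
function traceCone(clipmap: VoxelClipmap, origin: Vec3, dir: Vec3,
                   halfAngle: number, maxDistance: number) {
  let radiance: Vec3 = [0, 0, 0];
  let occlusion = 0;
  // Start one voxel out to avoid self-intersection with the emitting surface.
  let t = clipmap.baseVoxelSize;

  while (t < maxDistance && occlusion < 0.95) {
    const coneRadius = Math.tan(halfAngle) * t;
    // Pick the clip level whose voxel size matches the cone footprint.
    const level = Math.min(
      clipmap.levels - 1,
      Math.max(0, Math.log2(coneRadius / clipmap.baseVoxelSize)));
    const p = add(origin, scale(dir, t));
    const s = clipmap.sample(p, level);

    // Front-to-back compositing.
    radiance = add(radiance, scale(s.rgb, (1 - occlusion) * s.a));
    occlusion += (1 - occlusion) * s.a;

    // Step proportionally to the cone footprint so the cost stays logarithmic.
    t += Math.max(coneRadius, clipmap.baseVoxelSize);
  }
  return { radiance, occlusion };
}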

This is a really good idea, as it requires essentially no preprocessing of the scene and, with some tradeoffs, scales to very large scenes (not just prepared/limited domains).

Unfortunately it's difficult to implement yourself for two reasons:
  • Voxelizing a real-time scene on the GPU is kinda slow (even if you do various trickery with the geometry shader; see the sketch after this list).
  • Casting into that voxelization is less than trivial: the most efficient traversal algorithms are difficult to implement, inefficient, or impossible to express in a shader.
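As a rough illustration of what that geometry shader trickery amounts to, here is a CPU-side sketch of the usual dominant-axis projection voxelization of a single triangle. It assumes vertices are already expressed in voxel-grid coordinates, and it is not conservative (it samples cell centers), so thin features can be missed; the GPU version does the axis selection in a geometry shader and the write via imageStore.

// Illustrative sketch of dominant-axis projection voxelization: each triangle
// is rasterized along the axis its normal points at most strongly, and each
// covered cell is marked in a 3D grid. Plain TypeScript, for illustration only.

type Vec3 = [number, number, number];

function voxelizeTriangle(grid: Set<string>, gridSize: number,
                          v0: Vec3, v1: Vec3, v2: Vec3): void {
  // Triangle normal (not normalized; only relative magnitudes matter).
  const e1: Vec3 = [v1[0] - v0[0], v1[1] - v0[1], v1[2] - v0[2]];
  const e2: Vec3 = [v2[0] - v0[0], v2[1] - v0[1], v2[2] - v0[2]];
  const n: Vec3 = [e1[1] * e2[2] - e1[2] * e2[1],
                   e1[2] * e2[0] - e1[0] * e2[2],
                   e1[0] * e2[1] - e1[1] * e2[0]];

  // Dominant axis = the component of the normal with the largest magnitude.
  const ax = Math.abs(n[0]), ay = Math.abs(n[1]), az = Math.abs(n[2]);
  const axis = ax > ay && ax > az ? 0 : ay > az ? 1 : 2;
  const u = (axis + 1) % 3, v = (axis + 2) % 3; // the two projection axes

  // Barycentric denominator of the projected triangle; zero means degenerate.
  const d = (v1[v] - v2[v]) * (v0[u] - v2[u]) + (v2[u] - v1[u]) * (v0[v] - v2[v]);
  if (d === 0) return;

  // 2D bounding box of the projected triangle, in voxel units.
  const minU = Math.floor(Math.min(v0[u], v1[u], v2[u]));
  const maxU = Math.ceil(Math.max(v0[u], v1[u], v2[u]));
  const minV = Math.floor(Math.min(v0[v], v1[v], v2[v]));
  const maxV = Math.ceil(Math.max(v0[v], v1[v], v2[v]));

  for (let i = minU; i <= maxU; i++) {
    for (let j = minV; j <= maxV; j++) {
      const pu = i + 0.5, pv = j + 0.5;
      // Barycentric point-in-triangle test in the projected plane.
      const a = ((v1[v] - v2[v]) * (pu - v2[u]) + (v2[u] - v1[u]) * (pv - v2[v])) / d;
      const b = ((v2[v] - v0[v]) * (pu - v2[u]) + (v0[u] - v2[u]) * (pv - v2[v])) / d;
      const c = 1 - a - b;
      if (a < 0 || b < 0 || c < 0) continue;
      // Reconstruct the coordinate along the dominant axis from barycentrics.
      const w = a * v0[axis] + b * v1[axis] + c * v2[axis];
      const cell = [0, 0, 0];
      cell[u] = i; cell[v] = j; cell[axis] = Math.floor(w);
      if (cell.every(x => x >= 0 && x < gridSize)) {
        grid.add(cell.join(',')); // stand-in for imageStore() into a 3D texture
      }
    }
  }
}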
VXGI is also vendor specific, which makes it unsuitable for a cross-vendor standard like WebGL.

-----

I think things don't have to be this way. It should not be a major obstacle for GPUs to provide primitives that make implementing VXGI/VXAO easy and as fast as the hardware allows. All that's required is a standardized API to feed a scene (without duplicating it, if possible) into a voxel clipmap builder, and a standardized API to query that data structure.
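To make the shape of such an API concrete, here is a purely hypothetical sketch from the application side; none of these interfaces exist in WebGL or any proposed extension, and every name is invented for illustration:

// Purely hypothetical sketch of what a standardized voxel-clipmap API might
// look like to the application. None of these interfaces exist in WebGL or
// any extension; every name here is invented for illustration.

interface VoxelClipmapBuilderDesc {
  levels: number;          // number of nested clip levels
  resolution: number;      // voxels per level edge, e.g. 128
  baseVoxelSize: number;   // finest voxel edge length in world units
}

interface VoxelClipmapBuilder {
  // Feed geometry that already lives on the GPU, without duplicating it:
  // the builder consumes the same vertex/index buffers used for rendering.
  addMesh(vertexBuffer: WebGLBuffer, indexBuffer: WebGLBuffer,
          triangleCount: number, modelMatrix: Float32Array): void;

  // Re-voxelize around a new center (e.g. the camera) each frame.
  build(centerWorldPos: [number, number, number]): VoxelClipmapHandle;
}

interface VoxelClipmapHandle {
  // Bound into a shader like a sampler; cone/ray queries against it would be
  // exposed as built-in shader functions rather than hand-written traversal.
  bind(textureUnit: number): void;
  dispose(): void;
}

// Hypothetical entry point, in the style of existing WebGL extensions.
declare function createVoxelClipmapBuilder(
  gl: WebGL2RenderingContext,
  desc: VoxelClipmapBuilderDesc): VoxelClipmapBuilder;

The important property in a sketch like this is that the builder consumes the buffers the application already renders from, and that queries against the resulting structure are exposed to shaders as built-ins rather than hand-written traversal code.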

There are already some forays into standardized data structure handling with sparse textures. Would it be expecting too much for GPU vendors to get together and start designing a standard API for one of the most flexible and appealing approaches to GI/AO?