Quite a while ago, Nvidia presented VXGI and VXAO (voxel-based global illumination and ambient occlusion). The concept in a nutshell: compute a voxelization of the scene, build a clipmap from it, and evaluate voxel cone tracing against that clipmap.
This is a really good idea because it requires essentially no preprocessing of the scene, and with some tradeoffs it scales to very large scenes (not just prepared/limited domains).
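To make the idea concrete, here is a minimal CPU sketch of the cone-tracing half of the technique. Everything here is invented for illustration (this is not Nvidia's API): each "level" is a dict of occupied voxel cells, with level i holding voxels twice the size of level i - 1, standing in for the levels of a scene voxelization.

```python
# Minimal CPU sketch of voxel cone tracing -- illustrative only, not VXGI itself.
# levels[i] maps integer (x, y, z) cells to occlusion in [0, 1]; voxels at
# level i have edge length 2**i, like progressively coarser mips.

def sample(levels, level, p):
    """Occlusion at world point p, read from the given level of detail."""
    size = 2 ** level                       # voxel edge length at this level
    cell = tuple(int(c // size) for c in p)
    return levels[level].get(cell, 0.0)

def cone_trace(levels, origin, direction, aperture, max_dist, step=0.5):
    """March a cone front-to-back, accumulating occlusion.

    As distance grows the cone radius grows, so we sample ever coarser
    levels -- the core trick that lets one trace cover a wide solid angle
    cheaply instead of casting many individual rays."""
    occlusion, t = 0.0, step
    while t < max_dist and occlusion < 1.0:
        radius = aperture * t               # cone half-width at distance t
        level = min(len(levels) - 1, int(radius).bit_length())  # crude log2
        p = tuple(o + d * t for o, d in zip(origin, direction))
        a = sample(levels, level, p)
        occlusion += (1.0 - occlusion) * a  # front-to-back alpha compositing
        t += step * (2 ** level)            # larger steps through coarse voxels
    return min(occlusion, 1.0)
```

In a real shader the `sample` call would be a `textureLod` fetch against the clipmap, but the accumulation loop has the same shape.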
Unfortunately it's difficult to implement yourself for two reasons:
- Voxelizing a real-time scene on the GPU is fairly slow (even with various geometry-shader trickery).
- Casting into that voxelization is far from trivial: the most efficient traversal algorithms are hard to implement, and tend to be inefficient or outright impossible to express in a shader.
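As an example of the traversal problem, below is a sketch of the classic 3D DDA grid walk (after Amanatides & Woo), which visits exactly the cells a ray passes through. It is straightforward in CPU code like this, but the per-axis branching and data-dependent loop are exactly the kind of thing that maps poorly onto shaders.

```python
# 3D DDA grid traversal (Amanatides & Woo style) -- easy on the CPU,
# awkward to express efficiently in a shader.
import math

def dda_cells(origin, direction, max_steps):
    """Return the integer grid cells a ray passes through, in order."""
    cell = [math.floor(c) for c in origin]
    step, t_max, t_delta = [], [], []
    for o, d, c in zip(origin, direction, cell):
        if d > 0:
            step.append(1);  t_max.append((c + 1 - o) / d); t_delta.append(1 / d)
        elif d < 0:
            step.append(-1); t_max.append((c - o) / d);     t_delta.append(-1 / d)
        else:  # ray is parallel to this axis; never cross its boundaries
            step.append(0);  t_max.append(math.inf);        t_delta.append(math.inf)
    cells = [tuple(cell)]
    for _ in range(max_steps):
        axis = t_max.index(min(t_max))   # cross the nearest cell boundary
        cell[axis] += step[axis]
        t_max[axis] += t_delta[axis]
        cells.append(tuple(cell))
    return cells
```

And this is the *simple* case of a uniform grid; hierarchical structures like clipmaps or sparse octrees need level transitions on top of it.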
VXGI is also vendor-specific, which makes it unsuitable for a cross-vendor target like WebGL.
I think things don't have to be this way. It should not be a major obstacle for GPU vendors to provide primitives that make implementing VXGI/VXAO easy and as fast as the GPU can afford. All that's required is a standardized API to feed a scene (without duplication, if possible) into a voxel clipmap builder, and a standardized API to query that data structure.
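To illustrate the shape such an API could take, here is a toy sketch of a viewer-centered voxel clipmap with a build-side call and a query-side call. All names here (`VoxelClipmap`, `inject`, `query`) are invented for this sketch; no existing GPU API looks like this.

```python
# Hypothetical clipmap API sketch -- every name here is invented, this is not
# an existing GPU or vendor API. Each clip level is a fixed-resolution grid
# centered on the viewer; level i covers twice the extent of level i - 1.

class VoxelClipmap:
    def __init__(self, center, resolution, base_extent, n_levels):
        self.center = center
        self.resolution = resolution        # cells per axis, same at every level
        self.base_extent = base_extent      # world size covered by level 0
        self.levels = [dict() for _ in range(n_levels)]  # sparse cell -> occlusion

    def _cell(self, level, p):
        size = self.base_extent * 2 ** level / self.resolution
        return tuple(int((c - o) // size) for c, o in zip(p, self.center))

    def inject(self, level, p, value):
        """Build side: write one voxelized sample into a clip level."""
        self.levels[level][self._cell(level, p)] = value

    def query(self, p):
        """Query side: read p from the finest level whose extent contains it."""
        for level, data in enumerate(self.levels):
            half = self.base_extent * 2 ** level / 2
            if all(abs(c - o) < half for c, o in zip(p, self.center)):
                return data.get(self._cell(level, p), 0.0), level
        return 0.0, None                    # outside even the coarsest level
```

The point of standardizing both halves is that the driver is free to pick whatever internal layout (sparse textures, bricks, octrees) suits the hardware, as long as `inject`-style feeding and `query`-style sampling behave the same everywhere.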
There are already some forays into standardized data structure handling with sparse textures. Would it be expecting too much for GPU vendors to get together and start designing a standard API for one of the most flexible and appealing approaches to GI/AO?