
Re: [Public WebGL] vxgi/vxao

On Mon, Dec 11, 2017 at 6:38 PM, Corentin Wallez <cwallez@google.com> wrote:
Also, standardizing an algorithm like VXGI/AO isn't a thing.
Why not?
Also, VXGI/AO is extremely expensive, and I know of only one game that ships it: "The Tomorrow Children".
It is expensive, but GPUs get faster every year.

More efficient techniques that are still good enough have been published recently. One that I like is "Real-Time Global Illumination Using Precomputed Illuminance Composition with Chrominance Compression".

Like many algorithms in this category, it relies on an extremely heavy precomputation that can take considerable time (minutes, hours, days). It's not something you do in realtime or online at all. In addition, as is common to most approaches that use a spherical harmonic approximation, it struggles with glossy reflections and can usually only express "slightly glossy". And while the algorithm is certainly more efficient than the somewhat "brute force" approach of actually sampling/tracing the surroundings, it has these limitations: it only applies to mostly static scenes with few, if any, dynamic interactions, and the important aspect of glossy reflection (which makes up a huge part of all materials) usually doesn't work right.
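To illustrate why low-order spherical harmonics top out at "slightly glossy", here's a toy Python sketch of mine (not from any of the papers discussed): it projects a rotationally symmetric Phong-style lobe cos^n(theta) onto the first three zonal (Legendre) bands and evaluates the band-limited reconstruction at the lobe's peak. A diffuse-like lobe (n = 1) survives almost intact; a sharp glossy lobe (n = 50) loses most of its peak.

```python
import math

def legendre(l, x):
    # Legendre polynomials P0..P2, enough for an order-2 band limit
    if l == 0: return 1.0
    if l == 1: return x
    return 0.5 * (3.0 * x * x - 1.0)

def bandlimited_peak(n, bands=3, samples=200000):
    """Project f(x) = max(x, 0)**n (x = cos(theta)) onto Legendre bands
    0..bands-1 and evaluate the reconstruction at the lobe peak x = 1."""
    coeffs = []
    for l in range(bands):
        acc = 0.0
        for i in range(samples):  # midpoint rule on [-1, 1]
            x = -1.0 + (i + 0.5) * (2.0 / samples)
            acc += max(x, 0.0) ** n * legendre(l, x)
        integral = acc * (2.0 / samples)
        coeffs.append((2 * l + 1) / 2.0 * integral)  # Legendre normalization
    return sum(c * legendre(l, 1.0) for l, c in enumerate(coeffs))

# both lobes have a true peak value of 1.0
print(bandlimited_peak(1))   # diffuse-like lobe: peak nearly preserved
print(bandlimited_peak(50))  # sharp glossy lobe: peak mostly lost
```

With three bands the diffuse-like lobe reconstructs to roughly 1.06 of its true peak, while the cos^50 lobe recovers less than 10% of it; higher-frequency lobes simply aren't representable with so few bands.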


You can characterize Global Illumination algorithms on a spectrum:
  1. Prebake everything -> for obvious reasons, we're trying not to do that
  2. Prebake some things and use a crude transport approximation that brings with it a host of drawbacks -> we are currently here
  3. Come up with a formulation that allows querying the geometry (or a proxy for it) -> we will be there in the future
In the latter category of querying/tracing the scene there are two approaches. The first is relatively classical raytracing, where rays are intersected against the actual scene geometry. People have built hardware acceleration for that. The drawback is that it only gives you perfectly shiny reflections; if you want glossy or diffuse, you essentially end up path tracing, which introduces noise, and usually too much noise to be pleasant at 120 Hz. The second approach is to come up with some kind of proxy structure that usually doesn't work well for perfectly shiny reflections, but works well across a range from diffuse to fairly glossy. In that latter category of proxy structures there are, to my knowledge, two distinct flavors:
  1. Put the entire scene into a sparse voxel structure and its respective mipmap -> this has been demonstrated to work well, but it's obviously limited in domain size. In addition, sparse voxel structures are expensive to traverse.
  2. Replace the mipmap with a clipmap and get rid of the sparsity. This trades off precision the further away features are, but it does work for large domains.
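As a rough illustration of the clipmap trade-off (a sketch of mine, with made-up extents and level counts, not from the post): each clip level covers a larger region around the viewer at the same texel count, so the effective voxel size doubles per level and precision falls off with distance. Picking the level for a sample point might look like:

```python
import math

BASE_EXTENT = 16.0  # world-space half-extent of level 0 (assumed value)
NUM_LEVELS = 6      # clip level l covers a half-extent of BASE_EXTENT * 2**l

def clip_level(distance):
    """Pick the finest clip level whose region contains a point at
    `distance` from the clipmap center."""
    if distance <= BASE_EXTENT:
        return 0
    level = math.ceil(math.log2(distance / BASE_EXTENT))
    return min(level, NUM_LEVELS - 1)

def voxel_size(level, resolution=64):
    # same texel count per level, so the voxel size doubles each level
    return (2.0 * BASE_EXTENT * (2 ** level)) / resolution

print(clip_level(10.0), clip_level(100.0))  # near -> fine, far -> coarse
```

Nearby geometry lands in the fine levels, distant geometry in progressively coarser ones, which is exactly the precision-for-domain-size trade mentioned above.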
The one really unsolved problem here is that it requires rasterizing the scene into the voxel structure in 3D, which is seriously expensive. Tracing into the clipmap is simple in concept, but a bit awkward to implement.
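To make the "simple in concept" part concrete, here's a toy 1D cone march of mine (not code from the post, and collapsed to one dimension): the cone's footprint grows with distance, each sample is taken from the mip/clip level whose voxels roughly match that footprint, and occlusion is composited front to back.

```python
import math

def build_mips(level0):
    """Average-down a 1D occupancy texture (assumed power-of-two length)
    into a mip chain."""
    mips = [level0]
    while len(mips[-1]) > 1:
        prev = mips[-1]
        mips.append([(prev[2 * i] + prev[2 * i + 1]) / 2.0
                     for i in range(len(prev) // 2)])
    return mips

def cone_trace(mips, origin, aperture, max_dist):
    """Front-to-back occlusion accumulation along a cone.
    `aperture` is tan(half-angle), so footprint diameter = 2*aperture*dist."""
    occlusion, dist = 0.0, 1.0  # start one voxel out to avoid self-occlusion
    while dist < max_dist and occlusion < 0.99:
        diameter = max(2.0 * aperture * dist, 1.0)
        level = min(int(math.log2(diameter)), len(mips) - 1)
        mip = mips[level]
        idx = min(int((origin + dist) / (2 ** level)), len(mip) - 1)
        occlusion += (1.0 - occlusion) * mip[idx]  # front-to-back compositing
        dist += diameter                           # step by the footprint
    return occlusion

# a "wall" of occupied voxels some distance from the origin
tex = [0.0] * 48 + [1.0] * 16
print(cone_trace(build_mips(tex), origin=0.0, aperture=0.2, max_dist=64.0))
```

The occlusion saturates once the widening cone reaches the wall. The awkward parts in a real implementation (3D addressing, anisotropic voxels, leaking between clip levels) are exactly what this sketch leaves out.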


I'm trying to illustrate that there is a convergence going on: everything is heading toward "be able to query a more or less realtime representation of the scene, in realtime". The more specialized approaches all bring a lot of drawbacks with them, and they're not what we'll be using 10-20 years from now.