
Re: [Public WebGL] Re: WebGL spec modifications for D3D



Yep.

I'm sure that D3D renderers can take advantage of the depth-only+beauty
approach - most modern game engines use this kind of technique.  So
there must be a way that Z invariance can be guaranteed in D3D.

It is possible that D3D somehow magically guarantees this without
'invariant' or 'ftransform' - in which case it would be safe for a WebGL
implementation based on D3D to simply ignore those directives.

But that is most certainly NOT the case for true OpenGL drivers (trust
me...I've seen what happened before ftransform was added to Cg!) - so
it is utterly, 100% essential that we have either 'invariant' or
'ftransform' or some other cast-iron guarantee that shader optimization
won't change even the least significant bit of (at a minimum) the matrix
operations and clipping that drive gl_Position in the vertex shader.
That's the minimum acceptable capability - effectively the same
guarantee as ftransform.
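
For anyone who hasn't seen it, here's roughly what that guarantee looks
like at the GLSL ES level - just a sketch, with attribute and uniform
names of my own invention:

    // A minimal sketch of the 'invariant' directive in a GLSL ES
    // vertex shader.  The attribute/uniform names are made up for
    // illustration, not from any spec.
    var vertexShaderSource = [
      "invariant gl_Position;  // forbid optimisations that perturb gl_Position",
      "attribute vec3 a_position;",
      "uniform mat4 u_modelViewProjection;",
      "void main() {",
      "  gl_Position = u_modelViewProjection * vec4(a_position, 1.0);",
      "}"
    ].join("\n");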

Invariance in XYZ is really the barest-minimum requirement.  There are
other nice algorithms that rely on invariance in things like texture
coordinates.  This is so critical that I'd MUCH prefer we simply not
support hardware that doesn't have a viable GLSL implementation than
screw up the most vital algorithm in all modern 3D engines for the
sake of (let's be honest) crappy Intel graphics that don't have the
horsepower to do 'interesting' stuff anyway.  They could support
OpenGL and shader invariance if they were motivated to do so - failure
to do it is just laziness...well, maybe it's time to punish that.

If we can do no better, let us at least preserve some sort of invariance
directive and provide a testable flag to tell us when it's not actually
guaranteed so that developers who care can fall back on something else
or simply punt on useless hardware.
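
I'm not proposing a specific API - but something of this shape would do
('gl' is an assumed WebGLRenderingContext, and the extension name below
is pure invention on my part, not anything in the spec):

    // Purely hypothetical -- "WEBGL_guaranteed_invariance" is a name
    // I just made up to illustrate the kind of testable flag I mean.
    var invarianceGuaranteed =
        gl.getExtension("WEBGL_guaranteed_invariance") !== null;

    if (!invarianceGuaranteed) {
      // 'invariant' may be silently ignored here: fall back to a
      // single-pass renderer, or refuse to run on this hardware.
    }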

Just in case anyone out there doubts the importance of this - let me
provide a brief primer on the subject (feel free to skip reading at this
point if you already grok why I'm so upset about this) :

One of the most important modern techniques is to render a super-simple
depth-only pass first - then to render a "beauty pass" second.  The
depth-only pass uses the simplest possible shaders and position-only
vertex attributes, and writes only to the Z buffer, to save video RAM
bandwidth.  The beauty pass has everything turned up to the max - and
relies on the fact that most graphics hardware can skip Z-fail
fragments without even running the shader.  It means that no matter
how complex your fragment shader is, you pay the price of running it
at most once per screen pixel.  That's a massively important thing!
If you have occlusion culling (I guess we don't in WebGL?), then you
can save even more by spotting objects that didn't hit any pixels
during the depth-only pass - and not rendering those at all during
beauty.  You can also render simple 'proxy' geometry for geometrically
complex objects to see whether they can be skipped during beauty.  The
net result is a massive speedup for sophisticated renderers with
complex models.
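
In WebGL terms the two passes boil down to something like this - a
sketch only, where 'gl', the two programs and drawScene() are assumed
helpers, not real API:

    // Sketch of the depth-only + beauty scheme with plain WebGL 1.0
    // state calls.  depthProgram, beautyProgram and drawScene() are
    // assumed helpers.
    function renderFrame() {
      // Pass 1: depth only.  Cheapest shaders, position-only
      // attributes, no colour writes -- just lay down the Z buffer.
      gl.useProgram(depthProgram);
      gl.colorMask(false, false, false, false);
      gl.depthMask(true);
      gl.depthFunc(gl.LESS);
      drawScene(true /* positionOnly */);

      // Pass 2: beauty.  The EQUAL depth test rejects every fragment
      // whose Z doesn't match pass 1 bit-for-bit, so each visible
      // pixel runs the expensive fragment shader exactly once.
      gl.useProgram(beautyProgram);
      gl.colorMask(true, true, true, true);
      gl.depthMask(false);       // Z buffer is already final
      gl.depthFunc(gl.EQUAL);    // this is where invariance bites
      drawScene(false);
    }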

But if there is a difference in even the least significant bit of Z
between those two passes, the image will break up and be unusable.  To
get the benefits of this approach, you need different vertex shaders
for the depth-only and beauty passes, because you don't pass texture
coordinates, colors, normals, etc to the depth-only shader - and this
change in the shader's source code will cause the GLSL compiler to
apply different optimisations, which in turn produces roundoff
differences...which breaks the entire thing.
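
Concretely, the two vertex shaders look something like the sketch below
(all names invented); declaring gl_Position invariant in BOTH shaders
is what pins the two results to the same bits:

    // The two vertex shaders differ in source, so without 'invariant'
    // the compiler is free to schedule the u_mvp multiply differently
    // in each.
    var depthOnlyVS = [
      "invariant gl_Position;",
      "attribute vec3 a_position;",
      "uniform mat4 u_mvp;",
      "void main() {",
      "  gl_Position = u_mvp * vec4(a_position, 1.0);",
      "}"
    ].join("\n");

    var beautyVS = [
      "invariant gl_Position;",
      "attribute vec3 a_position;",
      "attribute vec3 a_normal;",
      "attribute vec2 a_texCoord;",
      "uniform mat4 u_mvp;",
      "varying vec3 v_normal;",
      "varying vec2 v_texCoord;",
      "void main() {",
      "  v_normal = a_normal;",
      "  v_texCoord = a_texCoord;",
      "  // must produce bit-identical results to depthOnlyVS:",
      "  gl_Position = u_mvp * vec4(a_position, 1.0);",
      "}"
    ].join("\n");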

Multipass lighting also requires perfect-to-the-least-significant-bit
alignment between passes - and that's the way to get sophisticated
multiple light source rendering to happen cheaply.
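
The state setup there is the same trick again - a sketch, with the
lights array and the helpers assumed:

    // Sketch of additive multipass lighting: one pass per light,
    // blended onto the framebuffer.  lights, setLightUniforms() and
    // drawScene() are assumed helpers.
    gl.enable(gl.BLEND);
    gl.blendFunc(gl.ONE, gl.ONE);  // sum each light's contribution
    gl.depthMask(false);
    gl.depthFunc(gl.EQUAL);        // again: demands bit-exact gl_Position
    for (var i = 0; i < lights.length; i++) {
      setLightUniforms(lights[i]);
      drawScene(false);
    }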

But increasingly sophisticated algorithms that I'm using these days
rely on invariance in other parts of the vertex shader.  The way I
apply bullet holes and blood splatter to geometry in first-person
shooters, for example, relies on invariance in texture calculations
too.  I'm sure other developers are finding equally sneaky ways to
exploit shaders that also rely on LSB-perfect multipass.
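
For what it's worth, GLSL lets you ask for the same guarantee on a
varying - a sketch along these lines (the names are mine):

    // 'invariant' can be applied to varyings too, pinning (say) a
    // projected decal coordinate across passes.
    var decalVS = [
      "invariant varying vec2 v_decalCoord;",
      "attribute vec3 a_position;",
      "uniform mat4 u_mvp;",
      "uniform mat4 u_decalMatrix;",
      "void main() {",
      "  // project the vertex into decal space -- must match",
      "  // bit-for-bit in every pass that samples the decal:",
      "  vec4 p = u_decalMatrix * vec4(a_position, 1.0);",
      "  v_decalCoord = p.xy / p.w;",
      "  gl_Position = u_mvp * vec4(a_position, 1.0);",
      "}"
    ].join("\n");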

  -- Steve


Mark Callow wrote:
> Removal of ftransform (because it was there for matching fixed function
> transformation) yet still wanting to support multi-pass algorithms was
> one of the drivers for the addition of the invariant keyword.
>
> Regards
>
>     -Mark
>
>
> On 09/07/2010 07:26, Kenneth Russell wrote:
>> On Sat, Jun 26, 2010 at 5:17 PM, Steve Baker <steve@sjbaker.org> wrote:
>>> YIKES!!!
>>>
>>> Lack of shader invariance can be a major headache in many common
>>> multipass algorithms.  Without the 'invariant' keyword, we're going to
>>> need something like the 'ftransform' function (which I think was
>>> obsoleted in GLSL 1.4 and GLES 1.0).
>>>
>>> Without EITHER 'invariant' OR 'ftransform', some rather important
>>> algorithms become impossible - and that would be really bad news!
>> Sorry for the long delay in replying.
>>
>> The removal of the invariant enforcement was recommended by
>> TransGaming based on its not being implementable on D3D9. Perhaps
>> someone from TG could comment more on the exact issue and what is
>> possible to implement. I agree that its removal seems to preclude
>> multi-pass rendering algorithms in WebGL 1.0.
>>
>> -Ken
>>
