
Re: [Public WebGL] Re: WebGL spec modifications for D3D



I've been following this thread for a bit and just remembered an interesting thing about D3D (which you may already know): the spec is the refrast (Microsoft's reference rasterizer). Maybe that would hold some insight here. I haven't looked at the refrast for any D3D variant in a while, so I can't say what its behavior is.

- Phil

On Tue, Jul 13, 2010 at 12:29 PM, Daniel Koch <daniel@transgaming.com> wrote:
Hi folks,

The problem is that there is nothing like the "invariant" keyword in D3D9 HLSL (or even D3D10 HLSL for that matter) that I am aware of.
In practice, there must be some form of invariance guaranteed by D3D9 (especially for position), since I know of many games which use multi-pass rendering algorithms that work just fine.  The difficulty lies in figuring out exactly what is guaranteed by D3D9, since we've been unable to find any public documentation or discussion of these issues.  However, even if there is position invariance, this does not provide a mechanism to toggle invariance on and off on a per-variable basis, as is required in GLSL.
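For concreteness, here is what the per-variable mechanism looks like on the GLSL ES side - the thing that has no HLSL counterpart. (A minimal sketch; the attribute and uniform names are illustrative.)

```glsl
// GLSL ES 1.00 vertex shader.  Only gl_Position is declared invariant,
// so the compiler must produce the same position bits as any other
// shader that declares it invariant and performs the same calculation,
// while remaining free to optimize v_texcoord however it likes.
invariant gl_Position;

attribute vec4 a_position;
attribute vec2 a_texcoord;
uniform mat4 u_modelViewProjection;
varying vec2 v_texcoord;

void main() {
    v_texcoord = a_texcoord;
    gl_Position = u_modelViewProjection * a_position;
}
```

(GLSL ES 1.00 also provides the global switch `#pragma STDGL invariant(all)`, but it is the per-variable form that has no obvious D3D9 analogue.)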

The closest thing we've been able to find is a SIGGRAPH article on D3D10 from Microsoft. (http://download.microsoft.com/download/f/2/d/f2d5ee2c-b7ba-4cd0-9686-b6508b5479a1/Direct3D10_web.pdf) They briefly allude to this problem in section 5.4:
"We considered several solutions for how to specify invariance requirements in the source code itself, for example, requiring that subroutines be compiled in an invariant fashion even if they are inlined. However, our search ultimately led us to the more traditional route of providing selectable, well-defined optimization levels that must also be respected by the driver compiler."
My assumption is that D3D9 must have had similar requirements.

The version of Cg that was open-sourced is quite archaic at this point.  The ANGLE compiler is open-sourced (https://code.google.com/p/angleproject/) and is based on the 3DLabs GLSL compiler.  It compiles GLSL ES to HLSL9, which is then compiled to D3D9 byte-code using D3DXCompileShader.  A future extension to ANGLE could be to generate D3D9 byte-code directly.  However, even D3D9 byte-code is still an IL, and there is no guarantee that the hardware executes those instructions exactly (and I know there are implementations which do compile/optimize this further).

The issue raised about ftransform in Cg and GLSL and position invariance was primarily an issue when using fixed function and shaders together, and it was indeed a very common problem. This is also why the "position_invariant" option was added to the ARB_vertex_program assembly extension.  Examples of this occurring in shader-only applications are much more difficult to come by.
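For comparison, in the assembly extension the toggle is a single program option. (Sketch only; the one instruction shown is illustrative.)

```
!!ARBvp1.0
# With ARB_position_invariant set, the program must not write
# result.position itself; instead the GL computes clip-space position
# exactly as fixed function would, guaranteeing position invariance
# when shader and fixed-function passes are mixed.
OPTION ARB_position_invariant;
MOV result.color, vertex.color;
END
```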

Ideally, an example program which exhibits invariance issues in WebGL (or GLSL, or GLSL ES) would be available to demonstrate that this is actually a real problem for WebGL.  If one existed, we could verify whether ANGLE on D3D9 has such problems, or whether it just works.

Steve Baker: do you have any such examples, or would you be able to put one together which demonstrates this problem?

We are continuing to investigate the guarantees provided by D3D in this area and a concrete test case showing such issues would be invaluable for this.

Thanks,
Daniel

On 2010-07-10, at 1:35 PM, Steve Baker wrote:


Having slept on it, I wonder if there is another way.  I believe that
the following three statements are all true:

* The nVidia Cg compiler is open-sourced:
http://developer.nvidia.com/object/cg_compiler_code.html
* The Cg compiler can also compile GLSL.
* D3D accepts 'machine code' shader programs as an alternative to HLSL
and Cg.

If so, could we not take the OpenSourced nVidia compiler, turn on the
GLSL option and produce a back-end to allow it to generate D3D shader
machine code?

It's probably a stretch - and one or more of my assumptions might be
incorrect - but wouldn't that allow us to run fully compliant GLSL
shaders under D3D with all the wonders of invariance?   ANGLE must be
doing something of the sort to convert GLSL for D3D already.

 -- Steve.

On 2010-07-09, at 9:58 PM, Steve Baker wrote:

Yep.

I'm sure that D3D renderers can take advantage of the depth-only+beauty
approach - most modern game engines use this kind of technique.  So
there must be a way that Z invariance can be guaranteed in D3D.

It is possible that D3D somehow magically guarantees this without
'invariant' or 'ftransform' - in which case it would be safe for a WebGL
implementation based on D3D to simply ignore those directives.

But that is most certainly NOT the case for true OpenGL drivers (trust
me...I've seen what happened before ftransform was added to Cg!) - so
it is utterly, 100% essential that we have either 'invariant' or
'ftransform' or some other cast-iron guarantee that shader optimization
won't change even the least significant bit of (at a minimum) the matrix
operations and clipping that drive gl_Position in the vertex shader.
That's the minimum acceptable capability - effectively the same
guarantee as ftransform.
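For anyone who hasn't used it: in desktop GLSL (before its deprecation in 1.40), ftransform was precisely that one-line guarantee:

```glsl
// Desktop GLSL vertex shader.  ftransform() is defined to transform
// gl_Vertex exactly as the fixed-function pipeline (and every other
// shader calling ftransform) would - bit-identical, no matter what
// the optimizer does to the rest of the shader.
void main() {
    gl_Position = ftransform();
}
```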

Invariance in XYZ is really a barest-minimum requirement.  There are
other nice algorithms that rely on invariance in things like texture
coordinates.   This is so critical that I'd MUCH prefer that we simply
not support hardware that doesn't have a viable GLSL implementation than
to screw up the most vital algorithm in all modern 3D engines for the
sake of (let's be honest) crappy Intel graphics that quite honestly
don't have the horsepower to do 'interesting' stuff anyway.  They could
support OpenGL and shader invariance if they were motivated to do so -
failure to do it is just laziness...well, maybe it's time to punish that.

If we can do no better, let us at least preserve some sort of invariance
directive and provide a testable flag to tell us when it's not actually
guaranteed so that developers who care can fall back on something else
or simply punt on useless hardware.

Just in case anyone out there doubts the importance of this - let me
provide a brief primer on the subject (feel free to skip reading at this
point if you already grok why I'm so upset about this) :

One of the most important modern techniques is to render a super-simple
depth-only pass first - then to render a "beauty pass" second.  The
depth-only pass uses the simplest possible shaders, position-only vertex
attributes and writes only to the Z buffer to save video RAM bandwidth.
The beauty pass has everything turned up to the max - and relies on the
fact that most graphics hardware can skip over Z-fail fragments without
even running the shader.  It means that no matter how complex your
fragment shader, you pay the price of rendering each screen pixel only once.
That's a massively important thing!  If you have occlusion culling (I
guess we don't in WebGL?), then you can make even more savings by
spotting objects that didn't hit any pixels during the depth-only pass -
and not rendering those at all during beauty...also render simple
'proxy' geometry for geometrically complex objects to see if they can be
skipped during beauty.   The net result is a massive speedup for
sophisticated renderers with complex models.

But if there is a difference in even the least significant bit of Z
between those two passes, the image will break up and be unusable.  To
get the benefits of this approach, you need different vertex shaders
between depth-only and beauty passes because you don't pass texture
coordinates, colors, normals, etc to the depth-only shader - and this
change in the source code of the shader will result in different
optimisations happening in the GLSL compiler - which in turn will result
in roundoff errors...which screws up the entire thing.
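To make the failure mode concrete, here is a sketch of the two vertex shaders involved (names illustrative). Without the invariant declarations, nothing stops the compiler from, say, folding the matrix product differently in each shader once the extra varyings change what the optimizer sees:

```glsl
// Depth-only pass: position is the only input and output.
invariant gl_Position;
attribute vec4 a_position;
uniform mat4 u_mvp;
void main() {
    gl_Position = u_mvp * a_position;
}
```

```glsl
// Beauty pass: extra attributes and varyings give the optimizer a
// different view of the shader; 'invariant' forces the position math
// to produce the same bits as the depth-only pass anyway.
invariant gl_Position;
attribute vec4 a_position;
attribute vec3 a_normal;
attribute vec2 a_texcoord;
uniform mat4 u_mvp;
varying vec3 v_normal;
varying vec2 v_texcoord;
void main() {
    v_normal = a_normal;
    v_texcoord = a_texcoord;
    gl_Position = u_mvp * a_position;
}
```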

Multipass lighting also requires perfect-to-the-least-significant-bit
alignment between passes - and that's the way to get sophisticated
multiple light source rendering to happen cheaply.

But increasingly sophisticated algorithms that I'm using these days rely
on invariance in other parts of the vertex shader...the way I apply
bullet holes and blood splatter to geometry in first person shooters -
for example - relies on invariance in texture calculations too.  I'm
sure other developers are finding equally sneaky ways to exploit shaders
that also rely on LSB-perfect multipass.

 -- Steve


Mark Callow wrote:
Removal of ftransform (because it was there for matching fixed-function
transformation) while still wanting to support multi-pass algorithms was
one of the drivers for the addition of the invariant keyword.

Regards

   -Mark


On 09/07/2010 07:26, Kenneth Russell wrote:

On Sat, Jun 26, 2010 at 5:17 PM, Steve Baker <steve@sjbaker.org> wrote:


YIKES!!!

Lack of shader invariance can be a major headache in many common
multipass algorithms.  Without the 'invariant' keyword, we're going to
need something like the 'ftransform' function (which I believe was
deprecated in GLSL 1.40 and was never present in GLSL ES 1.00).

Without EITHER 'invariant' OR 'ftransform', some rather important
algorithms become impossible - and that would be really bad news!


Sorry for the long delay in replying.

The removal of the invariant enforcement was recommended by
TransGaming based on its not being implementable on D3D9. Perhaps
someone from TG could comment more on the exact issue and what is
possible to implement. I agree that its removal seems to preclude
multi-pass rendering algorithms in WebGL 1.0.

-Ken


---
     Daniel Koch -+-  daniel@transgaming.com  -+-  1 613.244.1111 x352 
Senior Graphics Architect  -+- TransGaming Inc.  -+- www.transgaming.com
          311 O'Connor St., Suite 300, Ottawa, Ontario, Canada, K2P 2G9
