
Re: [Public WebGL] WEBGL_debug_shader_precision extension proposal



If I understand correctly, your arguments boil down to (a) you've already written the code and (b) it's based on ANGLE, which is in C++, so you can shove it into Chrome. And maybe Firefox and Safari can use it too. Not IE. You're still requiring every browser team to invest time in this integration, and they're stuck maintaining it forever. Of course they can choose to never support it, since it's an extension, but then you aren't really achieving your goal.

I still don't see the point of making it an extension. Like Tibor said, extensions should be for core functionality. 

> The standard has gone on for years without any tools to address this fairly large problem.

While tools to help with this are great, the standard doesn't need to address it. A library will address it better, for all the reasons mentioned before. I'm really not seeing the argument that it should be an extension, except that you've already written some code. It seems like it should come down to whether it's the right thing to do. Maybe it is, but I haven't seen a good argument for why. Making it an extension won't make it any more likely to be used. In fact, the most likely way for something like this to be used would be to add it to the WebGL Inspector or something similar, so that the user doesn't have to change any code; they can just visit any page and click some button that's outside the page. You could also build it into a browser's devtools, but again that won't help because it will only be available in the browsers that implement it.

It seems like if you really want to reach the most devs you'd make it a library, because then any dev on any browser could use it.




On Fri, Nov 14, 2014 at 7:50 AM, Olli Etuaho <oetuaho@nvidia.com> wrote:
I don't think that being able to set a ton of settings is actually that useful. If a shader runs correctly on a configuration where the floating point values have the minimum number of bits allowed by the spec and where subnormal numbers are flushed to zero, it is very likely to also run correctly on a configuration with a slightly higher number of bits where subnormal numbers are not flushed to zero. So an explosion in the number of test configurations can be avoided by testing only in an environment which implements the minimums.
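
To make that concrete, here is a rough JavaScript sketch of the kind of rounding such an emulation performs; the function name and the mediump-minimum parameters (roughly 10 mantissa bits and a 2^-14 magnitude floor) are my own illustration, not the extension's actual implementation:

    // Hypothetical sketch: quantize x to `mantissaBits` bits of mantissa and
    // flush values smaller than 2^minExp to zero, the way a minimum-spec
    // GPU configuration might.
    function quantize(x, mantissaBits, minExp) {
      if (x === 0) return 0;
      var exp = Math.floor(Math.log2(Math.abs(x)));
      if (exp < minExp) return 0;              // flush subnormals to zero
      var scale = Math.pow(2, mantissaBits - exp);
      return Math.round(x * scale) / scale;    // drop the low mantissa bits
    }

A shader whose intermediate values all survive quantize(v, 10, -14) is very likely to also run correctly where more bits are available.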

I agree that not simulating the internals of built-in functions is an issue in the extension as it is currently specified, and I could still put some work into evaluating the significance of that, but we'll also have to remember that perfect is the enemy of good enough. The standard has gone on for years without any tools to address this fairly large problem. Now, when a solution comes along that would help with more than 90% of the issues, it seems like an odd response to me to say that this won't do and something better is needed. The working group agreed in a recent meeting that the extension is potentially a good way to expose this functionality for pragmatic reasons, even if it's also going to become a part of browser developer tools and even if ideally a JS library would be preferable.

That being said, I'm interested in hearing more about the debugging tool you're working on, it sounds like it can help developers fix related problems as well. Do you have it up somewhere on the web? Feel free to reply outside the thread if you think that's appropriate.


-Olli


From: Tibor den Ouden <tibordenouden@gmail.com>
Sent: Friday, November 14, 2014 3:53 PM
To: Olli Etuaho
Cc: Gregg Tavares; Jeff Gilbert; Mark Callow; Florian Bösch; Kenneth Russell; public webgl

Subject: Re: [Public WebGL] WEBGL_debug_shader_precision extension proposal
Being able to easily change the precision to inspect its effect on a computation is useful debug functionality.

But the next request could be to have control over the exact number of bits in the significand.
And then I would like to specify the number of guard bits used.
And then for sqrt() I would like to be able to specify a certain look-up table algorithm, because the GPU in device xyz uses that.
Change the treatment of subnormal numbers?
Emulate the 'exact' numerical behaviour of GPU xyz?
... etc.

I think this is part of a larger set of debug features related to numerical precision issues. 
I think extensions should be related to the core functionality of WebGL (running code from a web page on the GPU).

Everything related to enhancing the development process of a polished WebGL application is 'tool' functionality and should be done in a library if possible, in my opinion. That allows for more flexibility and prevents the WebGL spec from growing.
If you allow this functionality to be an extension, where do you draw the line?

I have nothing against a browser vendor implementing a full blown shader debugger in their browser.
But that is not part of the WebGL spec.

While attempting to implement a physics engine on the GPU using WebGL (http://www.borbitsoft.com/), I encountered some interesting numerical issues.

Due to my experiences with that project I started working on a shader debugger in JavaScript which allows the user to inspect every single variable by rewriting the shader on the fly, and which warns against common mistakes (at least mistakes that are common for me :-) ).
Specifying reduced precision is an interesting feature, as is selecting the 'exact' numerical
profile of target device xyz.
Also warning against constructs like vec3(float, vec3) (silent drop of the last component of the 2nd argument; see the sketch below)
Not setting gl_PointSize when rendering points
...
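
For anyone unfamiliar with that constructor pitfall, here is a minimal illustration (a made-up GLSL fragment in a JS string, not code from the debugger):

    // The vec3(float, vec3) pitfall: a GLSL constructor consumes only as many
    // components as it needs and silently drops the rest of the last argument.
    var snippet =
      'vec3 a = vec3(1.0, 2.0, 3.0);\n' +
      'vec3 b = vec3(0.5, a);  // legal: b == vec3(0.5, 1.0, 2.0), a.z dropped\n';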

This is done in between-projects time, so it will take a while.

Cheers,

Tibor


2014-11-14 11:10 GMT+01:00 Olli Etuaho <oetuaho@nvidia.com>:

I agree that having a tool for detecting other kinds of undefined shader behavior would be useful. There's multiple kinds:

-Math function limitations (asin, acos, atan, pow, log, log2, sqrt, inversesqrt, clamp, smoothstep all have these)

-Accessing textures inside non-uniform control flow

-Reads from uninitialized shader variables and missing return values

-Not writing to gl_Position

-Some details of integer computations
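
To illustrate a few of these (made-up GLSL excerpts, embedded in a JS string the way shader source usually is in WebGL):

    // Hypothetical fragment-shader excerpts showing three of the cases above:
    var fragmentSrc =
      // (1) mip selection via implicit derivatives is undefined inside
      //     non-uniform control flow:
      'if (v_alpha > 0.5) { color = texture2D(u_tex, v_uv); }\n' +
      // (2) a read from an uninitialized variable:
      'float t; color.r += t;\n' +
      // (3) a function that can reach its end without returning a value:
      'float f(float x) { if (x > 0.0) return x; }\n';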


But even with all of these possibilities of undefined behavior, most of the errors I've seen are still definitely related to precision. To put some data behind my claims that they're extremely widespread: they're in some three.js examples, some Blend4Web demos, every version of Babylon.js except the latest, the Turbulenz engine, some other proprietary WebGL content, and, as a guesstimate, half of the recent shader demos on glslsandbox and shadertoy. Of course, you won't ever see the precision issues unless you test on a variety of mobile hardware.


If someone is willing to put in the work to implement a more versatile shader debugging library, that would be useful, but I think the precision emulation can still stand on its own. Having it as an extension in browsers doesn't prevent building more things on top of it.


-Olli


From: Gregg Tavares <khronos@greggman.com>
Sent: Friday, November 14, 2014 3:31 AM
To: Olli Etuaho
Cc: Jeff Gilbert; Gregg Tavares; Mark Callow; Florian Bösch; Kenneth Russell; public webgl

Subject: Re: [Public WebGL] WEBGL_debug_shader_precision extension proposal
As an example of something that I'd want added to this and an argument for making it a library,

I'd like to see something that rewrote the shaders to find all the undefined behavior. For example, I just tried to use this shader on iOS:


It turns out it's calling pow(x, y) with x < 0, which is undefined according to the spec and therefore doesn't work on all GPUs.

That seems like something a shader-rewriting debug library could easily do, maybe by rewriting pow to some kind of expression that returns a different color based on mod(gl_FragCoord, 2.0) or something, such that the results hopefully stick out. Personally I've found these errors far more common than precision errors, but that might just be my experience.
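
A rough sketch of what such a rewrite could inject; the helper names and the fallback behavior are my own guesses at how a library might do it, not an existing implementation:

    // Hypothetical pow() guard a rewriting library might prepend to a shader.
    // GLSL ES 1.00 allows mutable globals, so a flag can record whether any
    // call hit undefined inputs.
    var powGuard =
      'bool debug_pow_undefined = false;\n' +
      'float debug_pow(float x, float y) {\n' +
      '  // pow(x, y) is undefined for x < 0, and for x == 0 with y <= 0\n' +
      '  if (x < 0.0 || (x == 0.0 && y <= 0.0)) {\n' +
      '    debug_pow_undefined = true;\n' +
      '    return 0.0;  // defined fallback so execution continues\n' +
      '  }\n' +
      '  return pow(x, y);\n' +
      '}\n';

    // The rewriter would rename pow -> debug_pow and append something like
    // this at the end of main(), so bad calls show up as a loud checker:
    //   if (debug_pow_undefined) {
    //     float c = mod(floor(gl_FragCoord.x) + floor(gl_FragCoord.y), 2.0);
    //     gl_FragColor = vec4(c, 1.0 - c, 1.0, 1.0);
    //   }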

It seems like other rewrites for debugging would be useful too. You could probably implement a full shader debugger this way. But if you make it an extension, no one else can augment it.

Also not every browser uses ANGLE AFAIK.

-gregg

On Wed, Nov 12, 2014 at 3:57 AM, Olli Etuaho <oetuaho@nvidia.com> wrote:
I do see the upsides of having this as a library, but as I stated before, the best way to implement said library would be to run ANGLE's shader compiler through emscripten. This is possible to do whether the extension is accepted or not, but from a purely technical perspective, it's much more work and overhead. As counterpoints to Gregg's message:

-The specification is fairly small, so making it exact is not very hard.
-The specification can still spend a while as a proposal/draft, so it can be more freely edited and the issues can be worked out.
-I'll be doing the implementation work in ANGLE. ANGLE maintainers already expressed that they'd likely be willing to accept the patches. After that, it's fairly trivial to expose the extension. I already have a working prototype for Chromium. So I hope it will require only a minimal amount of work from anyone else.

-I can't foresee any pressing need to extend and update the extension. The extension should be compatible with both ESSL 1.00 and ESSL 3.00 already in its current form. The need to do large updates to it would arise only if WebGL switched to a drastically different shading language.
-This is an extension for testing, so not having support in every browser is more of a slight inconvenience than something that would greatly hinder its usefulness.
-If it was a library, a spec like this would still be beneficial, so that what it does would be clear to the user.

I also can't stress enough how widespread precision-related shader bugs are. I've seen them frequently in content developed by professional and hobbyist developers alike, every once in a while even in content that was specifically written with mobile devices in mind. If you're still not convinced, I'll have to look at other alternatives besides the extension, but something needs to be done, and I think tooling like this is a big part of the answer.

-Olli
________________________________________
From: Jeff Gilbert <jgilbert@mozilla.com>
Sent: Wednesday, November 12, 2014 4:26 AM
To: Gregg Tavares
Cc: Mark Callow; Florian Bösch; Olli Etuaho; Kenneth Russell; public webgl
Subject: Re: [Public WebGL] WEBGL_debug_shader_precision extension proposal

I agree with Gregg.

I will add that if it's something that we feel is important enough as a working group, we could canonize the library and maintain it as part of our github repo.

-Jeff

----- Original Message -----
From: "Gregg Tavares" <khronos@greggman.com>
To: "Mark Callow" <khronos@callow.im>
Cc: "Florian Bösch" <pyalot@gmail.com>, "Jeff Gilbert" <jgilbert@mozilla.com>, "Olli Etuaho" <oetuaho@nvidia.com>, "Kenneth Russell" <kbr@google.com>, "public webgl" <public_webgl@khronos.org>
Sent: Tuesday, November 11, 2014 6:16:19 PM
Subject: Re: [Public WebGL] WEBGL_debug_shader_precision extension proposal

If this works just fine as a JavaScript library, why add it as an extension?

As an extension, what it does has to be exactly specified.
As an extension, it can't be upgraded without writing and proposing a new extension.
As an extension, it passes all the work to the browser vendors, who each need to implement it.

As a library, it can be updated and extended whenever.
As a library, it only needs one implementation and everyone can use it.
As a library, it can do whatever it wants; no spec needed.

From the discussion above, it doesn't seem like it needs to be an extension. It doesn't seem like there is some specific OpenGL functionality that needs to be exposed to make it possible. It also doesn't sound like a speed issue, given that the resulting shaders are up to 10x slower.

Also, as a library it should be easy to patch in, the same way the WebGL Inspector patches itself in, or the way various other libraries patch things like WebGLRenderingContext.prototype.compileShader.
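
For instance, a library could hook shader compilation with something like the following sketch; rewriteForDebug is a placeholder for whatever rewriting pass the library implements:

    // Patch the context prototype so every shader's source passes through the
    // rewriter. shaderSource (rather than compileShader) is the natural hook,
    // since that's the call that actually receives the source text.
    var originalShaderSource = WebGLRenderingContext.prototype.shaderSource;
    WebGLRenderingContext.prototype.shaderSource = function (shader, source) {
      originalShaderSource.call(this, shader, rewriteForDebug(source));
    };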




On Tue, Nov 11, 2014 at 2:23 PM, Mark Callow <khronos@callow.im> wrote:

>
>
> > On Nov 12, 2014, at 7:19 AM, Florian Bösch <pyalot@gmail.com> wrote:
> >
> > What's wrong with it is that it does not allow you to isolate an issue
> > with any of your shader code buried somewhere in your application.
> >
>
> You have to find either the buried shader code or the buried call to
> compileShader for that shader. These efforts may or may not be much
> different, depending on the structure of your code. I would not object to
> supporting both an API toggle and a pragma, getting the best of both worlds.
>
> Regards
>
>     -Mark
>