
Re: [Public WebGL] WEBGL_debug_shader_precision extension proposal



I was under the impression that if you request mediump/lowp in a shader, you can get it; correct me if that's a wrong assumption.

If that is not the case, then an extension that ensures mediump/lowp behavior is very much welcome.
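For reference, here is roughly what requesting reduced precision looks like in a GLSL ES fragment shader (a minimal, made-up example; the spec only guarantees *at least* the requested precision, and implementations are allowed to compute at higher precision, which is what lets these bugs hide):

    // Minimal fragment shader requesting reduced precision (illustrative only).
    precision mediump float;            // default float precision for this shader

    uniform mediump vec2 u_resolution;  // hypothetical uniform
    varying lowp vec4 v_color;          // lowp requested explicitly

    void main() {
        // Desktop GPUs typically carry this out at full 32-bit precision anyway.
        mediump vec2 uv = gl_FragCoord.xy / u_resolution;
        gl_FragColor = v_color * vec4(uv, 1.0 - uv.x, 1.0);
    }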

There are a few suggestions I'd have:

On Thu, Nov 6, 2014 at 11:55 AM, Olli Etuaho <oetuaho@nvidia.com> wrote:

Hi all,


I've been prototyping a testing tool that emulates mediump and lowp precision computations on hardware which doesn't support them natively. It helps reveal many shader bugs that would otherwise go undetected. I've submitted a proposal for an extension that would expose this functionality:


https://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_debug_shader_precision/
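The core of the emulation is rounding intermediate results so they carry no more precision than the GLSL ES minimum for mediump (a relative precision of about 2^-10). The following is only a rough sketch written for this mail, not the actual code in the proposal or in ANGLE; the function name and the rounding strategy are made up:

    // Hypothetical helper: quantize a highp value to roughly the minimum
    // mediump precision (10 fraction bits in the significand).
    // The constant 10.0 could be made configurable, see question 3a below.
    float emulate_mediump(float x) {
        if (x == 0.0) return 0.0;
        float e = floor(log2(abs(x)));   // exponent of x
        float scale = exp2(10.0 - e);    // keep ~10 bits below the leading bit
        return floor(x * scale + 0.5) / scale;
    }

The shader translator can then wrap operands and results of expressions in calls to a helper like this, which is roughly the kind of rewriting the extension performs.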


Points in favor of exposing this functionality as a WebGL extension:

-There are a lot of app bugs in this area, likely caused in part by a lack of easy ways to test for them. Some way of addressing that is clearly needed. In my tests, the emulation as specified has revealed a large majority of precision-related bugs.

-The implementation fits into the existing architecture browsers have for translating shaders.

-As opposed to exposing this only as part of the developer tools in the browser UI, an extension enables automated testing for app developers, and fully web-based development environments could also take advantage of it.


Points against exposing this functionality as a WebGL extension:

-This could in principle be just a JavaScript library, which would make it available equally on all browsers - but the easiest way to implement it would be to Emscripten-compile the shader translator part of ANGLE. If the code is going to be in ANGLE regardless, it seems sensible to expose it through the browsers using it.

-The emulation is not guaranteed to exactly match how a specific device implements floating point computation, so it does not completely remove the need to test shaders on a variety of hardware. The current proposal also avoids requiring complex code transformations, which makes the emulation imperfect.


To me, it seems clear that the positives strongly outweigh the negatives. Open questions still include:


1) Should the extension also emulate mediump and lowp integers?

2) Should an effort be made to get rid of the remaining restrictions that concern unary operators and compound assignments where the l-value expression has side effects? It is possible to make the following kinds of transformations (a small before/after sketch follows this list):

- x++ into a function call to "float postincrement(inout float x) { float y = emulate(x); x = emulate(y + 1.0); return y; }".

- x op= y into a function call to "float compoundop(inout float x, in float y) { x = emulate(emulate(x) op emulate(y)); return x; }", even in the case where evaluating the original l-value expression x has side effects.

3) Should the extension have more configurability, for example:

3a) The ability to change the number of bits in the emulated formats. This would be feasible anywhere from the minimum requirements of mediump up to almost 32-bit IEEE precision. Personally, I don't think the added benefit is worth the implementation cost.

3b) The ability to toggle emulation on and off for each shader compiled in the context, or toggle emulation on and off for mediump and lowp individually. I don't have a strong opinion on this either way.
3c) The ability to set the level of emulation, from emulating only the most error-prone operations to full emulation. This would trade accuracy for performance.
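To illustrate the transformations in question 2, a statement like the hypothetical one below (variable names made up for this example) could be rewritten by the translator as follows:

    // Original shader statement:
    color.r = brightness++;

    // After transformation, using the postincrement helper from question 2:
    color.r = postincrement(brightness);

Because the helper takes its parameter as inout and applies emulate() both to the value it returns and to the value it writes back, the reduced precision affects both the read and the write of the increment.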


Comments?

-

Regards, Olli Etuaho, NVIDIA