My only misgiving about using #extension rather than #pragma is that it seems incompatible with the idea that calling getExtension() alone would be enough to enable the functionality. If #extension were used, I suppose adding "#extension WEBGL_debug_shader_precision : enable" would also be required in every shader to enable the emulation. But maybe that would be better for consistency.
From: Florian Bösch <email@example.com>
Sent: Thursday, November 6, 2014 4:05 PM
To: Olli Etuaho
Subject: Re: [Public WebGL] WEBGL_debug_shader_precision extension proposal

On Thu, Nov 6, 2014 at 2:56 PM, Olli Etuaho <firstname.lastname@example.org> wrote:
> WEBGL_shader_ast is a neat idea, but that would require a large spec if it were a WebGL extension, and it would add possibly unwanted constraints on how the parsing infrastructure in browsers should work.

Maybe vendor folks could chime in here. I'd really like something like it; how feasible is it?

> Implementing that as a JS library might actually be less effort. In this case the additions to the API are minimal. Also, one important goal here is to make using the emulation as easy as possible. To that end, just a few added lines of code is much better than integrating a big library into a JS app.
That's true, but it's only true if you don't have WEBGL_shader_ast :). If you had it, it'd be a few dozen lines of JS you could just drop in to modify WebGLRenderingContext.prototype.shaderSource.

> I came up with an alternative for how toggling the emulation on a shader-by-shader basis could work, by the way: using a "#pragma webgl_disable_precision_emulation" directive in shaders could be simpler to both understand and implement. Thoughts on this?
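For illustration, the drop-in approach described above might be sketched as follows. This is purely hypothetical: `emulatePrecision` is a stand-in for a real precision-emulation rewriter, and `installShaderSourceWrapper` is an invented helper name.

```javascript
// Hypothetical sketch: wrap shaderSource on the context prototype so
// every shader source is rewritten before it reaches the driver.
function emulatePrecision(source) {
  // Stub: a real pass would rewrite float operations to emulate
  // lower precision. Here we only mark the source as processed.
  return '/* precision emulation applied */\n' + source;
}

function installShaderSourceWrapper(proto) {
  const original = proto.shaderSource;
  proto.shaderSource = function (shader, source) {
    // Forward the rewritten source to the original implementation.
    return original.call(this, shader, emulatePrecision(source));
  };
}

// In a browser, one would presumably call:
//   installShaderSourceWrapper(WebGLRenderingContext.prototype);
```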
Since it is an extension, how about "#extension WEBGL_debug_shader_precision : disable"?
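A per-shader toggle along either of the lines proposed in this thread could be detected like this. This is only a sketch: both directive spellings follow the proposals above and are not normative, and `precisionEmulationDisabled` is an invented name.

```javascript
// Hypothetical sketch: check a shader source for either of the proposed
// opt-out directives before applying precision emulation.
const DISABLE_DIRECTIVES = [
  /^\s*#pragma\s+webgl_disable_precision_emulation\b/m,
  /^\s*#extension\s+WEBGL_debug_shader_precision\s*:\s*disable\b/m,
];

function precisionEmulationDisabled(source) {
  return DISABLE_DIRECTIVES.some((re) => re.test(source));
}

const src = [
  '#extension WEBGL_debug_shader_precision : disable',
  'void main() { gl_FragColor = vec4(1.0); }',
].join('\n');
// precisionEmulationDisabled(src) → true
```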