
Re: [Public WebGL] GL_RENDERER string needed for performant apps



The UA string became a problem because browsers exposed different APIs for the same features. Developers responded by writing multiple versions of their code and then switching based on the UA string (instead of using feature detection, as they should have). WebGL is different because the API is uniform across vendors: developers will not need to write multiple versions of their code, and the likelihood that people will switch code paths based on GL_RENDERER is much lower.
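
(To make the contrast concrete, a minimal TypeScript sketch; the vendor-prefixed requestAnimationFrame names are just an illustrative case of "different APIs for the same feature", not anything taken from this thread.)

    // Fragile: branch on the browser's name and hope the guess stays true.
    if (/AppleWebKit/.test(navigator.userAgent)) {
        // load the "WebKit" code path -- breaks as soon as the guess is wrong
    }

    // Robust: ask whether the capability itself exists, whatever it is called.
    const raf =
        window.requestAnimationFrame ||
        (window as any).webkitRequestAnimationFrame ||
        (window as any).mozRequestAnimationFrame;
    if (raf) {
        raf.call(window, () => { /* draw a frame */ });
    }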

WebGL does expose differing levels of optional feature support, but there is already a standard feature detection framework that is easier to use than GL_RENDERER. Optional feature support is not well correlated with GPU vendor, so doing the wrong thing and using GL_RENDERER for feature detection would actually be quite difficult.
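
(A minimal TypeScript sketch of the two kinds of query, assuming a canvas with id "c"; the extensions named are arbitrary examples, and WEBGL_debug_renderer_info is only shown as the place the renderer string would come from if it is exposed.)

    const gl = (document.getElementById("c") as HTMLCanvasElement).getContext("webgl");
    if (gl) {
        // Feature detection: ask for the optional capability directly.
        const floatTextures = gl.getExtension("OES_texture_float") !== null;
        console.log("float textures:", floatTextures, "all:", gl.getSupportedExtensions());

        // The renderer string identifies the GPU, but says nothing reliable
        // about which optional features actually work.
        const dbg = gl.getExtension("WEBGL_debug_renderer_info");
        const renderer = dbg
            ? gl.getParameter(dbg.UNMASKED_RENDERER_WEBGL)
            : gl.getParameter(gl.RENDERER);
        console.log("renderer:", renderer);
    }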


On Tue, Jan 14, 2014 at 1:33 PM, Boris Zbarsky <bzbarsky@mit.edu> wrote:

On 1/14/14 4:30 PM, Florian Bösch wrote:
GPU vendor and model will allow you to correlate bad performance
behavior to a particular GPU segment, make tweaks to improve things for
that segment.

Sure.  I understand the use cases for this.

I'm just saying that the concern about a workflow like the above, where the tweaks totally break things for that segment and the page author never notices, is a real concern.  It happens all the time with the UA string.  The problem is less severe if this is basically used as a blacklist; if we have indications that blacklist-style use is more likely here than it is with the UA string (which is often treated as a whitelist), I'd love to know what those are.

-Boris

