
Re: [Public WebGL] GL_RENDERER string needed for performant apps




On 15 Jan 2014, at 9:33 am, Florian Bösch <pyalot@gmail.com> wrote:

GPUs/drivers vary in regard to:
  • How many vertices they push through
  • How many attributes work fast
  • How many uniforms work fast
  • Vertex buffer layout (packed or 4-byte aligned)
  • If uniform arrays are fast
  • Texel fillrate
  • Coverage sampling
  • AA performance
  • Shader execution, and particular flavors of shader constructs
  • Texture lookup speed
  • Lookup speed regarding particular formats
  • Rendering to framebuffers (and in particular formats)
  • Impact of shader precision on shader execution speed
  • Speed of state changes
  • Texture upload speed
  • Framebuffer readback speed
  • Vertex buffer upload speed
  • Compressed texture lookup speed
  • etc.
There are many ways to make things run faster on some GPUs that are irrelevant to, or actively counterproductive on, other GPUs.

WebGL is so low-level that a developer can definitely write their content to perform well given the information above. At the same time, that knowledge can be useful to someone writing any complex Web application, given some understanding of the implementations (most of which are open source). Yet we typically don’t expose that information.
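As a minimal sketch of what acting on that information could look like, the snippet below measures one of the listed properties (texture upload speed) directly at runtime instead of inferring it from a renderer string. The function name and the 1024x1024 size are arbitrary, and gl.finish() only gives a rough bound, since drivers are free to defer or pipeline the work.

function measureTextureUploadMs(gl: WebGLRenderingContext, size = 1024): number {
  // Upload a blank RGBA texture of the given size and time it.
  const pixels = new Uint8Array(size * size * 4);
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  const t0 = performance.now();
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, size, size, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, pixels);
  gl.finish();  // rough synchronisation point; drivers may still defer some work
  const t1 = performance.now();
  gl.deleteTexture(tex);
  return t1 - t0;
}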

And the details in Google’s proposal really only mentioned evaluating final frame rate based on the card name. They didn’t mention doing anything in Maps code regarding the items listed above, other than disabling AA for some hardware. Maybe Jennifer could give more details?
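For concreteness, the kind of renderer-keyed decision being described might look like the sketch below, assuming a browser that exposes the WEBGL_debug_renderer_info extension (which is more or less what this thread is debating). The denylist entries are placeholders, not real data.

// Hypothetical list of renderers on which antialiasing is disabled.
const NO_AA_RENDERERS: RegExp[] = [/example gpu family/i];

function rendererString(): string | null {
  // Query the unmasked renderer on a scratch context, where the browser allows it.
  const gl = document.createElement('canvas').getContext('webgl');
  if (!gl) return null;
  const ext = gl.getExtension('WEBGL_debug_renderer_info');
  return ext ? (gl.getParameter(ext.UNMASKED_RENDERER_WEBGL) as string) : null;
}

function createContext(canvas: HTMLCanvasElement): WebGLRenderingContext | null {
  const renderer = rendererString();
  const antialias = !(renderer && NO_AA_RENDERERS.some(re => re.test(renderer)));
  return canvas.getContext('webgl', { antialias });
}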

If the state of the art in Google Maps is to query the GPU id and turn off either AA or WebGL entirely, then it seems the more important problem was that the user experience was degraded *before* they were able to measure performance. Since people don’t update their computers very often, one way to improve this would be to use local storage to remember what the measured performance was; then you’d only have to do a real test every so often. [NOTE: I’m completely aware how ridiculous it is for me to make suggestions like this - Google probably spent hundreds of engineer hours trying everything they could]
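A minimal sketch of that local-storage idea, assuming a hypothetical runBenchmark() that produces the expensive, possibly user-visible measurement; the key name and the one-month staleness window are arbitrary:

const PERF_KEY = 'webgl-perf-score';
const MAX_AGE_MS = 30 * 24 * 60 * 60 * 1000;  // re-test roughly once a month

function cachedScore(runBenchmark: () => number): number {
  try {
    const cached = JSON.parse(localStorage.getItem(PERF_KEY) || 'null');
    if (cached && Date.now() - cached.time < MAX_AGE_MS) {
      return cached.score;  // recent enough, skip the real test
    }
  } catch (e) {
    // Ignore corrupt or missing entries and fall through to a fresh run.
  }
  const score = runBenchmark();
  localStorage.setItem(PERF_KEY, JSON.stringify({ score, time: Date.now() }));
  return score;
}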

I’m not trying to make an argument one way or the other here (yet). I’m just asking the questions that will be asked of me if I attempt the very difficult task of exposing user information in Safari. I expect the first comment will be along the lines of “Hi, I noticed you just spent $10000 on a brand new Mac Pro!!”

Dean





On Tue, Jan 14, 2014 at 11:19 PM, Boris Zbarsky <bzbarsky@mit.edu> wrote:
On 1/14/14 5:11 PM, James Darpinian wrote:
The reason the UA string became a problem is that browsers exposed different APIs for the same features. Developers responded by writing multiple versions of their code and then switching based on the UA string (instead of using feature detection, as they should have).

That's the previous incarnation of the UA string problem, 10 years ago.

The current one in the mobile space is quite different.  It's a problem because sites assume all sorts of things like screen size and desired web page layout based on the UA string, not because they're changing which features they use.


Developers will not need to write multiple versions of their code

Wait.  The whole point of the discussion I've seen so far is that people want to use GL_RENDERER to decide whether to run particular code or somewhat different code or not even try WebGL at all.  Not because the API is different but because the implementation in the graphics hardware is different in a way they care about (performance mostly, sounds like).  Am I just completely misunderstanding the proposal?  If not, then I don't understand your argument here...


so doing the wrong thing and using GL_RENDERER for feature detection would actually be quite difficult.

The proposal, again if I understand it, is to use GL_RENDERER to decide whether to use particular features, even if they're supported.  In the end, that's what feature detection is for as well: deciding whether to use particular features.  Basically, use of GL_RENDERER comes down to "well, this GPU _says_ it supports this feature, but the support does not meet our quality criteria, so we need to not use it anyway".  Again, unless I'm completely misunderstanding the proposed uses?

-Boris
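A sketch of the distinction being drawn here, using a real extension (OES_texture_float) and a purely hypothetical quality denylist: feature detection answers whether the feature is there at all, while the renderer check answers whether it meets the quality criteria.

// Placeholder entries; a real list would come from measurement or bug reports.
const SLOW_FLOAT_TEXTURES: RegExp[] = [/hypothetical gpu/i];

function useFloatTextures(gl: WebGLRenderingContext, renderer: string | null): boolean {
  // Feature detection: the extension must actually be present.
  if (!gl.getExtension('OES_texture_float')) return false;
  // Quality criteria: even when present, skip it on hardware believed to be slow.
  if (renderer && SLOW_FLOAT_TEXTURES.some(re => re.test(renderer))) return false;
  return true;
}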