On Wed, Jan 15, 2014 at 10:02 AM, Dean Jackson <firstname.lastname@example.org> wrote:
This goes for all applications/sites - not just WebGL.
If you're using some CSS feature, some DOM API, or whatever other browser capability that is largely executed in software on the CPU, there's really only faster or slower, and for which browser. You're fully able to detect that and segment appropriately by platform, operating system and browser. Overall WebGL performance you can infer from that too: mobiles are about one or two orders of magnitude slower than desktops, and that's fine. But beyond that the water gets really murky, because inside those platform segments the determining factor isn't the OS or the browser, it's the GPU, and you don't know what it is. So even inside a segment like mobile -> Android -> Chrome Mobile you'll get an order of magnitude difference in performance. The reason is that some specific, innocuous piece of functionality, used in one particular fashion (and there are many fashions in which you can produce the same or equivalent functionality), can have drastically different performance characteristics from one GPU to the next.
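For what it's worth, the closest content can get today is the WEBGL_debug_renderer_info extension, which a browser may or may not choose to expose. A sketch of how you'd use it; the tier-guessing helper and its substring table are purely illustrative assumptions on my part, not a maintained database:

```javascript
// Where the browser exposes WEBGL_debug_renderer_info, read the unmasked
// renderer string; otherwise you're left guessing.
function getRendererString(gl) {
  var ext = gl.getExtension('WEBGL_debug_renderer_info');
  return ext ? gl.getParameter(ext.UNMASKED_RENDERER_WEBGL) : 'unknown';
}

// Illustrative-only coarse bucketing by substring; real renderer strings
// vary wildly, which is part of the problem being discussed.
function guessTier(renderer) {
  var r = renderer.toLowerCase();
  if (/adreno|mali|powervr|videocore/.test(r)) return 'mobile';
  if (/intel/.test(r)) return 'integrated';
  if (/nvidia|geforce|radeon|quadro/.test(r)) return 'discrete';
  return 'unknown';
}
```

Even with the string in hand, mapping it to actual performance for your particular usage pattern is still on you, which is exactly the murkiness described above.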
So it's different for WebGL because the API puts you a lot closer to the hardware. I'd argue that the space of solutions you can aim for inside WebGL alone, to make things go smooth or slow on some GPUs, is by itself larger than the entire remainder of the browser APIs taken together. That's why you're not hearing, say, CSS/HTML developers ask for device information, but you are hearing a lot of WebGL developers rally for it: it matters a whole lot more to WebGL than to CSS/HTML, because WebGL runs directly on that hardware.