
Re: [Public WebGL] GL_RENDERER string needed for performant apps

On Wed, Jan 15, 2014 at 11:24 AM, Steven Wittens <steven.wittens@gmail.com> wrote:
> Worse, since you’re capped at 60fps and v-synced, the only way to get a remotely decent reading is to stress test. On a GPU accelerated UI, that means blocking not just the browser tab, but slowing the entire desktop down to a crawl. The experience is terrible, and you don’t even get a reliable reading if other things are going on.

Before this gets lost, measuring performance outside the 60fps envelope is also an issue.

In theory you could do this (somewhat) with gl.finish(). But some browser vendors have seen fit to disable gl.finish(), which is silly, because you can substitute it with a 1x1px framebuffer readback or any other blocking call. And even if gl.finish() were available, using it is a bad idea anyway, because it stalls everything (maybe that's why they disabled it).
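To illustrate, here's a minimal sketch of the readback trick: reading even a single pixel forces the browser to drain the GL command queue, so it behaves like a blocking gl.finish(). The helper name `blockingFinish` is mine, not anything standard:

```javascript
// Hypothetical helper: a blocking substitute for gl.finish().
// readPixels cannot return until all previously issued GL commands
// have completed, so a 1x1 readback drains the pipeline just the same.
function blockingFinish(gl) {
  const pixel = new Uint8Array(4); // one RGBA pixel
  // Reads from the currently bound framebuffer; this call blocks
  // until the GPU has caught up with the CPU.
  gl.readPixels(0, 0, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixel);
  return pixel;
}
```

Of course this has exactly the same stalling problem as gl.finish() itself, which is the point: you can block if you really want to, so disabling gl.finish() doesn't actually prevent it.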

It'd be useful to have an idea how much performance the WebGL app is munching off the GPU, without having to block the GPU and without having to disable vsync in the browser flags (you obviously can't do that for all your visitors). Even if all your visitors get 60fps (I wish), it'd still be nice to know whether a change you made reduced their GPU load from 80% to 20%. It's my understanding that GL timer queries could be used for this, though they appear to be buggy on some platforms.
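For reference, a sketch of how those timer queries look with the EXT_disjoint_timer_query extension, assuming it's exposed and working on the platform. You time a frame's GL work, poll for the result a few frames later (the result is asynchronous, so nothing blocks), and convert it to a load percentage against the 16.7 ms frame budget. The helper names are mine:

```javascript
// Assumed setup: ext = gl.getExtension('EXT_disjoint_timer_query');

// Start timing this frame's GL commands on the GPU.
function beginFrameTimer(ext) {
  const query = ext.createQueryEXT();
  ext.beginQueryEXT(ext.TIME_ELAPSED_EXT, query);
  return query;
}

// Stop timing (call after issuing the frame's draw calls).
function endFrameTimer(ext) {
  ext.endQueryEXT(ext.TIME_ELAPSED_EXT);
}

// Pure helper: nanoseconds of GPU time -> percent of a frame budget in ms.
function gpuLoadPercent(gpuTimeNs, frameBudgetMs) {
  return (gpuTimeNs / 1e6) / frameBudgetMs * 100;
}

// A few frames later, poll the result without blocking the GPU.
function pollFrameTimer(gl, ext, query) {
  const available = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_AVAILABLE_EXT);
  const disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT);
  if (available && !disjoint) {
    const ns = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_EXT);
    return gpuLoadPercent(ns, 1000 / 60); // 13.3 ms of GPU work ~= 80% load
  }
  return null; // not ready yet, or invalidated by a GPU disjoint event
}
```

The GPU_DISJOINT_EXT check matters: if the GPU clock changed or work was interrupted, the timing is garbage and should be thrown away, which may be part of why results look bugged on some platforms.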