
Re: [Public WebGL] frame timing and continuous scaling



Let me state the problem quite simply: how much stuff can we do in JS/GPU land *before* we fall out of a consistent, regular, composition-synced framerate?

Definitions
  • JS-land: everything that blocks production of the next frame (i.e. everything on the main thread)
  • GPU-land: everything that blocks compositing of that frame (i.e. the compositor has to wait until the GPU is done)
  • before: before a degradation becomes evident to the user (i.e. the framerate dropping)
  • consistent and regular composition synced: every time the UA wants to composite a frame, a frame has been produced by the WebGL context and is ready
Related (but mostly solved) problems
  • Measuring JS execution time: This is a good measure to see how much time you spend per frame on JS execution in the main thread (and related utility code like the DOM)
  • Measuring GPU execution time (timer queries): This is a good measure to see how much time you spend on the GPU and on which task
  • Measuring interframe times: This is a good measure to identify whether you already have a problem (a combined measurement sketch follows this list)
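Here is a minimal sketch of those three measurements in one requestAnimationFrame loop. I'm assuming a WebGL 1 context in a variable named gl, a browser that exposes EXT_disjoint_timer_query, and a placeholder drawScene() function; none of those names come from anything above, they're just for illustration.

// `gl` and `drawScene` are placeholders; EXT_disjoint_timer_query may be
// unavailable, in which case only the JS and interframe numbers are logged.
const timerExt = gl.getExtension('EXT_disjoint_timer_query');
let lastTimestamp = null;
let pendingQuery = null;

function frame(now) {
  // Interframe time: delta between consecutive rAF timestamps.
  if (lastTimestamp !== null) {
    console.log('interframe', (now - lastTimestamp).toFixed(2), 'ms');
  }
  lastTimestamp = now;

  // GPU execution time: pick up an earlier frame's timer query once it resolves.
  if (pendingQuery) {
    const available = timerExt.getQueryObjectEXT(pendingQuery, timerExt.QUERY_RESULT_AVAILABLE_EXT);
    const disjoint = gl.getParameter(timerExt.GPU_DISJOINT_EXT);
    if (available && !disjoint) {
      const gpuNs = timerExt.getQueryObjectEXT(pendingQuery, timerExt.QUERY_RESULT_EXT);
      console.log('GPU', (gpuNs / 1e6).toFixed(2), 'ms');
    }
    if (available || disjoint) {
      timerExt.deleteQueryEXT(pendingQuery);
      pendingQuery = null;
    }
  }

  // JS execution time: wall-clock time spent inside this callback.
  const jsStart = performance.now();
  let startedQuery = false;
  if (timerExt && !pendingQuery) {
    pendingQuery = timerExt.createQueryEXT();
    timerExt.beginQueryEXT(timerExt.TIME_ELAPSED_EXT, pendingQuery);
    startedQuery = true;
  }
  drawScene(now);                         // issue this frame's GL commands
  if (startedQuery) {
    timerExt.endQueryEXT(timerExt.TIME_ELAPSED_EXT);
  }
  console.log('JS', (performance.now() - jsStart).toFixed(2), 'ms');

  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);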
Problem Relevance

It might not be clear why this is an important problem for everybody to solve. Therefore I'll paste a couple of links that relate to the topic:
In a nutshell, this is about the fact that UAs (should) try to hit a native (display) framerate consistently for a page. Everything that page does that moves (video playback, GIF cycling, JS animations of HTML, WebGL, Canvas 2D, etc.) depends on stutter-free (and fast) turnover of that cycle. Native applications do have some advantages in that regard (they can claim processing/GPU time exclusively, toggle off vsync, etc., and don't do nearly as many things as a UA). If one WebGL context or the JS main thread does something to interrupt that cycle, everything on the page suffers (including that WebGL context).

Let's suppose you're trying to nail 60FPS (16.66ms/frame), and for some reason (JS or GPU execution time) the cycle takes, say, 18ms. Depending on the UA implementation, a variety of problems now ensue (framerate dropping to 30fps, jittering, stuttering, frame tearing, etc.).
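To make the cliff concrete, here's a toy model (my own simplification, not how any particular UA actually schedules): the frame's work starts on a vsync tick, and its result can only be presented on the next vsync tick after the work finishes.

// Simplified model: work starts at a vsync tick, presentation snaps to the
// next vsync tick after the work is done. Real UAs pipeline and schedule
// differently; this only illustrates the budget cliff.
const VSYNC_MS = 1000 / 60;   // ~16.67 ms budget per frame
const FRAME_COST_MS = 18;     // hypothetical JS+GPU cost, just over budget

let tick = 0;                 // vsync tick where the frame's work starts
let lastPresent = 0;
for (let i = 0; i < 5; i++) {
  const finish = tick + FRAME_COST_MS;
  const present = Math.ceil(finish / VSYNC_MS) * VSYNC_MS;
  console.log('frame ' + i + ': presented at ' + present.toFixed(1) +
              ' ms, delta ' + (present - lastPresent).toFixed(1) + ' ms');
  lastPresent = present;
  tick = present;             // next frame's work starts at that vsync tick
}
// In this model every delta comes out at ~33.3 ms: exceeding the 16.66 ms
// budget by ~1.3 ms costs a whole extra vsync interval, i.e. 30fps.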


On Wed, Jun 3, 2015 at 4:44 AM, Kenneth Russell <kbr@google.com> wrote:
I'm not sure that the number you're looking for is well defined. Consider the situation in Chrome on a multi-core machine, where GPU commands are issued from JavaScript and executed in parallel in a different process. In the best case there is perfect overlap between JS execution and GPU command execution. A naive sum of these times might yield a result over 16.67 milliseconds, but the application might be able to achieve a steady-state 60 FPS framerate because of the overlap of the current frame's JS and the previous frame's GPU commands.
I don't think this matters much, because it's not about how much time you spend where, but about how fast a complete frame for display in the tab could have been composited, starting the count before any work for that frame is done and stopping after it's *known* to be completely submitted.
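For the "known to be completely submitted" end of that measurement, one approximation I can think of (my own assumption, and it needs a WebGL2 context with fence syncs) is to drop a fence after the frame's commands and note when it signals. That tells you when the GPU finished the frame's commands, not when the compositor actually presented it, so it's a lower bound.

// Assumption: `gl` is a WebGL2RenderingContext and `drawScene` is your own
// rendering function. The fence only signals GPU completion of this frame's
// commands, which bounds when the frame could have been composited.
function measureFrame(now) {
  const frameStart = performance.now();   // before any work for this frame
  drawScene(now);                          // JS + GL command submission
  const fence = gl.fenceSync(gl.SYNC_GPU_COMMANDS_COMPLETE, 0);
  gl.flush();                              // make sure the fence is submitted

  (function poll() {
    const status = gl.clientWaitSync(fence, 0, 0);  // non-blocking check
    if (status === gl.ALREADY_SIGNALED || status === gl.CONDITION_SATISFIED) {
      console.log('frame completely done after',
                  (performance.now() - frameStart).toFixed(2), 'ms');
      gl.deleteSync(fence);
    } else {
      setTimeout(poll, 0);                 // try again shortly
    }
  })();

  requestAnimationFrame(measureFrame);
}
requestAnimationFrame(measureFrame);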
 
As an aside, I haven't witnessed requestAnimationFrame being throttled to 30 FPS by UAs. At least Chrome and Firefox on my Mac laptop will both render content with irregular frame times (resulting in the app measuring anywhere between 30 and 60 FPS) if they're GPU bound.
I can illustrate the issue when I get around to plotting some frame-time charts.
 
I don't understand the details of the algorithm you've described, but perhaps having a window near 60 FPS, where the algorithm either maintains the current scene complexity or periodically tries to increase it, would avoid the problem of it immediately dropping to 30 FPS because it doesn't know how much faster than 60 FPS the content could potentially render.
Maintaining 30fps is actually better than maintaining frame times of 16,32,32,16,32,16,32,16,32,16,32 ms (microstuttering).
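One crude way to get that steadier cadence, as a sketch rather than a recommendation: once you know 60fps is out of reach, render only on every other requestAnimationFrame callback (drawScene is again a placeholder).

// Render on every other rAF callback: a steady ~33 ms cadence instead of a
// jittery 16/32 ms mix. `drawScene` is a placeholder for your renderer.
let skip = false;
function frame(now) {
  requestAnimationFrame(frame);
  if (!skip) {
    drawScene(now);
  }
  skip = !skip;
}
requestAnimationFrame(frame);

Whether the UA simply re-presents the previous frame on the skipped vsync is up to the compositor, so treat this as an illustration of the idea, not a guaranteed fix.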