
Re: [Public WebGL] frame timing and continuous scaling

I'm not sure that the number you're looking for is well defined. Consider the situation in Chrome on a multi-core machine, where GPU commands are issued from JavaScript and executed in parallel in a different process. In the best case there is perfect overlap between JS execution and GPU command execution. A naive sum of these times might yield a result over 16.67 milliseconds, yet the application might still achieve a steady-state 60 FPS framerate because the current frame's JS overlaps the previous frame's GPU commands.
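To make the pipelining point concrete, here is a toy calculation with made-up per-stage costs (the numbers are illustrative, not measurements):

```javascript
// Hypothetical per-frame costs (invented numbers for illustration):
const jsMs = 10;   // main-thread JavaScript work
const gpuMs = 12;  // GPU command execution in the GPU process

// A naive sum suggests the 16.67 ms budget is blown:
console.log(jsMs + gpuMs);          // 22 ms > 16.67 ms

// But with perfect overlap of frame N's JS and frame N-1's GPU work,
// steady-state frame time is the cost of the slowest stage:
console.log(Math.max(jsMs, gpuMs)); // 12 ms < 16.67 ms, so 60 FPS holds
```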

I'd recommend you ask this question on graphics-dev@chromium.org for Chromium specifically, since the engineers who know the compositor's scheduling algorithms are all on that list. If these numbers are already known to the browser, then a good way to proceed would be to prototype exposing them to JavaScript and see whether they'd solve the problem in practice.

As an aside, I haven't witnessed requestAnimationFrame being throttled to 30 FPS by UAs. At least Chrome and Firefox on my Mac laptop will both render content with irregular frame times (resulting in the app measuring anywhere between 30 and 60 FPS) if they're GPU bound. I don't know the details of the algorithm you've described, but perhaps a window near 60 FPS in which the algorithm either maintains the current scene complexity or periodically tries to increase it would avoid the problem of immediately dropping to 30 FPS because it doesn't know how much faster than 60 FPS the content could potentially render.
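The window idea could be sketched roughly like this (a hedged illustration with invented names and thresholds, not anyone's actual algorithm):

```javascript
// Hold complexity inside a dead band around the target frame time,
// and probe upward only occasionally. All constants are invented.
const TARGET_MS = 1000 / 60;
const WINDOW_MS = 2;      // dead band around the target
const PROBE_EVERY = 120;  // frames between upward probes

function step(complexity, avgFrameMs, frameCount) {
  if (avgFrameMs > TARGET_MS + WINDOW_MS) {
    return complexity * 0.9;   // clearly over budget: back off
  }
  if (frameCount % PROBE_EVERY === 0) {
    return complexity * 1.05;  // periodically test for headroom
  }
  return complexity;           // inside the window: hold steady
}
```

The periodic upward probe is what substitutes for knowing how far above 60 FPS the content could run: instead of measuring headroom, the controller occasionally spends a few frames discovering it.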


On Mon, Jun 1, 2015 at 3:58 AM, Florian Bösch <pyalot@gmail.com> wrote:
I have a use case where I need to measure how long a frame took (JS + GPU + pre/post-frame overhead in total) for dynamic scaling.

What I currently do for that is measure a moving average of interframe times and adjust a scaling factor (Gauss-Seidel relaxed) to hit 30 FPS.
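That scheme might look roughly like the following sketch (the names, constants, and the cost-proportional-to-pixel-count assumption are mine, not from the original mail):

```javascript
const TARGET_MS = 1000 / 30;  // 30 FPS target
const RELAX = 0.1;            // under-relaxation factor (Gauss-Seidel style)
let avgMs = TARGET_MS;        // moving average of interframe times
let scale = 1.0;              // e.g. render-target resolution factor

function update(frameMs) {
  avgMs = 0.9 * avgMs + 0.1 * frameMs;  // exponential moving average
  // Assuming frame cost roughly proportional to pixel count (scale^2),
  // the scale that would hit the target is scale * sqrt(target / avg);
  // relax toward it instead of jumping straight there:
  const ideal = scale * Math.sqrt(TARGET_MS / avgMs);
  scale += RELAX * (ideal - scale);
  return scale;
}
```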

This method cannot hit a framerate of 60 (or whatever the UA limits to) because it exhibits hysteresis: it doesn't know how far above 60 FPS it is (and the UA immediately steps down to 30 FPS if it can't hit 60).

Here are the things that won't work:
  1. Use performance.now:
    1. intraframe: does not work because it captures neither the end of GPU processing nor the pre/post-frame overhead
    2. interframe: does not work because of UA FPS limiting
  2. Use timer queries:
    1. intraframe: does not work because it doesn't capture the pre/post-frame overhead
    2. interframe: does not work because of UA FPS limiting
  3. Use performance.mark: that's just performance.now with labels
  4. Toggle off FPS limiting: does not work in a production setting (development-only measure)
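For reference, approach 1.2 (interframe performance.now) can be reduced to a pure function, which makes its failure mode visible: once the UA clamps frame delivery to the vsync interval, the measured average carries no information about headroom (names here are illustrative):

```javascript
// Exponential moving average of rAF-to-rAF deltas.
function makeFrameTimer(alpha = 0.1) {
  let last = null, avg = null;
  return function onFrame(now) {  // call once per requestAnimationFrame tick
    if (last !== null) {
      const dt = now - last;
      avg = avg === null ? dt : avg + alpha * (dt - avg);
    }
    last = now;
    return avg;
  };
}

// Simulate a vsync-limited app: the actual work might take 5 ms or
// 15 ms, but the UA delivers frames every 16.67 ms regardless.
const timer = makeFrameTimer();
let t = 0, avg = null;
for (let i = 0; i < 200; i++) { avg = timer(t); t += 16.67; }
// avg converges to ~16.67 either way, so a scaler driven by it
// cannot tell how much faster the content could render.
```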
Would it be possible to add an API to window.performance?
  • performance.frameTime (this would indicate the true measured frame time)
  • performance.frameTimeLimit (this would indicate what target the UA is limiting to)
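If such fields existed, a scaler could use them without hysteresis, since their ratio directly states the headroom. A hypothetical sketch with the proposed (non-existent) fields stubbed out so it runs:

```javascript
// Stub for the PROPOSED fields; they do not exist in any browser.
const perf = {
  frameTime: 20,            // proposed: true measured frame time (ms)
  frameTimeLimit: 1000 / 60 // proposed: the UA's limiting target (ms)
};

// With both numbers available, the controller knows exactly how far
// over or under budget it is, whether or not the UA is limiting FPS.
function nextScale(scale) {
  const ratio = perf.frameTimeLimit / perf.frameTime; // < 1 means over budget
  return scale * Math.sqrt(ratio); // assuming cost ∝ scale^2 (pixel count)
}
```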