I've been thinking about Chris's suggestion to request a frame with
a particular timestamp. Why would WebGL applications need something
like that while regular Web apps manage without it? The only
difference I can see is the almost certain increased latency of
making the frame visible, hence the new method.
I think this is the reason: if you already know, from previous measurement, that the processing you run after acquiring the frame will take, say, 10 milliseconds before you emit the draw call using that texture, you might adjust the requested frame time 10 milliseconds into the future.
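To make that concrete, here is a minimal sketch of the bookkeeping side: smoothing the measured processing delay and adding it to the requested frame time. This is plain illustrative JavaScript; the idea of a "requested frame time" is from the discussion above, but the function names and the exponential-moving-average smoothing are my assumptions, not any real API.

```javascript
// Hypothetical helper: keep a smoothed estimate of how long our
// post-acquire processing takes, so we can ask for a frame whose
// timestamp matches when it will actually be displayed.
function makeLatencyEstimator(alpha = 0.2) {
  let estimateMs = 0;
  return {
    // Record how long processing took this frame (measured, e.g.,
    // with performance.now() around the processing step).
    record(durationMs) {
      estimateMs = estimateMs === 0
        ? durationMs                                // first sample
        : alpha * durationMs + (1 - alpha) * estimateMs;
    },
    // Current smoothed estimate of the processing latency.
    get() { return estimateMs; }
  };
}

const latency = makeLatencyEstimator();
latency.record(10); // two illustrative measurements, in ms
latency.record(12);

// Ask for the frame timestamp shifted into the future by the
// estimated processing time (1000 stands in for "now" in ms).
const targetTime = 1000 + latency.get();
```

The smoothing constant and the choice of a moving average are arbitrary here; the point is only that the offset comes from measurement rather than a fixed guess.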
There's an issue with that, though, in my opinion. WebGL emits rendering calls to an off-process command queue, and unless you call finish() or issue a command that forces queue synchronization, JS will race ahead emitting calls while the queue drains at some point between now and the next canvas redraw, as dictated by browser compositing. In other words, a long-winded way of saying: you might not know exactly when the draw call that puts a texture on screen will actually be executed.
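One way to observe when the GPU has actually finished the work, rather than guessing from when JS issued the call, is a WebGL2 fence: insert gl.fenceSync after the draw and poll it. The sketch below assumes a WebGL2 context `gl` is available; the helper function and its callback are my own framing, not an established pattern from this thread.

```javascript
// Sketch, assuming a WebGL2 context `gl`: insert a fence after the
// draw call and poll it without blocking, so we learn when the GPU
// actually executed the preceding commands.
function waitForFence(gl, onSignaled) {
  const sync = gl.fenceSync(gl.SYNC_GPU_COMMANDS_COMPLETE, 0);
  gl.flush(); // push the fence (and the draw before it) to the queue

  const poll = () => {
    const status = gl.getSyncParameter(sync, gl.SYNC_STATUS);
    if (status === gl.SIGNALED) {
      gl.deleteSync(sync);
      // By this point the draw has really executed on the GPU side.
      onSignaled(performance.now());
    } else {
      setTimeout(poll, 0); // check again later; never block JS
    }
  };
  poll();
}
```

Used right after the draw call, e.g. `waitForFence(gl, t => latency.record(t - drawIssuedAt))`, this gives a measured completion time instead of a guess, at the cost of learning it one or more ticks after the fact.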