On Jul 20, 2012, at 3:29 AM, Mark Callow <firstname.lastname@example.org> wrote:
"regular Web Apps" typically don't play with video. They create a video element and then play, pause, change the play head, play at different rates, etc. The video rate control, playback and rendering are all handled by the same native driver. And in fact on OSX and iOS that native driver does use timestamps to control when frames appear.
WebGL needs to fetch a frame from the video provider and then render it. Since there is a disconnect between the two, there has to be some way to control which of possibly several available frames should be used. Your dynamicTextureSetConsumerLatencyUsec() does this, somewhat. But that method makes my stomach hurt. It tries to do exactly what I'm talking about, but in a very indirect and (to me) confusing way. That call specifies the difference between when the acquire call is made and when the frame will hit the display. What's the difference between that and asking for a frame for a given time, because that's when you determined it will hit the display?
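A minimal sketch of the equivalence being argued here. None of these names are from any published spec; the frame source is a mock, and frameFor() stands in for whatever frame-selection logic the implementation uses. The point is that "acquire with a pre-declared consumer latency" and "acquire for an explicit display time" pick the same frame:

```javascript
// Return the latest frame whose presentation timestamp is <= t.
// (A stand-in for the implementation's internal frame selection.)
function frameFor(frames, t) {
  let best = frames[0];
  for (const f of frames) if (f.pts <= t) best = f;
  return best;
}

// Three frames at ~30 fps, timestamps in microseconds.
const frames = [{ pts: 0 }, { pts: 33333 }, { pts: 66666 }];

const nowUsec = 40000;     // when the acquire call is made
const latencyUsec = 30000; // the value handed to setConsumerLatencyUsec

// Latency style: the implementation adds the latency internally.
const viaLatency = frameFor(frames, nowUsec + latencyUsec);

// Timestamp style: the caller computes the display time itself
// and asks for a frame for that time.
const displayUsec = nowUsec + latencyUsec;
const viaTimestamp = frameFor(frames, displayUsec);
```

Both paths reduce to the same lookup; the timestamp style just makes the display time an explicit argument instead of hidden state.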
And by doing it in an indirect way like you've specified, you're making it harder to do the other thing I mentioned before. In the future, if I want to work on two frames at a time, I will have to ask for frames for two separate timestamps. If the timestamp is merely another parameter to dynamicTextureAcquireImage(), I can do that. If I were to try to do it with dynamicTextureSetConsumerLatencyUsec(), I would have to fool the system into giving me the frame I want by asking for a frame with 33 ms more latency than the previous frame.
It just seems like adding another API call is not nearly as good as adding a timestamp parameter to dynamicTextureAcquireImage(). It could even be an optional parameter.
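To make the optional-parameter idea concrete, here is a hypothetical sketch; the function names echo the proposal but the signatures and the mock source are assumptions for illustration only. With an optional display-time argument, working on two frames at once is two straightforward calls rather than a latency contortion:

```javascript
const FRAME_USEC = 33333; // ~30 fps frame duration, microseconds

// Mock frame source; pts values in microseconds.
function makeSource(frames) {
  let latencyUsec = 0;
  const frameFor = (t) => {
    let best = frames[0];
    for (const f of frames) if (f.pts <= t) best = f;
    return best;
  };
  return {
    dynamicTextureSetConsumerLatencyUsec(usec) { latencyUsec = usec; },
    // Hypothetical form: if displayUsec is given, use it directly;
    // otherwise fall back to the pre-declared consumer latency.
    dynamicTextureAcquireImage(nowUsec, displayUsec) {
      const t = displayUsec !== undefined ? displayUsec
                                          : nowUsec + latencyUsec;
      return frameFor(t);
    },
  };
}

// Ten frames: pts 0, 33333, 66666, ...
const frames = [...Array(10)].map((_, i) => ({ pts: i * FRAME_USEC, n: i }));
const src = makeSource(frames);

// Work on two consecutive frames with explicit timestamps:
const now = 0;
const display = 2 * FRAME_USEC;
const current = src.dynamicTextureAcquireImage(now, display);
const next = src.dynamicTextureAcquireImage(now, display + FRAME_USEC);
```

The same two-frame acquisition via the latency-only API would require calling dynamicTextureSetConsumerLatencyUsec() with an artificially bumped value between the two acquires, which is the workaround objected to above.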