On Jul 5, 2012, at 8:28 PM, Mark Callow <email@example.com> wrote:
> But it wouldn't be any more work than the texImage2D you'd be doing today without using dynamic textures. I think it would be best to avoid changing the OpenGL API model with this extension.
With that model, the only thing you can do is grab whichever image happens to be the current frame at the moment you ask for it. That makes for very poor playback behavior, even ignoring all the other overhead that makes today's scheme so bad.
Whether or not images are "triple buffered" is an implementation detail. Video decoders tend to be very non-linear in their behavior. It might take a lot longer to decode an I-frame than a B or P frame. Some decoders buffer several frames, so decoding an expensive frame can happen while the client is consuming several inexpensive frames. The ImageQueues used in OS X and iOS allow the decoder and consumer to be very independent. They let the decoder get ahead of the renderer to provide glitch-free video playback. But they also allow the system to throw away decoded frames that have become stale without being consumed, because the renderer is not able to keep up with the frame rate for some reason. This results in a reduced frame rate, but the video still plays at the correct rate.
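To make the stale-frame behavior concrete, here is a minimal sketch of a timestamped frame queue. This is illustrative only, not the actual ImageQueue API: the class and method names are invented, but the logic matches the behavior described above (decoder runs ahead, consumer takes the frame due "now", earlier undisplayed frames are silently dropped).

```javascript
// Illustrative sketch (not the real OS X/iOS ImageQueue API) of a
// timestamped frame queue that lets a decoder run ahead of the
// renderer and lets stale frames be discarded without being shown.
class FrameQueue {
  constructor() {
    this.frames = []; // { pts, data } in presentation-time order
  }

  // Decoder side: enqueue a decoded frame with its presentation timestamp.
  enqueue(pts, data) {
    this.frames.push({ pts, data });
  }

  // Consumer side: return the latest frame whose timestamp is <= now,
  // discarding any earlier (now stale) frames. Returns null if no
  // frame is due yet.
  acquire(now) {
    let current = null;
    while (this.frames.length > 0 && this.frames[0].pts <= now) {
      current = this.frames.shift(); // prior frames become stale and are dropped
    }
    return current;
  }
}
```

If the decoder has queued frames for 0, 33, 66, and 100 ms and a stalled renderer only gets around to acquiring at 70 ms, it receives the 66 ms frame; the 0 and 33 ms frames are dropped, so the displayed frame rate falls but playback stays at the correct media time.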
I think our goal should be to allow full frame rate playback of perfectly synced and paced video in a 3D scene. But to do that we need to do better than just letting the video decoder give us whatever frame happens to be ready. The renderer really needs full awareness of media timing, and that timing information needs to flow in both directions between the media provider and the renderer.
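One hypothetical shape for that two-way exchange: the renderer tells the provider when the next composited frame will actually hit the screen (predicted from vsync plus pipeline latency), and the provider answers with the frame due at that moment and its media timestamp, so the renderer can verify pacing and detect drift. All names here are invented for illustration; nothing below is an existing API.

```javascript
// Hypothetical sketch of two-way media timing between a renderer and
// a media provider. Names are illustrative, not a real API.
class MediaProvider {
  constructor(frameDurationMs, mediaStartMs) {
    this.frameDurationMs = frameDurationMs; // e.g. 40 ms for 25 fps video
    this.mediaStartMs = mediaStartMs;       // wall-clock time when media time 0 displays
  }

  // Renderer -> provider: the predicted on-screen time of the next frame.
  // Provider -> renderer: which frame to show, and when it is nominally
  // due, so the renderer can measure how far off its pacing is.
  frameForDisplayTime(displayTimeMs) {
    const mediaTimeMs = displayTimeMs - this.mediaStartMs;
    const index = Math.floor(mediaTimeMs / this.frameDurationMs);
    return {
      index,
      dueAtMs: this.mediaStartMs + index * this.frameDurationMs,
    };
  }
}
```

For 25 fps video whose media time 0 displays at wall-clock 1000 ms, a predicted display time of 1125 ms selects frame 3 (due at 1120 ms); the 5 ms difference is the drift the renderer can feed back into its pacing.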