
Re: [Public WebGL] WEBGL_dynamic_texture extension proposal




On Jul 5, 2012, at 8:28 PM, Mark Callow <callow_mark@hicorp.co.jp> wrote:

On 06/07/2012 06:02, Chris Marrin wrote:

There are a couple of inconsistencies:

1) The IDL says "dynamicTextureAcquireImage" and the description says "dynamicTextureAcquireFrame". Same for release.
Fixed. Thanks. When I decided to change the name, the search/replace in my XML editor missed these because they are element attributes.

2) dynamicTextureSetSource passes the texture in the IDL and description, but not in the example.
Also fixed.
Given the nature of OpenGL, seems like you should not pass the texture, but rather use the currently bound texture? Same would be true of all the other API calls.
Yeah. See issues #3, #4 & #9, currently unresolved.

I was thinking that an app might have dynamic textures on several texture units, and doing activeTexture/bindTexture on each in order to call acquireImage was a bit much. This led me to pass <texture> to acquireImage. Once I'd done that, I decided to make setSource direct access as well.
But it wouldn't be any more work than the texImage2D you'd be doing today without using dynamic textures. I think it would be best to avoid changing the OpenGL API model with this extension.
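To make the comparison concrete, here is a rough sketch of the two styles. The function names follow the draft loosely, the bound-texture variant is purely hypothetical, and none of the signatures here should be taken as authoritative:

    // Style in the current draft: the texture is passed explicitly, so no
    // activeTexture/bindTexture is needed per dynamic texture.
    ext.dynamicTextureSetSource(tex, video);
    ext.dynamicTextureAcquireImage(tex);
    // ... draw using tex ...
    ext.dynamicTextureReleaseImage(tex);

    // Hypothetical alternative following the usual OpenGL model: operate on
    // the texture currently bound to the active unit, as texImage2D does today.
    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D, tex);
    ext.dynamicTextureSetSource(gl.TEXTURE_2D, video);
    ext.dynamicTextureAcquireImage(gl.TEXTURE_2D);
    // ... draw using texture unit 0 ...
    ext.dynamicTextureReleaseImage(gl.TEXTURE_2D);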

I am also concerned about timestamping. ...

Your proposal doesn't deal with timestamps, so it's possible to get out of sync with the decode rate, causing strobing or other undesirable effects.
How do you handle this today when an HTMLVideoElement is passed to texImage2D?
Poorly :-)

The only thing you can do is get the image that happens to be the current frame at the moment you get it. It makes for very poor playback behavior, even ignoring all the other overhead that makes today's scheme so bad.
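For reference, this is roughly what applications do today: every animation frame simply re-uploads whatever frame the decoder happens to have current at that instant (the names here are illustrative, not from any spec):

    function render() {
      if (video.readyState >= video.HAVE_CURRENT_DATA) {
        gl.bindTexture(gl.TEXTURE_2D, tex);
        // Copies whatever frame is current right now; there is no timestamp
        // negotiation with the decoder, and the copy itself is a large
        // per-frame cost.
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
      }
      drawScene();
      requestAnimationFrame(render);
    }
    requestAnimationFrame(render);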

I'm not sure that strobing, etc. will really be a problem. Basically the images are triple buffered, as Gregg described upthread, and you will always get the latest available frame. The worst that will happen is a few dropped frames. If the app is drawing so fast that it gets the same video frame more than once in succession, I don't think that is a problem.
Whether or not images are "triple buffered" is an implementation detail. Video decoders tend to be very non-linear in their behavior. It might take a lot longer to decode an I-frame than a B or P frame. Some decoders buffer several frames, so decoding an expensive frame can happen while the client is consuming several inexpensive frames. The ImageQueues used in OSX and iOS allow the decoder and consumer to be very independent. They allow the decoder to get ahead of the renderer to provide glitch-free video playback. But they also allow the system to throw away decoded frames that have become stale without being consumed, because the renderer is not able to keep up with the frame rate for some reason. This would result in a reduced frame rate, but the video would still play at the correct rate.
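To illustrate the queue behavior, here is a toy model of the consumer side, not any real OS or WebGL API: the decoder enqueues frames with timestamps, the consumer always takes the newest frame due at or before "now", and anything older is discarded as stale:

    // Toy model only; not a real API.
    class ImageQueue {
      constructor() { this.frames = []; }             // kept in timestamp order
      enqueue(image, timestamp) { this.frames.push({ image, timestamp }); }
      // Return the newest frame due at or before `now`, dropping older ones.
      acquireLatest(now) {
        let latest = null;
        while (this.frames.length && this.frames[0].timestamp <= now) {
          latest = this.frames.shift();               // stale frames fall away
        }
        return latest;                                // null if nothing is due yet
      }
    }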

I think our goal should be to allow full frame rate playback of perfectly synced and paced video in a 3D scene. But to do that we need to do better than just letting the video decoder give us whatever frame happens to be ready. The renderer really needs full awareness of media timing, which needs to be communicated in both directions between the media provider and the renderer.
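Purely as a speculative sketch of that two-way communication, and not something in the current draft: acquiring a frame could also report the frame's presentation time and when the next frame is expected, so the renderer can pace itself against the media clock:

    // Hypothetical only; not part of the proposal as written.
    const frameInfo = ext.dynamicTextureAcquireImage(tex);
    // frameInfo.presentationTime: when this frame is meant to appear
    // frameInfo.nextFrameTime:    when the following frame is expected
    // The renderer could use these to schedule its draws against the media
    // clock instead of taking whatever frame happens to be ready.
    // ... draw ...
    ext.dynamicTextureReleaseImage(tex);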

-----
~Chris Marrin
cmarrin@apple.com