
Re: [Public WebGL] WEBGL_dynamic_texture extension proposal




On Jul 20, 2012, at 3:29 AM, Mark Callow <callow_mark@hicorp.co.jp> wrote:

 On 13/07/2012 19:42, Mark Callow wrote:

Before I go ahead and change the draft I want to know if people are comfortable having an extension that mirrors a non-Khronos OpenGL ES extension? As I said it
Since I heard no objections, I've just committed a highly revised draft with the TEXTURE_EXTERNAL parts mirroring GL_NV_EGL_stream_consumer_external. You can find it at
http://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_dynamic_texture/
The commands for connecting sources and acquiring and releasing image frames now follow the semantics of EGLStream. I've also added a dynamicTextureSetConsumerLatencyUsec(HTMLVideoElement) method that I think is needed to help with audio synchronization.
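For concreteness, a rough sketch of how a page might drive this each frame. Apart from dynamicTextureAcquireImage() and dynamicTextureSetConsumerLatencyUsec(), the names and signatures below are placeholders of my own, not the actual entry points in the draft; gl, tex and drawScene() are assumed to come from the page's existing WebGL setup.

    var ext = gl.getExtension("WEBGL_dynamic_texture");
    var video = document.getElementById("vid");

    gl.bindTexture(ext.TEXTURE_EXTERNAL, tex);              // external texture target from the draft
    ext.dynamicTextureSetSource(video);                      // placeholder: connect the video as producer
    ext.dynamicTextureSetConsumerLatencyUsec(video, 33000);  // assumed signature: element + latency in usec

    function drawFrame() {
        ext.dynamicTextureAcquireImage();    // latch whichever frame the stream deems current
        drawScene();                         // sample the external texture in the shader
        ext.dynamicTextureReleaseImage();    // placeholder: hand the frame back to the producer
        window.requestAnimationFrame(drawFrame);
    }
    window.requestAnimationFrame(drawFrame);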

I've been thinking about Chris's suggestion to request a frame with a particular time-stamp. Why would WebGL applications need something like that while regular Web apps manage without it? The only difference I can see is the almost certain increased latency of making the frame visible, hence the new method.
"regular Web Apps" typically don't play with video. They create a video element and then play, pause, change the play head, play at different rates, etc. The video rate control, playback and rendering are all handled by the same native driver. And in fact on OSX and iOS that native driver does use timestamps to control when frames appear.

WebGL needs to fetch a frame from the video provider and then render it. Since there is a disconnect between the two, there has to be some way to control which of possibly several available frames should be used. Your dynamicTextureSetConsumerLatencyUsec() does this somewhat. But that method makes my stomach hurt. It tries to do exactly what I'm talking about but in a very indirect and (to me) confusing way. That call is specifying the difference between when the call to acquire is made and when the frame will hit the display. What's the difference between that and asking for a frame for a given time because that's when you determined it will hit the display?
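Put differently, the two formulations select the same frame; the latency call just states the target time relative to "now" instead of letting the app pass it in. Roughly (the latency argument and the timestamp parameter are both assumptions of mine, not what the draft specifies):

    // Latency form: the implementation works out the target time itself.
    ext.dynamicTextureSetConsumerLatencyUsec(video, latencyUsec);
    ext.dynamicTextureAcquireImage();               // frame appropriate for (now + latencyUsec)

    // Timestamp form: the app computes the very same target time and asks for it.
    var displayTimeUsec = nowUsec() + latencyUsec;  // nowUsec(): placeholder clock in microseconds
    ext.dynamicTextureAcquireImage(displayTimeUsec);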

And by doing it in an indirect way like you've specified you're making it harder to do the other thing I mentioned before. In the future, if I want to work on two frames at a time, I will have to ask for frames for two separate timestamps. If the timestamp is merely another parameter to dynamicTextureAcquireImage() I can do that. If I were to try to do it with dynamicTextureSetConsumerLatencyUsec() I would have to fool the system into giving me the frame I want by asking for a frame with 33 ms more latency than the previous frame.

It just seems like adding another API call is not nearly as good as just adding a timestamp param to dynamicTextureAcquireImage(). It could even be an optional parameter.
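With a timestamp parameter the two-frames-in-flight case falls out naturally, whereas with the latency call you end up nudging the latency by one frame period to coax out the next image. Something like this (again, the timestamp parameter and the release call are only illustrative, and renderPass/fboA/fboB stand in for whatever per-frame work the app does):

    var framePeriodUsec = 33333;  // ~30 fps source, just for illustration

    // Timestamp parameter: ask for each frame directly.
    ext.dynamicTextureAcquireImage(displayTimeUsec);
    renderPass(fboA);
    ext.dynamicTextureReleaseImage();                // placeholder name
    ext.dynamicTextureAcquireImage(displayTimeUsec + framePeriodUsec);
    renderPass(fboB);
    ext.dynamicTextureReleaseImage();

    // Latency-only API: bump the latency to reach the second frame.
    ext.dynamicTextureSetConsumerLatencyUsec(video, latencyUsec + framePeriodUsec);
    ext.dynamicTextureAcquireImage();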

Regards

    -Mark

-- 

-----
~Chris Marrin
cmarrin@apple.com