
Re: [Public WebGL] WEBGL_dynamic_texture extension proposal



Below is my reply to Acorn.

Regards

-Mark

--------------------------------------------------------------------------------------------------------------

Hi Acorn,

Thanks for the feedback. You sent it to 3dweb. Is there a reason? Can I forward it and this reply to public-webgl?

Comments interleaved below.

On 13/07/2012 05:38, Acorn Pooley wrote:
> Feedback on the WEBGL_dynamic_texture extension:
>
> ...
>
> An example is the use of a texture that is acquired. With EGLImage semantics
> (and current WEBGL_dynamic_texture semantics) the texture can be modified
> asynchronously at any time while it is being used (this results in
> implementation dependent behavior which will vary from one implementation to
> another). With EGLStream semantics the acquired image can never be modified
> while it is acquired.
The WEBGL_dynamic_texture spec. explicitly says that the "acquired" image will not change until release is called, unless I'm badly misunderstanding my own language. In other words, I believe I'm already applying EGLStream semantics.
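
To make sure we are talking about the same pattern, here is a minimal sketch of how I picture an application using the extension. The names dynamicTextureSetSource and dynamicTextureAcquireImage are placeholders I've modelled on dynamicTextureReleaseImage, and I'm assuming the TEXTURE_EXTERNAL enum lives on the extension object; none of this is final spec text.

    // Hedged sketch only; entry points marked "assumed" are placeholders, not spec.
    var canvas = document.getElementById("c");
    var gl     = canvas.getContext("webgl");
    var ext    = gl.getExtension("WEBGL_dynamic_texture");
    var video  = document.getElementById("vid");              // HTMLVideoElement acting as the image source

    var tex = gl.createTexture();
    gl.bindTexture(ext.TEXTURE_EXTERNAL, tex);                // assumed: enum exposed by the extension
    ext.dynamicTextureSetSource(ext.TEXTURE_EXTERNAL, video); // assumed name: connect the source

    function drawFrame() {
      // Latch the current image; with EGLStream-style semantics it cannot
      // change until the matching release call below.
      ext.dynamicTextureAcquireImage(ext.TEXTURE_EXTERNAL);   // assumed name
      gl.drawArrays(gl.TRIANGLES, 0, 6);                      // draw something sampling the latched image
      ext.dynamicTextureReleaseImage(ext.TEXTURE_EXTERNAL);   // the call discussed below
      requestAnimationFrame(drawFrame);
    }
    requestAnimationFrame(drawFrame);
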
> Another example is the use of a dynamic texture after
> dynamicTextureReleaseImage has been called. EGLStream semantics require that a
> TEXTURE_EXTERNAL texture that has no image acquired be treated as incomplete --
> sampling it always returns black (0,0,0,1) pixels. The semantics of EGLImage
> which are in the current extension wording are basically "undefined results"
> which will make life difficult for developers and make content appear
> differently on different implementations.
Gregg raised this issue. I'd be happy to change the spec. so that black pixels are returned after release, but I am not sure how that can be implemented on a GLES implementation that supports only EGLImage. Is there a way? I suppose the WebGL implementation could just unbind the texture from TEXTURE_EXTERNAL.
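
For reference, the consumer side in GLSL ES under OES_EGL_image_external looks roughly like the string below (how, or whether, WebGL exposes the #extension directive is of course something this extension has to pin down). With the incomplete-texture rule you describe, sampling uVideo after release would simply return opaque black:

    // Fragment shader for sampling an external texture, per OES_EGL_image_external.
    // If the external texture has no image bound/acquired it is "incomplete" and,
    // under the semantics discussed here, samples as (0,0,0,1).
    var fragmentSrc = [
      "#extension GL_OES_EGL_image_external : require",
      "precision mediump float;",
      "uniform samplerExternalOES uVideo;",
      "varying vec2 vTexCoord;",
      "void main() {",
      "  gl_FragColor = texture2D(uVideo, vTexCoord);",
      "}"
    ].join("\n");
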
> Another area that much thought has gone into is how TEXTURE_EXTERNAL should
> work when the source is a video which is playing at a different rate than the
> WebGL rendering loop. EGLStream addresses this by stating that "Acquire" gets
> the image which should be displayed next. ... I think it is a good idea to
> require it to work this way. Otherwise there is too much flexibility and
> different WebGL implementations may act differently.
I'll be happy to specify this semantic, which is what I suspect browsers use when playing <video> elements. But is this enough to really sync to the audio? How are the different delays for rendering audio and video managed? The delay when rendering via WebGL is likely to be different from when the browser renders the video directly.

Chris Marrin suggested passing a timestamp of the desired frame to acquireFrame, but it is not clear to me that that is sufficient to solve the problem. Please see his messages in the thread on public-webgl.
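
Purely to frame the question, this is how I read the timestamp suggestion fitting into a render loop (continuing with the gl/ext objects from the earlier sketch). acquireFrame, its argument and the latency estimate are all hypothetical, not anything in the current draft:

    // Hypothetical sketch of the timestamp-based variant; names and units are assumed.
    var estimatedRenderLatencyMs = 32;                            // app-side guess at the extra WebGL delay
    function drawTimestamped(nowMs) {
      var presentationTimeMs = nowMs + estimatedRenderLatencyMs;  // when this frame should reach the screen
      ext.acquireFrame(ext.TEXTURE_EXTERNAL, presentationTimeMs); // hypothetical signature
      gl.drawArrays(gl.TRIANGLES, 0, 6);
      ext.dynamicTextureReleaseImage(ext.TEXTURE_EXTERNAL);
      requestAnimationFrame(drawTimestamped);
    }
    requestAnimationFrame(drawTimestamped);
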

How are the delays managed with EGLStream?

> (There is also the GL_KHR_stream_fifo extension which allows the app to opt in
> to different semantics (never drop frames) but that is probably less useful --
> you can decide if you need that option or not.)
I wasn't aware of this extension. Thanks.
> SPECIFIC FEEDBACK:
My reaction to most of this is covered above.
> ...
>
> - In section 5.13.8.1 Dynamic textures: It says
>
>       If a texture object to which a dynamic source is bound is bound to a
>       texture target other than TEXTURE_EXTERNAL the dynamic source will be
>       ignored. Data will be sampled from the texture object's regular data
>       store.
>
>   I think it is a mistake to allow the same texture to be bound at one time
>   to a TEXTURE_EXTERNAL target and at another time to a different target. In
>   GLES2 this is an error (INVALID_OPERATION). Once a texture is bound to a
>   TEXTURE_EXTERNAL target it may never be bound to any other target. Once a
>   texture is bound to a target other than TEXTURE_EXTERNAL it may never
>   be bound to the TEXTURE_EXTERNAL target. (Maybe it is not possible to have
>   the same semantics in WebGL, but if it is possible I think this is
>   preferable.)
So this really creates two different kinds of texture, depending on which target the name is first bound to when the object is created. Why do you think it is a mistake to allow different bindings at different times?

If we go with this semantic, I wonder if we should introduce a new texture type in WebGL to make it explicit. The name of the current WebGL method is createTexture(), although it is specified to only generate the texture name.
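
To check I understand the restriction you are proposing, it would be something like this (again assuming TEXTURE_EXTERNAL is exposed on the extension object):

    // Sketch of the GLES2-style restriction; the enum location is an assumption.
    var texA = gl.createTexture();
    gl.bindTexture(ext.TEXTURE_EXTERNAL, texA);  // first bind: texA is now an external texture
    gl.bindTexture(gl.TEXTURE_2D, texA);         // would generate INVALID_OPERATION

    var texB = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, texB);         // first bind: texB is now an ordinary 2D texture
    gl.bindTexture(ext.TEXTURE_EXTERNAL, texB);  // would also generate INVALID_OPERATION
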
> - Note that the GL_OES_EGL_image_external extension describes how the
>   GL_TEXTURE_EXTERNAL_KHR texture target works with EGLImages. It is the
>   GL_NV_EGL_stream_consumer_external extension which describes how
>   GL_TEXTURE_EXTERNAL_KHR works with EGLStreams. It might make more sense to
>   mention GL_NV_EGL_stream_consumer_external instead of (or in addition to) the
>   GL_OES_EGL_image_external extension. The two extensions are very similar so
>   this may not really matter.
>
>   Actually it is the EGL_KHR_stream_consumer_gltexture which really describes
>   how EGLStreams work with TEXTURE_EXTERNAL. It might be worth referring to
>   this extension and/or borrowing some of the language about Acquire/Release
>   functions.
I'll have a look at these extensions.

Thanks for the great feedback.

Regards

-Mark

--