
Re: [Public WebGL] WEBGL_dynamic_texture extension proposal



On Mon, Jul 23, 2012 at 12:52 AM, Mark Callow <callow_mark@hicorp.co.jp> wrote:
>
> On 19/07/2012 04:46, David Sheets wrote:
>>
>> On Thu, Jul 12, 2012 at 12:54 AM, Mark Callow <callow_mark@hicorp.co.jp>
>> wrote:
>>>
>>> Ahh! I understand now. Would making this interface public be of use to
>>> anything bar WebGL apps?
>>
>> Is
>> <http://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_dynamic_texture/>
>> of use to anything except WebGL apps?
>
> What I meant was: would it be of use to anything but WEBGL_dynamic_texture?

The stream producer interface would be useful to any component
(standard or custom) that wishes to supply a dynamic,
time-synchronized texture stream to a stream consumer such as a GPU
shader pipeline or a pool of workers (see the shared resource thread
for the cross-context zero-copy analog to cross-media zero-copy).
Presently, the temporal dimension is locked to the audio track of the
AV decoder, ignoring the frame repaint of other dynamic elements. As
the producer interface has not yet been defined, standard, composable
video-frame pipeline stages are not yet possible.
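
For concreteness, a minimal sketch of the shape such an interface
might take (every name below is invented for illustration; the
proposal defines no producer interface yet):

    // Hypothetical producer: hands timestamped frames to any consumer.
    var producer = {
      // A consumer requests the frame that should be on-glass at
      // media time `ust` (microseconds); an opaque handle is returned.
      acquireFrame: function (ust) { /* decode or compose a frame */ },
      // The consumer returns the handle when sampling is done so the
      // underlying buffer can be recycled without a copy.
      releaseFrame: function (frame) { /* recycle the buffer */ }
    };

A WebGL context, a worker pool, or a custom compositor could then
consume the same stream through the same two calls.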

>> The browser has a handful of metrics useful to the page for estimating
>> time-to-render latency. The page ultimately knows best, though,
>> especially with more elaborate texture pipelines and shader program
>> changes. Is passing a time offset in milliseconds to Acquire
>> insufficient?
>>
>> Exposing machine-dependent time-to-render latency heuristics seems
>> like a separate interface (host profile) from dynamic texture binding.
>
> Do the browsers currently expose these metrics to JS apps? If so, where is
> the API documented?

To my knowledge, these metrics are presently unavailable to scripts
except through per-page-load benchmarking. The rendering subsystem's
latency vector should be exposed through a separate WebGL extension or
a core API publishing optional host profile metrics.
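
For example, today a page can only estimate such figures along these
lines (a rough sketch: gl is a WebGLRenderingContext, drawScene issues
the pipeline under test, and gl.finish may behave like flush in some
browsers, so the result is a heuristic at best):

    function estimateLatencyMs(gl, drawScene) {
      var t0 = Date.now();
      drawScene();    // issue the texture pipeline under test
      gl.finish();    // wait, best effort, for the GPU to complete
      return Date.now() - t0;
    }

A host profile API would publish such figures directly instead of
forcing every page to re-measure them on load.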

>> "webglDynamicTextureOnAcquire(WebGLRenderingContext, WebGLTexture)" is
>> probably not going to cause name collisions or other heartache. Why
>> not ask for forgiveness rather than permission?
>
> I've done something like this in the latest draft: I've added
> dynamicTexture{Set,Get}ConsumerLatencyUsec methods that should ideally be
> on the HTMLVideoElement or HTMLMediaElement. Currently they take an
> HTMLVideoElement as a parameter.

I am stacking dynamic textures four deep, sourced from a single video
element: V => A => B => C, where B and C may depend on any subset of
the previous producers in the chain.

In this use case, HTMLVideoElement V has <=3 different consumer
latencies, WebGLRenderingContext A has <=2 consumer latencies, etc.

Should 3 separate HTMLVideoElements be created with the same video
source? How do I keep them in sync?
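
Concretely, assuming the quoted signature of an HTMLVideoElement plus
a latency (the numbers below are made up):

    var ext = gl.getExtension('WEBGL_dynamic_texture');
    ext.dynamicTextureSetConsumerLatencyUsec(video, 12000); // V => A
    // A is itself a producer for B and C, but A is a
    // WebGLRenderingContext, not an HTMLVideoElement, so A's two
    // consumer latencies cannot be expressed at all:
    // ext.dynamicTextureSetConsumerLatencyUsec(ctxA, 30000); // no such overload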

>> If we have different sampler types (RGB and YUV), we have different
>> sampler types. The present 'samplerExternalOES' type conflates two
>> separate aspects of external textures: lack of mipmap/LOD support and
>> colorspace conversion.
>>
>> Perhaps "samplerExternalYUV" should be introduced if you want to
>> expose YUV colorspace to shaders? A function 'convertColorspaceRGB'
>> could be provided to produce 'samplerExternalRGB' from
>> 'samplerExternalYUV' or 'samplerExternalRGB' (or 'sampler2D' from
>> 'sampler2D' (identity)).
>>
>> Consider: what if I have two videos with two different colorspaces
>> that I alternately bind to the same sampler? What if an author wishes
>> to operate on the raw YUV data (or YIQ, HSL, HSV, xvYCC, YPbPr...)? If
>> HTMLVideoElement decodes into a number of different colorspaces and
>> the conversion functions are pushed into user shaders, the conversion
>> permutation issue is still present if the sampler types are not
>> disambiguated and different HTMLVideoElement source media are bound.
>
> This is deliberate.
>
> We don't want to expose YUV colorspace to shaders because the fastest way to
> get video data to textures is hardware dependent. Web applications would not
> be able to specify what they get. On some platforms it might not even be
> possible to access the YUV data. Applications would become responsible for
> querying the colorspace of the external texture and providing the correct
> shader, which is not a trivial exercise. It is better for implementers to
> do it once than for every author to have to do it.

I agree, with one caveat: implementors must not limit the
functionality of this interface because of their particular
implementation of colorspace conversion (e.g. dynamically modifying
page shaders to normalize colorspace just-in-time, then restricting
the interface to avoid shader-permutation blow-up or rendering pauses
from shader recompiles).

My shader uses 16 different external dynamic textures (to create an
animated video-effect wall), which are sourced from user-supplied
resources encoded in a variety of formats and change dynamically as
the user selects different page elements.

How does each implementor produce the appropriate shader permutation
at the appropriate time? To my mind, the texture data should always be
colorspace-normalized RGBA entering the author's shader. If this is
the case, there is no limitation on incoming encoding and the concerns
over unification of single-LOD TEXTURE_2D and TEXTURE_EXTERNAL are
unfounded. Any source of RGBA (e.g. DOM fragments for privileged
pages) should be able to become a dynamic texture producer.
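
Under that model the author's shader never sees the source format: a
samplerExternalOES behaves like a single-LOD sampler2D that always
yields normalized RGBA. A sketch (assuming the GLSL directive mirrors
the underlying OES_EGL_image_external extension):

    var cellFS =
      '#extension GL_OES_EGL_image_external : require\n' +
      'precision mediump float;\n' +
      'uniform samplerExternalOES uSource;\n' +
      'varying vec2 vTexCoord;\n' +
      'void main() {\n' +
      '  // RGBA no matter what the producer decoded: YUV, xvYCC, ...\n' +
      '  gl_FragColor = texture2D(uSource, vTexCoord);\n' +
      '}\n';

The same fragment shader then serves all 16 wall cells regardless of
which media the user binds to each.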

If the implementors dynamically modify my shaders with branches for
colorspace conversions or recompile my shaders with each permutation
of incoming colorspace, I would like to be able to turn this off and
perform the conversion calls myself, having knowledge of which
permutations will be needed and when.

> Also the raw YUV data is not currently exposed to JS applications and I
> don't think WebGL should expose it.
>
> Lastly, if YUV were exposed in the shaders, people would push to use YUV
> as the format and internal format for textures, which is not possible with
> current hardware.

YUV doesn't fit in a vec3? If the underlying colorspace of the data is
not exposed, it should also not affect the formats, types, encodings,
or availability of generic dynamic texture producers. This may be a
burden for implementors, as you and Florian discussed earlier in this
thread.

I would be perfectly happy with either pole of texture data
availability: full colorspace exposure with a proliferation of sampler
types and an overloaded RGBA conversion function (switchable with
sampler type renaming macros) OR absolutely homogeneous colorspace
from every source without restriction.

Sampling does not need to be available for non-RGBA colorspaces
(samplerExternalYUV would be abstract and only consumed by
conversion). Colorspace conversion branching can be done at compile
time using an overloaded convertColorspaceRGBA(...) and sampler-type
-> sampler-type macro renames. With this design, the colorspace
sampler type names do not have to be standard -- simply available to
the page and overloaded in the GLSL conversion functions. This gives
the page author the most control and performance and is a superset of
the presently proposed functionality. The author is then free to
choose when the sampler conversion should be specialized, anticipate
format changes before they occur, and compile the appropriate shader
before it is needed.
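
A sketch of that design in full (samplerExternalYUV and the
convertColorspaceRGBA overloads are the hypothetical, non-standard
names proposed above, assumed to be supplied by the extension):

    var fsSource =
      '// the page renames the source type per selected media:\n' +
      '#define SOURCE_T samplerExternalYUV\n' +
      'precision mediump float;\n' +
      'uniform SOURCE_T uVideo;\n' +
      'varying vec2 vTexCoord;\n' +
      'void main() {\n' +
      '  // overload resolution picks the YUV path at compile time;\n' +
      '  // no run-time colorspace branch survives in the shader\n' +
      '  gl_FragColor = convertColorspaceRGBA(uVideo, vTexCoord);\n' +
      '}\n';

When the page knows the user is about to select an RGB source, it can
compile the samplerExternalRGB variant ahead of time and swap programs
at the moment of the switch.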

>> Is this (lack of) copy functionally observable? Why is a TEXTURE_2D
>> not equivalent to a paused video?
>
> A separate sampler type was chosen so as to enable implementations that may
> wish to insert code that does run-time selection of a shader branch to
> handle an external texture format without burdening all texture accesses
> with that extra code.

But individual colorspace branches must still burden every 'external'
texture access? If I can specialize the sampler type at load time, why
penalize me with a run-time conditional? convertColorspaceRGBA may
still contain a colorspace selection branch if passed the run-time
polymorphic 'samplerExternalOES'.

>> Is conversion of all specifications into a standard hypertext format
>> on Khronos' agenda?
>
> I don't think anyone has much enthusiasm for converting the roughly
> 700-page OpenGL specification from TeX to whatever you mean by "standard
> hypertext format." We did once plan to move all the specifications to
> DocBook format, but the idea did not gain traction. From my own experience
> of using DocBook for a relatively simple document, I can understand why. It
> can be an absolute nightmare to change even seemingly simple things.

TeX is a fine source language. DocBook is a fine target language. No
manual conversion is necessary.

Many TeX-to-HTML converters exist, not the least of which is HeVeA
<http://hevea.inria.fr/>.
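
For example (file name hypothetical), a single invocation

    hevea gl-es-spec.tex

emits gl-es-spec.html next to the source and is trivially wired into a
build script.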

I'm sure someone in our community would contribute a build system for
converting the present TeX source into (X)HTML, provided the spec
source were made public.

Will Khronos publish the TeX source for the specification of the Open Standard?

Are OpenGL ES extensions drafted in text/plain as they are published?
I may or may not already have a parser for this format.

Au revoir,

David

> Regards
>
> -Mark
>
