
Re: [Public WebGL] WEBGL_dynamic_texture extension proposal



On 26/07/2012 10:51, David Sheets wrote:
> The stream producer interface would be useful to any component
> (standard or custom) that wishes to supply a dynamic,
> time-synchronized texture stream to a stream consumer ...
In essence, this means this WebGL extension will include extensions for other HTML elements. If nobody here objects to it doing so, then in the next draft I'll add a stream producer interface that will be used to acquire image frames, etc.
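
To make that concrete, a page might end up driving such a producer roughly as follows. This is only a sketch: createStream, acquireImage, releaseImage, and TEXTURE_EXTERNAL are placeholder names, not anything the current draft defines.

    var gl = document.getElementById("c").getContext("experimental-webgl");
    var ext = gl.getExtension("WEBGL_dynamic_texture");
    var video = document.getElementById("v"); // the stream producer

    var tex = gl.createTexture();
    gl.bindTexture(ext.TEXTURE_EXTERNAL, tex); // placeholder target name
    var stream = ext.createStream(video);      // placeholder producer hookup

    function draw() {
      stream.acquireImage();  // latch the frame the producer says is current
      // ... draw calls sampling the external texture go here ...
      stream.releaseImage();  // hand the frame back to the producer
      window.requestAnimationFrame(draw);
    }
    window.requestAnimationFrame(draw);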
>>> The browser has a handful of metrics useful to the page for estimating
>>> time-to-render latency. ...
>> Do the browsers currently expose these metrics to JS apps? If so, where is
>> the API documented?
> To my knowledge, these metrics are presently unavailable to scripts
> except through per-page-load benchmarking.
Is there any documentation anywhere about these metrics? I'm not actually sure they are useful in this case, but it would be helpful to know what they are.

Separately, but relatedly, I was already aware of the issue Florian pointed out: the WebGL application does not know when the final page image will appear on the screen. I intend to look at previous GL extensions that have dealt with similar issues and propose a solution.
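
For what it's worth, about the best a script can do today is measure callback-to-callback time with requestAnimationFrame, which bounds the refresh period but says nothing about when the frame actually reaches the glass:

    // Measures the interval between animation callbacks only; it does NOT
    // tell us when the composited page image is actually displayed.
    var last = 0;
    function tick(now) { // 'now' is the timestamp passed to rAF callbacks
      if (last > 0) {
        var interval = now - last; // approximates the display refresh period
        // at best a lower bound on acquire-to-display latency
      }
      last = now;
      window.requestAnimationFrame(tick);
    }
    window.requestAnimationFrame(tick);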
> I am stacking dynamic textures 4 deep sourced from a single video
> element. V => A => B => C where B and C may depend on any subset of
> previous producers in the chain.
>
> In this use case, HTMLVideoElement V has <=3 different consumer
> latencies, WebGLRenderingContext A has <=2 consumer latencies, etc.
>
> Should 3 separate HTMLVideoElements be created with the same video
> source? How do I keep them in sync?
As best I can understand your scenario, you are composing several video frames to produce a final image. Keeping in mind that the purpose of setting the consumer latency is so the producer can keep the audio synchronized, the answer to your question is to set the latency to the time from acquiring the first image frame to when the final composited image is displayed.
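
In your V => A => B => C chain that would look something like the following; setConsumerLatency is purely a placeholder, since the draft does not yet name such a function:

    // Tell the producer (V) how long an acquired frame takes to reach the
    // screen so it can delay the audio to match. Numbers are illustrative.
    var msVtoA = 16; // V's frame is latched by context A
    var msAtoB = 16; // A's output is consumed by B
    var msBtoC = 16; // B's output is composited by C and displayed
    videoStream.setConsumerLatency(msVtoA + msAtoB + msBtoC);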
> I agree with you, with one caveat: the implementors must not limit the
> functionality of this interface due to their particular implementation
> of colorspace conversion (e.g. dynamically modifying page shaders
> which normalize colorspace just-in-time and restricting the interface
> to avoid shader permutation blow-up or shader recompile rendering
> pauses).
>
> ...
>
> How does each implementor produce the appropriate shader permutation
> at the appropriate time? To my mind, the texture data should always be
> colorspace-normalized RGBA entering the author's shader. If this is
> the case, there is no limitation on incoming encoding and the concerns
> over unification of single-LOD TEXTURE_2D and TEXTURE_EXTERNAL are
> unfounded. Any source of RGBA (e.g. DOM fragments for privileged
> pages) should be able to become a dynamic texture producer.
>
> If the implementors dynamically modify my shaders with branches for
> colorspace conversions, or recompile my shaders with each permutation
> of incoming colorspace, I would like to be able to turn this off and
> perform the conversion calls myself, having knowledge of which
> permutations will be needed and when.
>
> ...
>
> YUV doesn't fit in a vec3? If the underlying colorspace of the data is
> not exposed, it should also not affect the formats, types, encodings,
> or availability of generic dynamic texture producers. This may be a
> burden for implementors, as you and Florian have discussed earlier in
> this thread.
>
> I would be perfectly happy with either pole of texture data
> availability: full colorspace exposure with a proliferation of sampler
> types and an overloaded RGBA conversion function (switchable with
> sampler type renaming macros), OR absolutely homogeneous colorspace
> from every source without restriction.
>
> Sampling does not need to be available for non-RGBA colorspaces
> (samplerExternalYUV would be abstract and only consumed by
> conversion). Colorspace conversion branching can be done at
> compile-time using overloaded convertColorspaceRGBA(...) and sampler
> type -> sampler type macro renames. With this design, the colorspace
> sampler type names do not have to be standard -- simply available to
> the page and overloaded in the GLSL conversion functions. This gives
> the page author the most control and performance, and is a superset of
> the presently proposed functionality. The author is now free to choose
> at what time the sampler conversion should be specialized, and may
> anticipate format changes in advance of their occurrence and compile
> the appropriate shader before it is needed.
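
If I read the macro-renaming idea correctly, a page would assemble its fragment shader along these lines. Everything here is illustrative, using the names from the paragraph above rather than any agreed syntax:

    // The page retargets samplerFrame per source and supplies one
    // convertColorspaceRGBA overload per colorspace it expects to see.
    var frag =
        "#define samplerFrame samplerExternalYUV\n" +
        "uniform samplerFrame uFrame;\n" +
        "varying vec2 vTexCoord;\n" +
        "// One overload per colorspace; chosen at compile time, no branches.\n" +
        "vec4 convertColorspaceRGBA(samplerExternalYUV s, vec2 uv) {\n" +
        "  vec3 yuv = texture2D(s, uv).xyz - vec3(0.0, 0.5, 0.5);\n" +
        "  return vec4(mat3(1.0,    1.0,    1.0,\n" +   // BT.601-style
        "                   0.0,   -0.344,  1.772,\n" + // YUV -> RGB matrix
        "                   1.402, -0.714,  0.0) * yuv, 1.0);\n" +
        "}\n" +
        "void main() { gl_FragColor = convertColorspaceRGBA(uFrame, vTexCoord); }\n";

Anticipating a format change would then just mean compiling a second program from the same source with the #define retargeted to the new sampler type.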
I don't entirely understand what you are proposing, but the following points must be kept in mind:
> TeX is a fine source language. DocBook is a fine target language. No
> manual conversion is necessary.
>
> Many TeX-to-HTML converters exist, not the least of which is HeVeA
> <http://hevea.inria.fr/>.
>
> I'm sure someone in our community would contribute a build system for
> converting the present TeX source into (X)HTML, provided the spec
> source was made public.
>
> Will Khronos publish the TeX source for the specification of the Open Standard?
The question has never been raised before and therefore has never been discussed. I personally don't see any reason why not. I'll raise the question within Khronos.

As far as I'm concerned, the only reason for using PDF is that you have everything in a single file. I do wish the browser makers would support MHTML or something like it.
> Are OpenGL ES extensions drafted in text/plain as they are published?
> I may or may not already have a parser for this format.
Yes, they are drafted in text/plain. You can find the template at http://www.opengl.org/registry/doc/template.txt. There was talk of moving to DocBook, but it hasn't happened.

Regards

    -Mark
