
Re: [Public WebGL] webgl tests seem to require 24/32-bit canvas buffer

If we can add EGL-like APIs to canvas, a related approach would be to separate format enumeration from format selection. This can be treated as an additional API that works more like the way you create a context on other platforms, while preserving the existing convenient API, which would be redefined in terms of the new one:

There's a new object called "Config" or "ContextAttributes" (maybe based on eglConfig). It has properties that represent the things WebGL lets you specify in a format (e.g. bit depth, multisample count, buffer flipping policy). Maybe it looks like the existing contextAttributes for canvas.GetContext.

canvas.GetWebGLConfigs(opt_attrib): takes an optional eglConfig-like object that specifies minimums. It returns a list of eglConfigs, possibly sorted by some to-be-spec'd policy. This is the only way to access an actual eglConfig object.

canvas.CreateContextFromConfig(config): takes an eglConfig, and returns a context that matches the eglConfig exactly. Unlike GetWebGLConfigs, it only takes an actual eglConfig object, and not just some JSON-y version that specifies parameters.

canvas.GetContext("3d", attribs) ends up being implemented equivalently to:
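a plausible sketch (the mockCanvas stand-in and the helper name are mine for illustration; GetWebGLConfigs and CreateContextFromConfig are the names proposed above, not a shipping API):

```javascript
// Sketch: GetContext("3d", attribs) expressed in terms of the two
// proposed calls -- enumerate configs meeting the minimums, then create
// a context matching the best one exactly.
function getContext3D(canvas, attribs) {
  // Configs come back sorted by the to-be-spec'd policy, preferred first.
  var configs = canvas.GetWebGLConfigs(attribs);
  if (configs.length === 0) return null;  // no format satisfies the minimums
  return canvas.CreateContextFromConfig(configs[0]);
}

// Stand-in canvas exposing the proposed API, for illustration only.
var mockCanvas = {
  GetWebGLConfigs: function (attribs) {
    var available = [
      { alpha: true, bitDepth: 8 },
      { alpha: true, bitDepth: 4 }
    ];
    // Treat the attribs object as a set of minimums.
    return available.filter(function (c) {
      return !attribs || (attribs.bitDepth || 0) <= c.bitDepth;
    });
  },
  CreateContextFromConfig: function (config) {
    return { config: config };  // exact match on the given config
  }
};
```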


The existing semantics are preserved, while people who want fine-grained control over the pixel format can simply inspect the results of GetWebGLConfigs with their own policy.

A variant on this idea would be to map the Get/Create behavior onto the existing canvas.GetContext API by using additional parameters in the attributes. Enumeration could be requested with a "listOnly: true" parameter, and exact-match creation could be triggered with an "exactMatch: true" parameter. These attributes would default to false, and only one could be true at a time.
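The flag handling the variant would need can be sketched as a small validation step (the helper name and return values are mine; listOnly/exactMatch are the flag names proposed above):

```javascript
// Sketch: the mode selection implied by the variant. Both flags default
// to false, and at most one may be true at a time.
function selectGetContextMode(attribs) {
  var listOnly = !!(attribs && attribs.listOnly);
  var exactMatch = !!(attribs && attribs.exactMatch);
  if (listOnly && exactMatch) {
    throw new Error('listOnly and exactMatch are mutually exclusive');
  }
  if (listOnly) return 'enumerate';      // behave like GetWebGLConfigs
  if (exactMatch) return 'create-exact'; // behave like CreateContextFromConfig
  return 'default';                      // existing GetContext semantics
}
```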


On Wed, Jul 11, 2012 at 9:13 PM, Kenneth Russell <[email protected]> wrote:
Specifying minimum bit depths in the context attributes is a good
idea. This was discussed at a face-to-face meeting some time ago. The
reason the creation attributes are specified as flags rather than bit
depths was to keep things simple for the first iteration of the spec.
Specifying a pixel format selection algorithm is complicated. Look at
the manual page for glXChooseFBConfig as an example. WGL, GLX and Mac
OS X's CGL all have slightly different pixel format selection
algorithms. I don't know exactly how these have evolved in more recent
OpenGL versions (in particular, since it's become possible to request a
particular context version). Perhaps the algorithms are specified
in a more similar manner across platforms nowadays.

There was an expectation that the types of these fields could be
upgraded from booleans to integers without breaking code, since in
JavaScript they'll convert automatically. However, depending on the
semantics, that might not be true in practice. (If the "true" flag
were interpreted as meaning a minimum bit depth of 1, that might cause
problems for some applications.)
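The coercion in question is easy to see directly; a boolean passed where an integer minimum is expected becomes 1 or 0, which is why a legacy "true" flag could silently turn into a minimum bit depth of 1 (the helper name here is mine, for illustration):

```javascript
// Illustration of the upgrade path Ken describes: if a boolean creation
// flag became an integer minimum bit depth, old callers passing booleans
// would still "work" via numeric coercion -- true -> 1, false -> 0.
function effectiveMinDepth(attribValue) {
  return Number(attribValue);  // integers pass through unchanged
}
```

A minimum of 1 would match almost any format, so an application that passed `depth: true` expecting a full-depth buffer could get a much shallower one, which is the hazard noted above.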

Ben: how would you adjust your proposal to handle specifying stencil
depths, depth buffer depths, etc.?


On Wed, Jul 11, 2012 at 5:58 PM, Gregg Tavares (社用) <[email protected]> wrote:
> If we go the route of a flag I'll update the conformance tests to default to
> the 8bit flag (and of course add tests to test any new issues the flags
> bring up)
> On Wed, Jul 11, 2012 at 5:55 PM, Brandon Jones <[email protected]> wrote:
>> I rather like Ben's suggestion! Obviously you would want a way to query
>> the format of the buffer that was actually created in case you wanted to
>> adjust rendering behavior accordingly, but otherwise I have a hard time
>> imagining a situation where it wouldn't provide sufficient control.
>> I would assume that under such a system omitting the bitDepth parameter
>> would imply that the platform should pick its optimal format?
>> On Wed, Jul 11, 2012 at 5:39 PM, Ben Vanik <[email protected]> wrote:
>>> I'd much prefer being able to specify the format of the buffer vs.
>>> 'optimizePerformance'/etc. Unfortunately, that would require querying the
>>> context (that hasn't yet been created), and likely constants off the
>>> context. Yuck. So here's my idea:
>>> We already have an 'alpha' value, so really what we need is a minimum
>>> bits-per-pixel. Let's say 'bitDepth' - then when I create contexts where I
>>> can take the lower quality I'd pass:
>>> { alpha: true, bitDepth: 4 } (could pick 4444 or 8888+)
>>> or
>>> { alpha: false, bitDepth: 4 } or { alpha: false, bitDepth: 5 } (could
>>> pick 565 or 888+)
>>> If I wanted high quality:
>>> { alpha: true|false, bitDepth:8 } (get what we have today)
>>> By making it a minimum and a request an implementation could ignore it
>>> entirely, pick what it knows is most optimal, and most importantly: never
>>> degrade the quality of an author's content unexpectedly. If I'm building a
>>> photo editor, for example, and requested a minimum bpp of 8, I would rather
>>> have context creation fail than give me back 565. As a minimum it also
>>> allows implementations to, in the possibly-not-too-far future use 16 or
>>> 32bit depths if it were more efficient or the browser found it easier to
>>> work with.
>>> On Wed, Jul 11, 2012 at 5:20 PM, Kenneth Russell <[email protected]> wrote:
>>>> On Wed, Jul 11, 2012 at 5:06 PM, Vladimir Vukicevic
>>>> <[email protected]> wrote:
>>>> >
>>>> >
>>>> > ----- Original Message -----
>>>> >> Then honestly I'd prefer to see WebGL not on those phones and
>>>> >> hopefully that will be one more reason not to buy them. Whoever
>>>> >> made that phone shouldn't be rewarded for making a crappy GPU. Let's
>>>> >> not go backward. That's just my opinion though.
>>>> >
>>>> > Err.. we're talking about current gen phones here.  565/4444 is faster
>>>> > than 888/8888 on Galaxy Nexus (SGX540) and HTC One X (Tegra 3) -- I'm not
>>>> > sure that you could call either of those a "crappy GPU" :).  It's simply 2x
>>>> > the memory usage and bandwidth for some ops.  There are definitely more
>>>> > optimizations that can be done, but I'd really like to see at the very least
>>>> > a new context creation flag for "optimizeQuality" (or the inverse,
>>>> > "optimizePerformance" if we want to have 8888 be the default) so that
>>>> > content authors can at least choose.  I'd love to know what most mobile GL
>>>> > games are using these days, though!
>>>> It's an interesting data point (that I didn't know before) that Unity
>>>> uses a 565 back buffer by default. Since Unity is used heavily in
>>>> mobile games, it indicates many developers are using a lower precision
>>>> color buffer.
>>>> Given that current mobile GPUs get a significant speed boost from this
>>>> change, and the fact that it's been a desire from the beginning for
>>>> WebGL to work well on the existing crop of ES 2.0 phones, I agree it
>>>> sounds like a good idea to provide a context creation option. Can you
>>>> indicate which conformance tests would need to be updated to support
>>>> this?
>>>> -Ken