
Re: [Public WebGL] Few questions/comments about context creation attributes

On Sat, May 22, 2010 at 2:16 PM, Cedric Vivier <cedricv@neonux.com> wrote:
> On Sun, May 23, 2010 at 02:25, Vladimir Vukicevic <vladimir@mozilla.com> wrote:
>> We actually considered this a while ago, and it seemed that the best thing
>> to do for WebGL 1.0 was to leave this up to the WebGL implementation
> Sure enough.
>> same for alpha/depth/stencil; you can request those, but you have no way to
>> request a specific size.  (...) In the future we could extend the WebGL context attributes object to allow
>> specifying these things in more detail.  (It wouldn't break compat -- we'd
>> have to add something like minDepth and maxDepth anyway, so it would be just
>> additions.)
> Context creation is traditionally not about requesting a specific size
> (or range) but minimal sizes that must be met or exceeded (i.e. the
> driver/hardware is allowed to provide better precision or
> performance).
> Adding minX/maxX for every attribute (depth, alpha, stencil, and then
> at a later point in WebGL's life red, green and blue for consistency)
> sounds like overkill: more paths to implement, more to specify
> (including 'silly' corner cases like the behavior when both minDepth:
> and depth: are specified), and it does not seem to bring any obvious
> benefit compared to integers that work like their siblings in
> EGL/GLX/...
> For instance, the following code would work with all implementations
> while automatically benefiting from better support in any of them
> later on, without requiring a new revision of the spec:
>
> // request a context with at least a 24-bit depth buffer (less z-fighting)
> var gl = getContext("webgl", {depth: 24});
> // couldn't get a higher-precision depth buffer (the WebGL
> // implementation or driver does not support it), so use the defaults
> if (!gl) gl = getContext("webgl");

We discussed this at length earlier in the working group. The basic
question is whether getContext("webgl") should ever return null when
the hardware is capable of running WebGL. The earlier consensus was
that a "closest fit" context selection model is easier to program
against than a "minimum requirements" model. When choosing the current
behavior, we considered the following:

1. Two independent libraries that attempt to request specific context
creation attributes, and how well they compose.
2. How much code needs to be written by the developer to ensure that
the created context supports their requirements.
3. The common case where the developer does not care about anything
aside from whether a WebGL context was created at all.

For these reasons WebGL uses a closest-fit selection model, and
ignores attempts to specify context creation attributes on the second
and subsequent fetches of a context from a canvas (as opposed to
throwing an exception or returning null). Developers who care about
the specific capabilities of the created context can check them after
creation, for example with getContextAttributes().
> If this kind of forward compatibility is a concern, spec 1.0 could
> state that the only valid values are 0 or 16, giving the same
> behavior as the boolean while following standard EGL/GLX/... practice
> and, at the same time, removing the need to add new members later.
> On a different but related note, shouldn't WebGL be more conservative
> with context creation by default? What was the rationale for
> providing a depth and stencil buffer by default? Convenience?
> While I can understand it for depth, I'm not sure I understand why
> every WebGL context should be created with a stencil buffer by
> default. Since the feature is not that commonly used (is there even
> one WebGL demo enabling STENCIL_TEST currently?), most WebGL contexts
> (created with default attributes) will pay for a feature they do not
> use, potentially impacting performance somewhat and certainly
> reducing the amount of video memory available for textures and
> buffers.
> For both depth and stencil, while any developer who needs these
> features will enable them at context creation (one already has to
> enable DEPTH_TEST and/or STENCIL_TEST anyway!), it is likely that
> very few developers will think to disable them when they don't need
> one or both, making WebGL content in general use more memory and run
> less efficiently than it could.

Chris Marrin can probably comment on the decision to enable the
stencil buffer by default. Note that multisampling is also enabled by
default. These are probably the least surprising defaults for web
developers, and those who care about memory consumption can turn
either feature off.
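The opt-out, together with the first-call-wins behavior described above, can be sketched as follows. The attribute names (depth, stencil, antialias) are the real context creation attributes discussed in this thread; the mock canvas is an illustrative stand-in so the behavior can be demonstrated outside a browser.

```javascript
// A memory-conscious page can opt out of buffers it does not need:
var conservative = { depth: false, stencil: false, antialias: false };

// Mock canvas (hypothetical, not a browser API) modeling the behavior
// described above: creation attributes only matter on the FIRST
// getContext() call; later calls return the same context and their
// attributes argument is silently ignored.
function makeMockCanvas() {
  var cached = null;
  return {
    getContext: function (type, attrs) {
      if (cached === null) cached = { attrs: attrs || {} }; // first call wins
      return cached; // attrs on later calls are ignored
    }
  };
}

var canvas = makeMockCanvas();
var gl1 = canvas.getContext("webgl", conservative);
var gl2 = canvas.getContext("webgl", { stencil: true }); // silently ignored

console.log(gl1 === gl2);                 // true: same context object
console.log(gl1.attrs.stencil === false); // true: first call's attributes stuck
```

This is why libraries that share one canvas should agree on creation attributes up front: whichever library fetches the context first determines the attributes for everyone.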

