
Re: [Public WebGL] webgl tests seem to require 24/32-bit canvas buffer



I rather like Ben's suggestion! Obviously you would want a way to query the format of the buffer that was actually created in case you wanted to adjust rendering behavior accordingly, but otherwise I have a hard time imagining a situation where it wouldn't provide sufficient control.

I would assume that under such a system, omitting the bitDepth parameter would imply that the platform should pick its optimal format?
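
For reference, the "what did I actually get" half can already be sketched with parameters WebGL exposes today; only the bitDepth attribute below is the proposed (hypothetical) piece:

  // bitDepth is the attribute proposed in this thread, not in any spec; the
  // RED_BITS/GREEN_BITS/BLUE_BITS/ALPHA_BITS queries are standard WebGL.
  var canvas = document.querySelector('canvas');
  var gl = canvas.getContext('experimental-webgl', { alpha: false, bitDepth: 5 });
  if (gl) {
    var bits = {
      r: gl.getParameter(gl.RED_BITS),
      g: gl.getParameter(gl.GREEN_BITS),
      b: gl.getParameter(gl.BLUE_BITS),
      a: gl.getParameter(gl.ALPHA_BITS)
    };
    // e.g. switch on a dithering workaround if we only got a 565-style buffer
    var lowPrecision = bits.r < 8 || bits.g < 8 || bits.b < 8;
  }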

On Wed, Jul 11, 2012 at 5:39 PM, Ben Vanik <benvanik@google.com> wrote:
I'd much prefer being able to specify the format of the buffer vs. 'optimizePerformance'/etc. Unfortunately, that would require querying the context (that hasn't yet been created), and likely constants off the context. Yuck. So here's my idea:

We already have an 'alpha' value, so really what we'd need is a minimum bits per channel. Let's say 'bitDepth' - then when I create contexts where I can take the lower quality, I'd pass:
{ alpha: true, bitDepth: 4 } (could pick 4444 or 8888+)
or
{ alpha: false, bitDepth: 4 } or { alpha: false, bitDepth: 5 } (could pick 565 or 888+)
If I wanted high quality:
{ alpha: true|false, bitDepth: 8 } (get what we have today)

By making it a minimum and a request, an implementation could ignore it entirely, pick what it knows is optimal, and most importantly: never degrade the quality of an author's content unexpectedly. If I'm building a photo editor, for example, and requested a minimum of 8 bits per channel, I would rather have context creation fail than get back 565. As a minimum it also allows implementations, in the possibly-not-too-far future, to use 16- or 32-bit depths if that were more efficient or the browser found it easier to work with.
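
To make that concrete, the two ends of the spectrum would look roughly like this from the author's side (sketch only: 'bitDepth' is the attribute proposed here, and nothing implements it today):

  var canvas = document.querySelector('canvas');

  // Quality-sensitive app (photo editor): insist on 8 bits per channel and
  // treat creation failure as failure rather than accepting a 565 buffer.
  var gl = canvas.getContext('experimental-webgl', { alpha: false, bitDepth: 8 });
  if (!gl) {
    // Creation failed instead of silently handing back 565; fall back or
    // tell the user (app-specific).
    console.log('No 8-bit-per-channel WebGL buffer available');
  }

  // Performance-tolerant app (game): 5 bits per channel is acceptable, so
  // the implementation is free to pick 565, 888, or anything meeting it.
  var glFast = canvas.getContext('experimental-webgl', { alpha: false, bitDepth: 5 });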


On Wed, Jul 11, 2012 at 5:20 PM, Kenneth Russell <kbr@google.com> wrote:

On Wed, Jul 11, 2012 at 5:06 PM, Vladimir Vukicevic
<vladimir@mozilla.com> wrote:
>
>
> ----- Original Message -----
>> Then honestly I'd prefer to see WebGL not on those phones and
>> hopefully that will be one more reason not to buy them. Whoever
>> made that phone shouldn't be rewarded for making a crappy GPU. Let's
>> not go backward. That's just my opinion though.
>
> Err.. we're talking about current gen phones here.  565/4444 is faster than 888/8888 on Galaxy Nexus (SGX540) and HTC One X (Tegra 3) -- I'm not sure that you could call either of those a "crappy GPU" :).  It's simply 2x the memory usage and bandwidth for some ops.  There are definitely more optimizations that can be done, but I'd really like to see at the very least a new context creation flag for "optimizeQuality" (or the inverse, "optimizePerformance" if we want to have 8888 be the default) so that content authors can at least choose.  I'd love to know what most mobile GL games are using these days, though!
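
For concreteness, the flag described above would presumably just be another
context creation attribute along these lines (the name is only a proposal;
nothing implements it today):

  // Hypothetical: 'optimizePerformance' is a proposed attribute name from
  // this thread; an implementation honoring it could pick a 565/4444 buffer.
  var canvas = document.querySelector('canvas');
  var gl = canvas.getContext('experimental-webgl', {
    alpha: false,
    optimizePerformance: true
  });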

It's an interesting data point (that I didn't know before) that Unity
uses a 565 back buffer by default. Since Unity is used heavily in
mobile games, it indicates that many developers are using a
lower-precision color buffer.

Given that current mobile GPUs get a significant speed boost from this
change, and that it's been a desire from the beginning for WebGL to
work well on the existing crop of ES 2.0 phones, I agree it sounds
like a good idea to provide a context creation option. Can you
indicate which conformance tests would need to be updated to support
this?

-Ken
