On Wed, Jul 11, 2012 at 5:58 PM, Gregg Tavares (社用) <[email protected]> wrote:
> If we go the route of a flag, I'll update the conformance tests to default to
> the 8-bit flag (and of course add tests covering any new issues the flags
> bring up).
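>
> For example (just a sketch -- assertTrue stands in for whatever check the
> harness actually uses), a default-context test could verify that with no
> flag passed we still get at least 8 bits per channel:
>
>   var gl = canvas.getContext('experimental-webgl');  // no new flags
>   // With 8-bit as the default, each channel should report >= 8 bits.
>   assertTrue(gl.getParameter(gl.RED_BITS) >= 8);
>   assertTrue(gl.getParameter(gl.GREEN_BITS) >= 8);
>   assertTrue(gl.getParameter(gl.BLUE_BITS) >= 8);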
>
>
>
> On Wed, Jul 11, 2012 at 5:55 PM, Brandon Jones <[email protected]> wrote:
>>
>> I rather like Ben's suggestion! Obviously you would want a way to query
>> the format of the buffer that was actually created in case you wanted to
>> adjust rendering behavior accordingly, but otherwise I have a hard time
>> imagining a situation where it wouldn't provide sufficient control.
>>
>> I would assume that under such a system omitting the bitDepth parameter
>> would imply that the platform should pick its optimal format?
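>>
>> Something along these lines, I'd imagine (a sketch -- 'bitDepth' is the
>> hypothetical attribute under discussion, and the query is just the GL bit
>> counts on the default framebuffer):
>>
>>   // Omit bitDepth and let the platform pick its optimal format...
>>   var gl = canvas.getContext('experimental-webgl', { alpha: false });
>>   // ...then query what was actually created and adjust rendering.
>>   var redBits   = gl.getParameter(gl.RED_BITS);    // e.g. 5 for 565
>>   var greenBits = gl.getParameter(gl.GREEN_BITS);  // e.g. 6 for 565
>>   var blueBits  = gl.getParameter(gl.BLUE_BITS);   // e.g. 5 for 565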
>>
>>
>> On Wed, Jul 11, 2012 at 5:39 PM, Ben Vanik <[email protected]> wrote:
>>>
>>> I'd much prefer being able to specify the format of the buffer vs.
>>> 'optimizePerformance'/etc. Unfortunately, that would require querying the
>>> context (that hasn't yet been created), and likely constants off the
>>> context. Yuck. So here's my idea:
>>>
>>> We already have an 'alpha' value, so really what we'd need is a minimum
>>> bits-per-channel value. Let's say 'bitDepth' - then when I create contexts
>>> where I can take the lower quality, I'd pass:
>>> { alpha: true, bitDepth: 4 } (could pick 4444 or 8888+)
>>> or
>>> { alpha: false, bitDepth: 4 } or { alpha: false, bitDepth: 5 } (could
>>> pick 565 or 888+)
>>> If I wanted high quality:
>>> { alpha: true|false, bitDepth: 8 } (get what we have today)
>>>
>>> By making it a minimum and a request, an implementation could ignore it
>>> entirely, pick what it knows is most optimal, and, most importantly, never
>>> degrade the quality of an author's content unexpectedly. If I'm building a
>>> photo editor, for example, and requested a minimum of 8 bits per channel,
>>> I would rather have context creation fail than give me back 565. As a
>>> minimum it also allows implementations, in the possibly-not-too-far
>>> future, to use 16- or 32-bit depths if that were more efficient or the
>>> browser found it easier to work with.
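>>>
>>> Concretely, context creation might look like this (a sketch only --
>>> 'bitDepth' is made up and not in any spec):
>>>
>>>   // Opaque buffer, at least 5 bits per channel: the implementation is
>>>   // free to hand back 565, 888, or anything deeper.
>>>   var gl = canvas.getContext('experimental-webgl',
>>>                              { alpha: false, bitDepth: 5 });
>>>   if (!gl) {
>>>     // Creation failed outright rather than silently handing back a
>>>     // shallower buffer than the requested minimum.
>>>   }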
>>>
>>>
>>> On Wed, Jul 11, 2012 at 5:20 PM, Kenneth Russell <[email protected]> wrote:
>>>>
>>>>
>>>> On Wed, Jul 11, 2012 at 5:06 PM, Vladimir Vukicevic <[email protected]> wrote:
>>>> >
>>>> >
>>>> > ----- Original Message -----
>>>> >> Then honestly I'd prefer to see WebGL not on those phones and
>>>> >> hopefully that will be one more reason not to buy them. Whoever
>>>> >> made that phone shouldn't be rewarded for making a crappy GPU. Let's
>>>> >> not go backward. That's just my opinion though.
>>>> >
>>>> > Err.. we're talking about current-gen phones here. 565/4444 is faster
>>>> > than 888/8888 on Galaxy Nexus (SGX540) and HTC One X (Tegra 3) -- I'm not
>>>> > sure that you could call either of those a "crappy GPU" :). It's simply 2x
>>>> > the memory usage and bandwidth for some ops. There are definitely more
>>>> > optimizations that can be done, but I'd really like to see at the very least
>>>> > a new context creation flag for "optimizeQuality" (or the inverse,
>>>> > "optimizePerformance" if we want to have 8888 be the default) so that
>>>> > content authors can at least choose. I'd love to know what most mobile GL
>>>> > games are using these days, though!
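>>>> >
>>>> > To put rough numbers on the bandwidth point: a 1280x720 back buffer
>>>> > is 1280 * 720 * 4 bytes = ~3.7 MB at 8888 versus ~1.8 MB at 565 or
>>>> > 4444, so just writing every pixel once at 60 Hz costs ~221 MB/s in
>>>> > the 32-bit case and half that in the 16-bit case, before any
>>>> > overdraw or blending reads.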
>>>>
>>>> It's an interesting data point (that I didn't know before) that Unity
>>>> uses a 565 back buffer by default. Since Unity is used heavily in
>>>> mobile games, it indicates that many developers are using a lower-precision
>>>> color buffer.
>>>>
>>>> Given that current mobile GPUs get a significant speed boost from this
>>>> change, and that it's been a desire from the beginning for
>>>> WebGL to work well on the existing crop of ES 2.0 phones, I agree it
>>>> sounds like a good idea to provide a context creation option. Can you
>>>> indicate which conformance tests would need to be updated to support
>>>> this?
>>>>
>>>> -Ken
>>>>
>>>> -----------------------------------------------------------
>>>> You are currently subscribed to
[email protected].
>>>> To unsubscribe, send an email to
[email protected] with
>>>> the following command in the body of your email:
>>>> unsubscribe public_webgl
>>>> -----------------------------------------------------------
>>>>
>>>
>>
>