
Re: [Public WebGL] WebGL 1.0 ratified and released



On Wed, Mar 9, 2011 at 10:40 AM, Kenneth Russell <kbr@google.com> wrote:
> On Tue, Mar 8, 2011 at 6:02 PM, David Sheets <kosmo.zb@gmail.com> wrote:
>>
>> Speaking of 16-bit depth buffers... any chance of getting OES_depth24
>> ported to WebGL?
>
> The WebGL working group agreed in a recent meeting that OES_depth24
> would be incorporated into the WebGL extension registry. Follow the
> instructions on http://www.khronos.org/registry/webgl/extensions/ for
> checking out the repository and note that this is one of the
> extensions mentioned in the HTML comments.

Good to know! I'll watch the registry repository from now on.

> All that's needed is to draft the extension spec and attach it as a
> patch to a WebGL bug filed under http://www.khronos.org/bugzilla/ . Of
> course, the various vendors would need to start supporting it.

I might have a few spare cycles to do this soon.

>> WebKit nightly appears to always use 16-bit depth buffers on a test
>> machine even when 24-bit depth buffers are available. On this same
>> machine (late 2010 iMac), Chrome renders with 24-bit depth but refuses
>> to create a 24-bit depth attachment. Firefox 4 beta 12 always upgrades
>> the precision.
>
> Please file a bug on https://bugs.webkit.org/ under component WebGL
> about the 16-bit depth buffer with WebKit nightlies.

https://bugs.webkit.org/show_bug.cgi?id=56074

> How are you attempting to create a 24-bit depth attachment with
> Chrome? There are currently no specified enums in WebGL for anything
> except 16-bit depth renderbuffers.

DEPTH_BITS reports 24 and Chrome renders with a 24-bit depth buffer,
but custom attachments come out as only 16-bit (just as I "request").
This only happens on the iMac test machine whose specs are listed in
https://bugs.webkit.org/show_bug.cgi?id=56074

Chrome delivers 24-bit depth buffers when available on all other
machines I have tested.
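
For what it's worth, here's roughly what I'm doing to compare the two
cases (the canvas id, sizes, and color attachment are placeholders;
DEPTH_COMPONENT16 is the only sized depth format WebGL 1.0 specifies
for renderbufferStorage):

    var canvas = document.getElementById("c");
    var gl = canvas.getContext("experimental-webgl", { depth: true });

    // Default framebuffer: reports 24 on the iMac.
    console.log("default DEPTH_BITS: " + gl.getParameter(gl.DEPTH_BITS));

    // User framebuffer with explicit color and depth renderbuffers.
    var fbo = gl.createFramebuffer();
    var color = gl.createRenderbuffer();
    var rb = gl.createRenderbuffer();

    gl.bindRenderbuffer(gl.RENDERBUFFER, color);
    gl.renderbufferStorage(gl.RENDERBUFFER, gl.RGBA4, 512, 512);
    gl.bindRenderbuffer(gl.RENDERBUFFER, rb);
    gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, 512, 512);

    gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
    gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                               gl.RENDERBUFFER, color);
    gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
                               gl.RENDERBUFFER, rb);

    // Reports 16 here, even though the default framebuffer got 24.
    console.log("FBO DEPTH_BITS: " + gl.getParameter(gl.DEPTH_BITS));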

It seems that depth buffers should either be consistently upgraded to
higher precision (say, whatever DEPTH_BITS reports) or be allocated at
only the minimum, requiring me to enable OES_depth24 and use the
higher-precision enum to get anything more.
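
If and when the extension lands, I'd expect usage roughly like this
(the extension name string and the DEPTH_COMPONENT24 enum hanging off
the extension object are my guesses -- there's no WebGL spec text yet,
so this is just a sketch mirroring the GLES extension):

    // Hypothetical OES_depth24 usage; enum location is an assumption.
    var ext = gl.getExtension("OES_depth24");
    gl.bindRenderbuffer(gl.RENDERBUFFER, rb);
    if (ext) {
      // 24-bit fixed-point depth renderbuffer.
      gl.renderbufferStorage(gl.RENDERBUFFER, ext.DEPTH_COMPONENT24,
                             512, 512);
    } else {
      // Core WebGL 1.0 only guarantees the 16-bit format.
      gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16,
                             512, 512);
    }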

> I believe, but am not 100% sure, that all of the depth attachments are
> fixed point -- or, to be more precise, that they are stored as integer
> values from 0..2^n-1, where n is the number of bits in the depth
> buffer.

> I don't know about upgrading from fixed-point to floating-point depth
> buffers, but it is definitely legal for an OpenGL ES 2.0
> implementation to use more bits of precision than the user requested.
> See the documentation for RenderbufferStorage, last paragraph, page
> 112 in the OpenGL ES 2.0 spec version 2.0.25.

This all appears correct. I was misremembering the ARB_texture_float
precision fallback decision. I can't find anything in the specs that
disallows promoting integer to float or low precision to high
precision.
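
For the record, the precision gap is easy to put a number on under
that storage model (values quantized to integers in [0, 2^n - 1]):

    // Smallest representable depth step for an n-bit fixed-point buffer.
    function depthStep(n) { return 1 / (Math.pow(2, n) - 1); }
    depthStep(16);   // ~1.5e-5
    depthStep(24);   // ~6.0e-8, roughly 256 times finer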

David

> -Ken
>
>> Thanks,
>>
>> David Sheets
>>
>> On Tue, Mar 8, 2011 at 5:45 PM, Chris Marrin <cmarrin@apple.com> wrote:
>>>
>>>
>>> On Mar 8, 2011, at 1:30 PM, L. Van Warren wrote:
>>>
>>>> I’m an old graphics guy who trained in the Utah Graphics Lab in the mid-1980s.
>>>>
>>>> Looking over the new WebGL standard gives a moment for reflection of the great strides that have been taken.
>>>>
>>>> Public standards like this are really important.
>>>>
>>>> My only concern is that double precision arithmetic is not being supported.
>>>>
>>>> Perhaps with modern graphics GPUs, 64-bit arithmetic is a bottleneck or a hassle; I don’t know.
>>>>
>>>> I just recall a family of problems which went away when we computed in double precision.
>>>>
>>>> The images were much crisper and easier to filter properly.
>>>>
>>>> These issues came up in numerically brittle intersection calculations, such as the intersections of nearly parallel lines in perspective.
>>>>
>>>> Other than this it looks great.
>>>
>>> The Typed Array spec does support double-precision floats, and all numbers in JS are doubles.
>>> It's only WebGL that can't accept that data to communicate with the GPU, and that is a
>>> limitation of OpenGL ES 2.0. The OpenGL API "syntax" deals with double-precision data just
>>> fine, and I have no doubt that a future version of GLES (and therefore WebGL) will support
>>> it. Heck, there's still some embedded hardware that is limited to 16-bit depth buffers!
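
To illustrate that point (variable names here are mine, and 'buf' is
assumed to be a buffer object created earlier):

    // JS numbers and Float64Array carry full doubles, but the WebGL /
    // GLES2 pipeline only consumes 32-bit floats -- vertexAttribPointer
    // has no DOUBLE type -- so data is narrowed before reaching the GPU.
    var positions = new Float64Array([Math.PI, Math.E, Math.SQRT2]);
    var narrowed = new Float32Array(positions);  // precision lost here
    gl.bindBuffer(gl.ARRAY_BUFFER, buf);
    gl.bufferData(gl.ARRAY_BUFFER, narrowed, gl.STATIC_DRAW);
    gl.vertexAttribPointer(0, 3, gl.FLOAT, false, 0, 0);  // FLOAT is the
                                                          // widest type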
>>>
>>> -----
>>> ~Chris
>>> cmarrin@apple.com
>>>
>>>
>>>
>>>
>>>
>

-----------------------------------------------------------
You are currently subscribed to public_webgl@khronos.org.
To unsubscribe, send an email to majordomo@khronos.org with
the following command in the body of your email:
unsubscribe public_webgl
-----------------------------------------------------------