
Re: [Public WebGL] gl.sizeInBytes



On Jan 10, 2010, at 5:10 PM, Vladimir Vukicevic wrote:

> On 1/10/2010 4:24 PM, Chris Marrin wrote:
>> On Jan 10, 2010, at 12:44 PM, Vladimir Vukicevic wrote:
>> 
>>   
>>> On 1/10/2010 12:30 PM, Patrick Baggett wrote:
>>>     
>>>> In section 5.13.3, the first table defines the different types of WebGL[Type]Arrays, and in that process, it defines the size, down to the bit, of the elements inside each array. Since these types are already completely specified, what is the purpose of WebGLContext::sizeInBytes()?
>>>> 
>>>> Or to put it another way, how would an app handle sizeInBytes(FLOAT) == 8 if 5.13.3 defines WebGLFloatArray elements to be 32-bit floating-point values? Wouldn't it make more sense for WebGL[Type]Arrays to have elements of size sizeInBytes([Type])? Or keep 5.13.3 and drop sizeInBytes() entirely?
>>>>       
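To make the conflict concrete, here is a hypothetical sketch using the draft array names from 5.13.3 (the sizeInBytes(FLOAT) == 8 case is invented purely for illustration):

    // Per the table in 5.13.3, elements are fixed at 32 bits:
    var verts = new WebGLFloatArray(100);   // always 100 * 4 = 400 bytes

    // If an implementation ever answered 8 here, any stride/offset
    // math based on it would disagree with the array's actual
    // 4-byte elements:
    var stride = 3 * gl.sizeInBytes(gl.FLOAT);  // 24 instead of 12
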
>>> sizeInBytes is intended to be a convenience function, so that you can write 100 * gl.sizeInBytes(gl.FLOAT) instead of having a magic "4" there.  It will always return the same size values that are listed in 5.13.3.  But I do think that we can do without it; if anything, we could just define constants on the gl object, e.g. gl.FLOAT_SIZE, or perhaps WebGLFloatArray.ELEMENT_SIZE or something (though the latter is pretty wordy).
>>>     
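As a sketch of the convenience use Vladimir describes -- buffer setup without hard-coded sizes (the interleaved layout and the posLoc/normLoc attribute locations are assumed for the example):

    var floatSize = gl.sizeInBytes(gl.FLOAT);  // 4, matching 5.13.3

    // 100 vertices, each an interleaved position (3 floats) plus
    // a normal (3 floats):
    gl.bufferData(gl.ARRAY_BUFFER, 100 * 6 * floatSize, gl.STATIC_DRAW);

    var stride = 6 * floatSize;
    gl.vertexAttribPointer(posLoc,  3, gl.FLOAT, false, stride, 0);
    gl.vertexAttribPointer(normLoc, 3, gl.FLOAT, false, stride,
                           3 * floatSize);
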
>> 
>> Are you saying that we could not support a GL implementation that has 8-byte floats as its native floating-point type, or 8-byte ints? I thought the entire point of that call was to support such things.
>>   
> 
> Interesting, that's not what I thought it would do -- I thought it was just a convenience to avoid having magic integer numbers all over the place.  We currently have no support for double-precision floating-point arrays or 64-bit int arrays, and making the sizes of the various types potentially variable would both cause problems and, I think, be totally unnecessary for any GL implementation out there today.  It would also make it much harder to use the arrays for dealing with any data read from disk or the network.  Are there any GL implementations out there where GL_FLOAT is a 64-bit double?  Reading the GL spec, 'float' only has a 32-bit minimum requirement, so I suppose it could be implemented using 64-bit doubles, but I doubt anyone does that.
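
The disk/network point is easy to see in a sketch. Assuming the draft constructor that views a region of a WebGLArrayBuffer (buffer here stands for bytes already fetched, e.g. a 4-byte count followed by that many floats):

    // Read the count from the first 4 bytes:
    var header = new WebGLUnsignedIntArray(buffer, 0, 1);
    var count  = header[0];

    // This lands on the right bytes only because FLOAT is pinned at
    // 4 bytes; if element sizes varied by implementation, the same
    // file would parse differently on different GL stacks:
    var floats = new WebGLFloatArray(buffer, 4, count);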


I don't know of any. But my understanding from Jon Leech was that such a thing is possible and that nothing in the GL API would prevent it. I'm not that concerned about it, though. I believe that for the foreseeable future graphics hardware will at least provide optimal paths for 32-bit floats and 32-bit integers. And I think that when it does become practical to support doubles in a wide variety of embedded hardware, it will be done as a new type with new API calls, as it is on desktop OpenGL today.

That said, I still don't have a problem with the sizeInBytes() call, even if all it does is avoid "magic numbers" in the code.

-----
~Chris
cmarrin@apple.com



