Re: [Public WebGL] gl.sizeInBytes
On 1/10/2010 12:30 PM, Patrick Baggett wrote:
In section 5.13.3, the first table defines the different types of
WebGL[Type]Arrays, and in that process, it defines the size, down to
the bit, of the elements inside each array. Since these types are
already completely specified, what is the purpose of
WebGLContext::sizeInBytes()?
Or to put it another way, how would an app handle sizeInBytes(FLOAT) == 8
if 5.13.3 defines WebGLFloatArray to be 32-bit floating point values?
Wouldn't it make more sense for WebGL[Type]Arrays to have elements of
size sizeInBytes([Type])? Or keep 5.13.3 and drop sizeInBytes() entirely?
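To make the concern concrete, a hypothetical sketch of how an app's byte
arithmetic depends on the reported element size; the interleaved layout,
the attribute locations, and the canvas id are illustrative assumptions,
and "experimental-webgl" was the context name used by early drafts:

    var canvas = document.getElementById("c");
    var gl = canvas.getContext("experimental-webgl");
    var posLoc = 0, texLoc = 1;  // illustrative attribute locations

    // Interleaved vertex data: 3 position floats + 2 texcoord floats.
    // The stride/offset math is only correct if sizeInBytes(FLOAT)
    // agrees with the actual element size of WebGLFloatArray (4 bytes
    // per 5.13.3); if it returned 8, every offset below would be wrong.
    var floatSize = gl.sizeInBytes(gl.FLOAT);
    var stride = 5 * floatSize;
    gl.vertexAttribPointer(posLoc, 3, gl.FLOAT, false, stride, 0);
    gl.vertexAttribPointer(texLoc, 2, gl.FLOAT, false, stride, 3 * floatSize);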
sizeInBytes is intended to be a convenience function, so that you can
write 100 * gl.sizeInBytes(gl.FLOAT) instead of having a magic "4"
there. It will always return the same size values that are listed in
5.13.3. But I do think that we can do without it; if anything, we could
just define constants on the gl object, e.g. gl.FLOAT_SIZE, or perhaps
WebGLFloatArray.ELEMENT_SIZE or something (though the latter is pretty
wordy).
- Vlad
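For comparison, a minimal sketch of the three styles discussed above,
assuming a WebGL context named gl; gl.FLOAT_SIZE and
WebGLFloatArray.ELEMENT_SIZE are only the constants proposed in the
reply, not existing API:

    var vertexCount = 100;

    // magic number
    var bytesA = vertexCount * 4;

    // convenience function in the current draft
    var bytesB = vertexCount * gl.sizeInBytes(gl.FLOAT);

    // proposed alternatives (hypothetical, not in the spec)
    var bytesC = vertexCount * gl.FLOAT_SIZE;
    var bytesD = vertexCount * WebGLFloatArray.ELEMENT_SIZE;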