On Jan 12, 2010, at 3:20 PM, Kenneth Russell wrote:
> On Tue, Jan 12, 2010 at 3:12 PM, Chris Marrin <cmarrin@apple.com> wrote:
>
> On Jan 12, 2010, at 3:08 PM, Kenneth Russell wrote:
>
> > ...We already specify the size of each type in the WebGLArray. That constrains what the vertex arrays can contain, which constrains the underlying OpenGL (or other) implementation. If a WebGLFloatArray contains 32-bit floats in every implementation and the VBO is sent in and defined as a buffer of FLOAT type, then WebGL constrains the type of FLOAT to be 32 bits.
> >
> > This is one way of looking at it: that the WebGL spec implies constraints on the OpenGL implementation underneath, for example that it supports 32-bit floats as input data. Another way of looking at it is that WebGL conforms to the typedefs of the OpenGL implementation on the platform.
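(To make the 32-bit point above concrete, here is a minimal sketch against the draft API. It assumes the WebGLFloatArray name from the current spec text; "gl" and "buffer" are placeholder names, not agreed API.)

    // The array itself commits the data to 32-bit floats before GL ever sees it.
    var verts = new WebGLFloatArray([0.1, 0.2, 0.3]);
    // 0.1 has no exact 32-bit representation, so reading verts[0] gives back
    // the nearest 32-bit value, not 0.1.
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    gl.bufferData(gl.ARRAY_BUFFER, verts, gl.STATIC_DRAW);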
> >
> > So wouldn't it be best to remove sizeInBytes() and replace it with constants for each supported WebGLArray type? This might be best done with a constant in each WebGLArray subtype (WebGLFloatArray.SIZE, WebGLUnsignedByteArray.SIZE, etc.).
> >
> > Realistically I think that every OpenGL implementation out there will support the primitive data types currently in the WebGL spec, so it's OK with me if we make this change. I would suggest a name like WebGLFloatArray.ELEMENT_SIZE to be more clear about the meaning.
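(For reference, a minimal sketch of how such a per-type constant would be used, assuming ELEMENT_SIZE as the name; "positionLoc", "texCoordLoc", and "vertexBuffer" are placeholder names for a hypothetical interleaved position + texcoord layout.)

    // WebGLFloatArray.ELEMENT_SIZE replaces a per-instance sizeInBytes() call.
    var floatSize = WebGLFloatArray.ELEMENT_SIZE;   // expected to be 4 bytes
    var stride = 5 * floatSize;                     // 3 floats position + 2 floats texcoord
    gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
    gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, stride, 0);
    gl.vertexAttribPointer(texCoordLoc, 2, gl.FLOAT, false, stride, 3 * floatSize);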
>
> ELEMENT_SIZE makes me think of elements in an array, not bytes. Maybe we should just go with SIZE_IN_BYTES (as wordy as that is), or BYTE_SIZE? :-)
>
> I don't like SIZE_IN_BYTES or BYTE_SIZE because we already have a method called byteLength(). We should have an indication that we're talking about the size of one element in the array.
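(A quick sketch of that distinction, using byteLength() as currently specified and ELEMENT_SIZE standing in for whatever name we settle on:)

    var verts = new WebGLFloatArray([0, 1, 2, 3, 4, 5]);
    verts.byteLength();              // whole array: 6 elements * 4 bytes = 24
    WebGLFloatArray.ELEMENT_SIZE;    // one element: 4 bytes
    // A per-element constant complements byteLength() rather than duplicating it:
    var count = verts.byteLength() / WebGLFloatArray.ELEMENT_SIZE;   // 6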