On 1/11/2010 11:48 AM, Kenneth Russell wrote:
> On Mon, Jan 11, 2010 at 10:37 AM, Chris Marrin <firstname.lastname@example.org> wrote:
>> So then maybe it would be better to replace these with constants
>> (ctx.FLOAT_SIZE, ctx.INT_SIZE, ctx.UNSIGNED_SHORT_SIZE), etc.?
>> On Jan 10, 2010, at 11:44 PM, Patrick Baggett wrote:
>>> It is hardly a matter of "does a GL implementation have 64-bit
>>> GL_FLOATs", but more of "the WebGL spec explicitly states the size of
>>> its types." -- the latter entirely shutting off the concept of
>>> "implementation dependent" sizes.
>>> Vlad's right, even if gl.sizeInBytes(GL_FLOAT) did return 8
>>> (double prec.) there would be no way to efficiently/portably buffer
>>> the data.
> I think we should leave sizeInBytes as a function rather than
> defining constants. On a hypothetical platform which defined GLfloat
> as a double, the WebGL implementation would be responsible for making
> WebGLFloatArray manage double-precision rather than single-precision
> floating point numbers.
Hmm... I don't think it should do this; see below.
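
To make the hazard concrete, here is a rough sketch of the kind of
buffer setup virtually everyone writes today (gl, posLoc, texLoc and
the interleaved layout are invented purely for illustration):

    // Interleaved vertex data: position (3 floats) + texcoord (2 floats).
    var FLOAT_SIZE = 4;           // assumed, never queried -- as real code does
    var stride = 5 * FLOAT_SIZE;  // bytes per vertex
    var vertexCount = 100;        // arbitrary example count
    var data = new WebGLFloatArray(vertexCount * 5);
    // ... fill data ...
    var buf = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buf);
    gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW);
    gl.vertexAttribPointer(posLoc, 3, gl.FLOAT, false, stride, 0);
    gl.vertexAttribPointer(texLoc, 2, gl.FLOAT, false, stride,
                           3 * FLOAT_SIZE);

If WebGLFloatArray were allowed to manage 8-byte elements on some
platform, every stride and offset above would be silently wrong there,
with no error raised anywhere.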
> As we consider proposing broader use of these array-like types,
> we will have to specify the exact size of the machine types they
> manage. However, the mapping from e.g. WebGLFloatArray to e.g.
> FloatArray vs. DoubleArray would need to be flexible.
We already have the exact sizes of the machine types specified for the
WebGL Arrays; I think this needs to remain the case. Otherwise we have
the problem that people will just assume 4 bytes anyway, because that's
true on essentially every platform today, and the world breaks the
moment an 8-byte "GL_FLOAT" platform shows up. The alternative is that
people have to call sizeInBytes constantly to get correct, portable
behaviour, and we've tried pretty hard to avoid requirements like that
(e.g. UniformIndex and friends).
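
For contrast, a fully portable version of the sketch above under
implementation-dependent sizes would have to route every byte
computation through the query:

    // Forget sizeInBytes in even one place and you silently break on
    // a hypothetical 8-byte-float platform.
    var fsize = gl.sizeInBytes(gl.FLOAT);
    var stride = 5 * fsize;
    gl.vertexAttribPointer(posLoc, 3, gl.FLOAT, false, stride, 0);
    gl.vertexAttribPointer(texLoc, 2, gl.FLOAT, false, stride,
                           3 * fsize);

With the element sizes fixed by the spec, a plain 4 is simply correct
everywhere, and neither sizeInBytes nor ctx.FLOAT_SIZE-style constants
are needed for correctness.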