
Re: [Public WebGL] gl.sizeInBytes



On Mon, Jan 11, 2010 at 12:31 PM, Vladimir Vukicevic <vladimir@mozilla.com> wrote:
On 1/11/2010 11:48 AM, Kenneth Russell wrote:
On Mon, Jan 11, 2010 at 10:37 AM, Chris Marrin <cmarrin@apple.com> wrote:

On Jan 10, 2010, at 11:44 PM, Patrick Baggett wrote:

> It is hardly a matter of "does a GL implementation have 64-bit GL_FLOATs", but more a matter of "the WebGL spec explicitly states the size of its types" -- and the latter entirely rules out the concept of implementation-dependent sizes.
>
> Vlad's right: even if gl.sizeInBytes(GL_FLOAT) did return 8 (double precision), there would be no way to buffer the data both efficiently and portably.


So then maybe it would be better to replace these with constants (ctx.FLOAT_SIZE, ctx.INT_SIZE, ctx.UNSIGNED_SHORT_SIZE, etc.)?

I think we should leave sizeInBytes as a function rather than defining constants. On a hypothetical platform that defined GLfloat as a double, the WebGL implementation would be responsible for making WebGLFloatArray manage double-precision rather than single-precision floating-point numbers.
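
For concreteness, a minimal sketch of the two designs being weighed. sizeInBytes is the function in the current draft; the *_SIZE names are only the constants proposed above, not part of any spec:

    // Function form (current draft): the size is queried at runtime,
    // so it can vary per platform.
    var floatSize = gl.sizeInBytes(gl.FLOAT);

    // Constant form (proposed): the size is baked into the context
    // object as a fixed property.
    var floatSize2 = gl.FLOAT_SIZE;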

Hmm... I don't think it should do this; see below.


As we consider proposing broader use of these array-like types, we will have to specify the exact size of the machine types they manage. However, the mapping from e.g. WebGLFloatArray to an underlying FloatArray vs. DoubleArray would need to be flexible.

We already have the exact size of the machine types specified for the WebGL arrays, and I think that needs to remain the case. Otherwise we have the problem that people will just assume 4 bytes anyway, because that is currently the probably-100% case, and the world breaks if an 8-byte "GL_FLOAT" platform ever appears. The alternative is that people have to use sizeInBytes constantly to get correct, portable behaviour, and we've tried pretty hard to avoid requirements like that (e.g. UniformIndex and friends).
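
To illustrate the portability burden being described, a sketch of interleaved vertex setup (the attribute locations and five-float layout are invented for the example). The first form is what nearly everyone will write; the second is what all code would have to look like for an 8-byte GL_FLOAT platform not to break it:

    // Common form: hard-codes 4 bytes per float.
    gl.vertexAttribPointer(posLoc, 3, gl.FLOAT, false, 5 * 4, 0);
    gl.vertexAttribPointer(texLoc, 2, gl.FLOAT, false, 5 * 4, 3 * 4);

    // Fully portable form: every stride and offset goes through sizeInBytes.
    var fs = gl.sizeInBytes(gl.FLOAT);
    gl.vertexAttribPointer(posLoc, 3, gl.FLOAT, false, 5 * fs, 0);
    gl.vertexAttribPointer(texLoc, 2, gl.FLOAT, false, 5 * fs, 3 * fs);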

You're right, the machine types are currently specified for the WebGL arrays.

For completely portable behavior, we could consider changing the WebGL spec to say, for example, that WebGLFloatArray contains floating-point values compatible with the GLfloat typedef on the host platform.

I agree that realistically no OpenGL implementation is going to typedef GLfloat to double. However, if one did, a C program would be more likely to work after recompilation than a WebGL program, because struct alignment and the sizeof operator would "just work". If we keep the sizeInBytes function and encourage programmers to use it, WebGL code can be just as robust.
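
As a sketch of that analogy (vertexCount and the five-float layout are invented for the example), sizeInBytes plays the role sizeof plays in C: sizes are derived from the context instead of hard-coded.

    // Derive all sizes from the context, the way C code writes
    // sizeof(GLfloat) instead of the literal 4.
    var floatSize = gl.sizeInBytes(gl.FLOAT);     // 4 today, but not assumed
    var vertexStride = 5 * floatSize;             // 3 position + 2 texcoord floats
    var bufferBytes = vertexCount * vertexStride;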

-Ken