
Re: [Public WebGL] Size of GL_FLOAT



On Tue, Jan 12, 2010 at 1:25 PM, Chris Marrin <cmarrin@apple.com> wrote:

On Jan 12, 2010, at 12:02 PM, Kenneth Russell wrote:

> On Tue, Jan 12, 2010 at 12:30 AM, Carl van Heezik <carl@microcan.nl> wrote:
> I see a lot of discussion about the size of GL_FLOAT. This is my opinion.
>
> There is only one person who needs to know what the size of a variable is, and that is the programmer who writes the program.
> He needs to know whether the variable is big enough for his application. A fixed size on every platform, the same number of bytes, and
> preferably the same byte order is the best solution. If a hardware vendor decides to put a float into a double, that is his choice,
> but the interface for the programmer should be the same on every platform. There should be no gaps in buffers!!! WebGL is
> based on OpenGL ES 2.0, which is targeted at portable devices where every byte counts. So please, no gaps!! Wasting half the
> memory because the hardware vendor uses a double instead of a float is madness! Please keep things simple.
>
> There is no intent to leave gaps in buffers. The only question is whether to allow the possibility of OpenGL implementations that map GLfloat differently. On platforms that map it to a single-precision floating-point value, there will be no waste of space; adjacent values in a WebGLFloatArray will be tightly packed in memory.
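To make the packing claim above concrete, here is a minimal sketch in JavaScript. It assumes the draft-spec WebGLFloatArray constructor that takes a sequence of numbers; the values are illustrative only.

    // Three vertices, three components each.
    var positions = new WebGLFloatArray([0, 0, 0,
                                         1, 0, 0,
                                         0, 1, 0]);
    // With tight packing and 32-bit floats, the array occupies exactly
    // positions.length * 4 = 9 * 4 = 36 bytes, with no padding between
    // adjacent elements.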

This will be a bit of a rant, so please bear with me.

I think the real question is whether there is, or has ever been, an implementation of OpenGL that did not have 32-bit floats. If so, why would the GL_DOUBLE type exist?

I looked at the OpenGL 3.2 spec, and it is really schizophrenic about floats. It has 10-bit floats, 11-bit floats, a 16-bit float (called HALF_FLOAT, implying that a full float is 32 bits), and then floats and doubles. It also has 64-bit integers. So it is happy to specify the exact number of bits for most things, but for floats it says they must be "at least 32 bits". But then it talks about floating-point textures being either 32 or 16 bits.

We already specify the size of each type in the WebGLArray. That constrains what the vertex arrays can contain, which in turn constrains the underlying OpenGL (or other) implementation. If a WebGLFloatArray contains 32-bit floats in every implementation, and that array is sent in to define a VBO as a buffer of FLOAT type, then WebGL constrains the type FLOAT to be 32 bits.
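A minimal sketch of that path in JavaScript, under the assumption that each FLOAT element is exactly 4 bytes; positions is a WebGLFloatArray of x, y, z triples as in the earlier sketch, and positionLoc is an illustrative attribute location.

    var vbo = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
    gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);
    // 3 components per vertex at 4 bytes each gives a 12-byte stride.
    // That arithmetic is only portable if FLOAT is 32 bits everywhere.
    gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, 3 * 4, 0);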

This is one way of looking at it: that the WebGL spec implies constraints on the OpenGL implementation underneath, for example that it supports 32-bit floats as input data. Another way of looking at it is that WebGL conforms to the typedefs of the OpenGL implementation on the platform.

So wouldn't it be best to remove sizeInBytes() and replace it with constants for each supported WebGLArray type? This could be done with a constant on each WebGLArray subtype (WebGLFloatArray.SIZE, WebGLUnsignedByteArray.SIZE, etc.).

Realistically, I think every OpenGL implementation out there will support the primitive data types currently in the WebGL spec, so it's OK with me if we make this change. I would suggest a name like WebGLFloatArray.ELEMENT_SIZE to make the meaning clearer.
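To make the two spellings concrete, a sketch only: it assumes sizeInBytes() lives on the rendering context and takes a type enum, which may not match the current draft exactly, and reuses the illustrative positionLoc from above.

    // Current draft API (assumed form of sizeInBytes()):
    var bytesPerFloat = gl.sizeInBytes(gl.FLOAT);

    // Proposed replacement: a constant on the array type, e.g.
    // WebGLFloatArray.SIZE above, or WebGLFloatArray.ELEMENT_SIZE here.
    bytesPerFloat = WebGLFloatArray.ELEMENT_SIZE;

    // Either way, the value feeds the same stride arithmetic:
    gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, 3 * bytesPerFloat, 0);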

-Ken