
Re: [Public WebGL] Size of GL_FLOAT



Patrick, and others.

You got it right, this is what I mean, and it's not news but everyday practice. When I choose to use 4-byte floats because
they are the right precision, I do not want to end up with gaps because an implementor of WebGL decides to implement
only 8-byte floats. What the hardware vendor does in the GPU is of less concern, but the layout of the data on the
CPU side is important.

I saw a lot of reactions to my mail. Are you all working at midnight? There are occasions when you like to use straight,
separate arrays for points, normals, tangents, etc., and there are occasions where you like to use them grouped together
in a struct. You would like to pass these arrays of structs directly as input to a shader, and you also like to have a clean
way to specify this for the GPU.

So this is what I mean (in C syntax); nothing new (see the famous red book). Or is this stuff old school and are there
better ways?

A)

typedef struct
{
  GLfloat x;
  GLfloat y;
  GLfloat z;
} Vector3;

Vector3 points[];
Vector3 normals[];
Vector3 tangents[];

versus
B)

typedef struct
{
  Vector3 point;
  Vector3 normal;
  Vector3 tangent;
}  Vertex;

Vertex vertices[];

In C you would tell the compiler to pack the structs without gaps and compute the stride to pass them to OpenGL. How
would you specify a strongly typed array of structs in JavaScript and pass it efficiently to the GPU? Also, you would like to
access the members in JavaScript by name without a lot of overhead.

e.g.

A)

mesh.points[i] = point;
mesh.normals[i].x = x;

B)

mesh.vertices[i].point = point;
mesh.vertices[i].normal.x = x;
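
For layout B, one way this might look against the WebGL draft is sketched below. This is just an untested sketch: the gl
context, vertexCount, and the attribute locations posLoc / normalLoc / tangentLoc are assumed to already exist, and I use
Float32Array for the typed array (early drafts call it WebGLFloatArray). All strides and offsets are in bytes.

// Interleaved layout B: 9 floats per vertex (point, normal, tangent),
// tightly packed, 4 bytes per float.
var FLOAT_BYTES  = Float32Array.BYTES_PER_ELEMENT;   // 4
var VERTEX_BYTES = 9 * FLOAT_BYTES;                  // stride: 36 bytes

var data = new Float32Array(vertexCount * 9);
// ... fill data: vertex i occupies data[i*9 .. i*9 + 8] ...

var buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW);

gl.vertexAttribPointer(posLoc,     3, gl.FLOAT, false, VERTEX_BYTES, 0 * FLOAT_BYTES);
gl.vertexAttribPointer(normalLoc,  3, gl.FLOAT, false, VERTEX_BYTES, 3 * FLOAT_BYTES);
gl.vertexAttribPointer(tangentLoc, 3, gl.FLOAT, false, VERTEX_BYTES, 6 * FLOAT_BYTES);
gl.enableVertexAttribArray(posLoc);
gl.enableVertexAttribArray(normalLoc);
gl.enableVertexAttribArray(tangentLoc);

Layout A would simply be three separate tightly packed arrays (or three regions of one buffer) with a stride of 0.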
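
And for the by-name access, a purely hypothetical helper (not part of any spec) could wrap the flat array with getters and
setters. It allocates a small wrapper object per call, so it only approximates the "without a lot of overhead" wish:

function vertexView(data, base)
{
  // 'data' is the interleaved Float32Array, 'base' the index of this vertex's
  // first float (layout B: point = 0..2, normal = 3..5, tangent = 6..8).
  function vec3(off) {
    return {
      get x() { return data[base + off];     }, set x(v) { data[base + off]     = v; },
      get y() { return data[base + off + 1]; }, set y(v) { data[base + off + 1] = v; },
      get z() { return data[base + off + 2]; }, set z(v) { data[base + off + 2] = v; }
    };
  }
  return { point: vec3(0), normal: vec3(3), tangent: vec3(6) };
}

// assuming mesh.data is the interleaved Float32Array from above
mesh.vertex = function (i) { return vertexView(this.data, i * 9); };
mesh.vertex(i).normal.x = x;   // close to the B) syntax, at the cost of a wrapper per call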


Best regards,

Carl



On 12 Jan 2010, at 22:03, Patrick Baggett wrote:

I think what he's trying to say is that if you had 8-byte floats but your code assumed they were 4-byte floats, you'd end up with gaps in your data stream and wrong offset/stride calculations, none of which is news.

On Tue, Jan 12, 2010 at 2:02 PM, Kenneth Russell <kbr@google.com> wrote:
On Tue, Jan 12, 2010 at 12:30 AM, Carl van Heezik <carl@microcan.nl> wrote:
I see a lot of discussion about the size of GL_FLOAT. This is my opinion.

There is only one person who needs to know what the size of a variable is, and that is the programmer who writes the program.
He needs to know whether the variable is big enough for his application. A fixed size on every platform, the same number of bytes,
and preferably the same byte order is the best solution. If a hardware vendor decides to put a float into a double, that is his choice,
but the interface for the programmer should be the same on every platform. There should be no gaps in buffers!!! WebGL is
based on OpenGL ES 2.0, which is targeted at portable devices where every byte counts. So please no gaps!! Wasting half the
memory because the hardware vendor uses a double instead of a float is madness! Please keep things simple.

There is no intent to leave gaps in buffers. The only question is whether to allow the possibility of OpenGL implementations which map GLfloat differently. On platforms that map it to a single-precision floating-point value there will be no waste of space; adjacent values in a WebGLFloatArray will be tightly packed in memory.

-Ken