
Re: [Public WebGL] VBO with 16 or 8 bit data incredibly slow

I recall Apple's documentation for the iPhone/iPad (which use PowerVR GPUs) recommending the following:

1) Interleave your attribute data.
2) Ensure that each attribute begins on a 4-byte boundary.

For example:

Interleaved position / ST texture coordinates:


GL_FLOAT: ST begins at offset 12, and 12 % 4 == 0, so it is aligned.
GL_SHORT: ST begins at offset 6. That isn't a multiple of 4 bytes, so there is a performance penalty.
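To make the alignment rule concrete, here is a small sketch (the helper name and attribute sizes are my own, not from the thread) that computes 4-byte-aligned offsets for an interleaved layout. With GL_SHORT positions (3 * 2 = 6 bytes), rounding up inserts one 2-byte pad so ST lands at offset 8 instead of 6:

```javascript
// Hypothetical helper: compute per-attribute offsets, rounding each one
// up to the next 4-byte boundary, and return the resulting stride.
function alignedLayout(attribs) {
  // attribs: array of { name, components, bytesPerComponent }
  let offset = 0;
  const layout = [];
  for (const a of attribs) {
    layout.push({ name: a.name, offset });
    offset += a.components * a.bytesPerComponent;
    offset = (offset + 3) & ~3; // round up to a multiple of 4
  }
  return { layout, stride: offset };
}

const { layout, stride } = alignedLayout([
  { name: "position", components: 3, bytesPerComponent: 2 }, // GL_SHORT
  { name: "st",       components: 2, bytesPerComponent: 2 }, // GL_SHORT
]);
// position at offset 0, st at offset 8, stride 12 -- every attribute
// now starts on a 4-byte boundary.
```

These offsets and the stride would then be what you pass as the last two arguments of gl.vertexAttribPointer() for each attribute.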

Your model has what, 1.5M triangles? If there is a per-vertex penalty from something like this, the total cost can grow huge. I've had good results using GL_SHORT on the iPhone, but I also reordered my data so that it followed all of Apple's rules.


On Mon, Jul 9, 2012 at 11:11 AM, Yvonne Jung <[email protected]> wrote:

Hello all,

to improve bandwidth and memory usage, we tried passing 16- and 8-bit encoded attribute data via the gl.vertexAttribPointer() call.
According to the WebGL specification you can also use e.g. SHORT and BYTE as data types, and on NVIDIA hardware this performs very well.
However, on a machine with an ATI card -- or on the iPad 3 -- it is incredibly _slow_: instead of the real-time frame rates we get for the same model with FLOAT buffers, we don't even reach 1 fps with Int16 or Int8 buffers.

Does the ATI or iPad GPU require 32-bit alignment, or might something else be the problem?
If you want to have a look at the models: here is the link to the model using float buffers, which performs fine everywhere:
And here is the link to the same model using short and char buffers, which only performs well on NVIDIA GPUs:
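If misalignment is indeed the cause, one workaround is to repack the buffer with padding before uploading it. A sketch (the function name and the 3-position + 2-texcoord short layout are my assumptions, not from this thread) that inserts one pad short per vertex so every attribute starts on a 4-byte boundary:

```javascript
// Hypothetical repacking step: a tightly packed SHORT vertex is
// 3 position + 2 texcoord shorts = 10 bytes. Inserting one pad short
// after the position grows the stride to 12 bytes, but aligns the
// texcoords to a 4-byte boundary.
function padShortVertices(packed, vertexCount) {
  const src = new Int16Array(packed);
  const dst = new Int16Array(vertexCount * 6); // 12 bytes per vertex
  for (let i = 0; i < vertexCount; i++) {
    dst[i * 6 + 0] = src[i * 5 + 0]; // x
    dst[i * 6 + 1] = src[i * 5 + 1]; // y
    dst[i * 6 + 2] = src[i * 5 + 2]; // z
    // dst[i * 6 + 3] stays 0: the pad short
    dst[i * 6 + 4] = src[i * 5 + 3]; // s
    dst[i * 6 + 5] = src[i * 5 + 4]; // t
  }
  return dst;
}

// Example: two tightly packed vertices.
const packed = new Int16Array([1, 2, 3, 40, 50,  6, 7, 8, 90, 100]).buffer;
const padded = padShortVertices(packed, 2);
```

You would then call gl.vertexAttribPointer() with stride 12, offset 0 for the position, and offset 8 for the texcoords. Whether this recovers the FLOAT-buffer frame rate on ATI or the iPad 3 would need to be measured.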


You are currently subscribed to [email protected].
To unsubscribe, send an email to [email protected] with
the following command in the body of your email:
unsubscribe public_webgl