
Re: [Public WebGL] TypedArrays request.responseArrayBuffer, servers and endianness



On 09/29/2010 12:16 PM, Joshua Bell wrote:


On Wed, Sep 29, 2010 at 12:00 PM, alan@mechnicality.com <alan@mechnicality.com> wrote:
On 09/29/2010 11:12 AM, Oliver Hunt wrote:
To avoid byte order issues purely on the client side it is important not to try to use any tricks when writing down data, e.g. if you want to fill a buffer with individual bytes you should use a byte array, if you want to store int32s you should use an int32 view, etc. Otherwise you open up the possibility of byte order problems on the client side.
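
A minimal sketch of the pitfall being described (the variable names here are only illustrative): writing bytes through one view and reading them back through a wider view over the same buffer exposes the host byte order, while writing and reading through the same wide view does not.

var buffer = new ArrayBuffer(4);
var bytes = new Uint8Array(buffer);
bytes[0] = 0x12; bytes[1] = 0x34; bytes[2] = 0x56; bytes[3] = 0x78;

var ints = new Uint32Array(buffer);
// ints[0] is 0x78563412 on a little-endian host but 0x12345678 on a big-endian one.

// Writing and reading through the same Uint32Array view is consistent everywhere:
ints[0] = 0x12345678;
// ints[0] reads back as 0x12345678 on any host.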
So if I fill a buffer with bytes that represent 'int32' data on the server, how do I know which byte order the client expects the data to be in? That was the point of my original question. I realize that I can *assume* that the byte order is little-endian for most common web browser implementations on Intel platforms. Obviously, I can also read headers from the request to determine the browser/OS, but that's hit and miss.

The usage that the spec implies to me is as follows:

// Grab the data
var uint8_array = ImaginarySynchronousXHRBinaryDataFetch(url);
var network_view = new DataView(uint8_array.buffer);

// Since you produced the resource, you know the endian-ness:
var network_endian = false; // big-endian

// Now massage the data
var size = uint8_array.byteLength / 4; // assuming the payload is a whole number of uint32s
var native_array = new Uint32Array(size);
for (var i = 0; i < size; ++i) {
    // getUint32 takes a byte offset, so step by 4 bytes per element
    native_array[i] = network_view.getUint32(i * 4, network_endian);
}

While the above looks a bit silly in this case, I imagine the more common case will be one where the data format is more complex, so the massaging will not be a simple loop copying 32 bits at a time, e.g. reading raster or mesh binary file formats.
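
For instance, a sketch of that kind of massaging for a made-up mesh layout (a big-endian uint32 vertex count followed by three big-endian float32s per vertex; the format is invented purely for illustration) might look like:

var view = new DataView(uint8_array.buffer);
var littleEndian = false; // the resource is big-endian in this made-up example

var offset = 0;
var vertexCount = view.getUint32(offset, littleEndian);
offset += 4;

var positions = new Float32Array(vertexCount * 3);
for (var i = 0; i < vertexCount * 3; ++i) {
    positions[i] = view.getFloat32(offset, littleEndian);
    offset += 4;
}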

However, what I see your questions implying should also work:

var BIG_ENDIAN = new Uint16Array(new Uint8Array([0x12, 0x34]).buffer)[0] === 0x1234;
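// (The test above writes the bytes 0x12, 0x34 and reads them back as a Uint16:
// a big-endian host sees 0x1234, a little-endian host sees 0x3412.)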

var uint8_array;
if (BIG_ENDIAN) {
    uint8_array = ImaginarySynchronousXHRBinaryDataFetch(url + "?big_endian");
}
else {
    uint8_array = ImaginarySynchronousXHRBinaryDataFetch(url + "?little_endian");
}
var native_array = new Uint32Array(uint8_array.buffer);

This just pushes the processing from the client to the server, which is an application-specific trade-off to make.
Yes, and that's what I'm planning to do, as I generally favor doing more on the server and less on the client.

As for my questions relating to the spec, Oliver (probably rightly) pointed out that it may not be a 'webgl' issue.

Thanks

Alan