
Re: [Public WebGL] WebGL2 and no mapBuffer/mapBufferRange



> On Mar 5, 2015, at 3:15 AM, Zhenyao Mo <zmo@chromium.org> wrote:
> 
> You also need to consider that some browsers (for example, Chrome) run the
> GPU in a separate process.  Copying data out and copying data back is the
> only way to implement this.

I do not consider this a valid reason to recast MapBufferRange as GetBufferSubData. Doing so prevents any browser from ever modifying its architecture to support buffer mapping, and it removes any incentive to do so. Since there is no GetBufferSubData in OpenGL ES 3, the GPU designers clearly consider mapping the better way to access data in buffers.
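As a point of comparison, here is a hedged sketch of the readback path WebGL 2 does expose. The helper name `readBackBuffer` and the use of a pre-existing WebGL2 context `gl` are my own illustration, not part of the spec text under discussion:

```javascript
// Hypothetical helper: read vertex data back from the GPU with WebGL2's
// getBufferSubData. `gl` is assumed to be an existing WebGL2RenderingContext.
function readBackBuffer(gl, buffer, byteLength) {
  const view = new Float32Array(byteLength / 4);
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  // Copies GPU data into `view`. Unlike MapBufferRange, the caller never
  // sees driver memory, so no read-only enforcement is needed -- at the
  // cost of always paying for the copy.
  gl.getBufferSubData(gl.ARRAY_BUFFER, 0, view);
  gl.bindBuffer(gl.ARRAY_BUFFER, null);
  return view;
}
```

The copy is exactly the overhead that a true mapping interface could avoid on architectures where the GPU shares an address space with the page.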

> There is some text in the section 3.7 of WebGL 2 spec.
> 
>  // MapBufferRange, in particular its read-only and write-only modes,
>  // can not be exposed safely to JavaScript. GetBufferSubData
>  // replaces it for the purpose of fetching data back from the GPU.
> 
> Basically we can't just return the pointer from glMapBufferRange to
> the javascript in read-only or write-only modes, because there is no
> mechanism to enforce read-only or write-only.
> 
> Therefore, we have to save the pointer, copy out the buffer range to a
> client mem, and return that to user.  When in UnmapBuffer, for write,
> we have to copy back the data to the saved pointer.
> 
> It does not give you anything more than bufferSubData / getBufferSubData.

It doesn’t seem very hard to specify read-only and write-only array buffers. As I am unfamiliar with browsers’ internal architectures, I have no idea how easy such things would be to implement.

If implementation is a problem, how about requiring that MAP_READ_BIT | MAP_WRITE_BIT must always be specified?
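For concreteness, the copy-out/copy-back emulation described in the quoted text above can be sketched in plain JavaScript. The class and method names here are hypothetical illustration, not any browser's actual internals; a typed array stands in for GPU-side storage across the process boundary:

```javascript
// Minimal sketch (assumed names, not a real browser API) of emulating
// MapBufferRange over a process boundary: copy the range out on map,
// copy any client writes back on unmap.
class EmulatedBuffer {
  constructor(byteLength) {
    this.storage = new Uint8Array(byteLength); // stands in for GPU-side memory
    this.mapped = null;
  }

  // Returns a client-side *copy* of bytes [offset, offset + length),
  // as a read/write mapping (MAP_READ_BIT | MAP_WRITE_BIT semantics).
  mapRange(offset, length) {
    this.mapped = { offset, view: this.storage.slice(offset, offset + length) };
    return this.mapped.view;
  }

  // Copies the client copy back over the "GPU" storage, mirroring the
  // copy-back that happens at UnmapBuffer time in the emulated scheme.
  unmap() {
    this.storage.set(this.mapped.view, this.mapped.offset);
    this.mapped = null;
  }
}
```

Under this emulation a mandatory MAP_READ_BIT | MAP_WRITE_BIT mapping costs two copies either way, which is why it gives no more than bufferSubData / getBufferSubData on such architectures.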

Regards

    -Mark

