
Re: [Public WebGL] WebGL2 and no mapBuffer/mapBufferRange



I've thought about this problem a bit, and I think the blocking nature of the JS-thread <-> GPU-process IPC is very similar to the issue of the client <-> GPU interaction being blocking. In both instances a call (buffer[Sub]Data) blocks because the operation needs to complete in order to ensure that all bytes have been transferred before the caller gets a chance to deallocate or change the data at that memory address.
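
To illustrate from the JS side (assuming an existing WebGL context 'gl' and an already created buffer 'vbo'), the call cannot return before the bytes have been captured, because the caller is free to touch the source array right away:

    var data = new Float32Array([0, 1, 2, 3]);
    gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
    // bufferSubData has to have consumed all 16 bytes by the time it returns ...
    gl.bufferSubData(gl.ARRAY_BUFFER, 0, data);
    // ... because this write is legal immediately afterwards and must not
    // affect what ends up in the buffer.
    data[0] = 42;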

I think it's clear that non-blocking buffer transfers are desirable functionality, no matter whether they happen from the JS-thread or from the GPU-process. For this very reason the GPU-process itself has the ability to use address-space mapping and to defer the work of shuffling the bytes to where they need to go to the virtual memory manager.

This option is not open to the JS-thread <-> GPU-process interaction in this case. Although it would be possible to establish a shared memory region between a tab's process and the GPU-process and so achieve a non-blocking transfer, it is not possible for that region to also be at the address that glMapBuffer[Range] returns.

Nevertheless it is possible to achieve similar functionality in the JS-thread <-> GPU-process interaction, because the mechanism does not need to be tied to the virtual memory manager. Essentially the mechanism just needs to ensure that the bytes are transferred eventually, at its convenience, and before gl.unmapBuffer is called.
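
To sketch the shape of that from the application's point of view (the helper and its map/unmap names are made up for illustration, not a proposed API): the "mapped" range is ordinary client-side memory, and the implementation is free to move the bytes whenever it likes, as long as it has done so by unmap time. A trivial stand-in today would simply flush with bufferSubData at that point:

    function createMappableBuffer(gl, target, byteLength) {
        var buffer = gl.createBuffer();
        gl.bindBuffer(target, buffer);
        gl.bufferData(target, byteLength, gl.DYNAMIC_DRAW);
        var staging = new ArrayBuffer(byteLength);
        return {
            buffer: buffer,
            // "map": hand out a view on client-side staging memory; nothing is
            // transferred here, so the call never blocks.
            map: function(offset, length) {
                return new Uint8Array(staging, offset, length);
            },
            // "unmap": the deadline by which the bytes must have reached the
            // GPU-process; this stand-in just does the copy right here.
            unmap: function(offset, length) {
                gl.bindBuffer(target, buffer);
                gl.bufferSubData(target, offset,
                                 new Uint8Array(staging, offset, length));
            }
        };
    }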

The mechanism by which a non-blocking transfer can happen between the JS-thread and the GPU-process could be either:
Both cases would allow the JS-thread to perform compute-intensive tasks or emit further rendering commands while the transfer of the data happens in the background.
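
For example (using the stand-in sketched above; newParticleBytes, simulatePhysics and drawStaticScenery are hypothetical application code):

    var mb = createMappableBuffer(gl, gl.ARRAY_BUFFER, 4096);
    var view = mb.map(0, 4096);
    view.set(newParticleBytes);   // write into the "mapped" range
    simulatePhysics();            // compute-intensive work can proceed meanwhile
    drawStaticScenery(gl);        // further rendering commands can be emitted
    mb.unmap(0, 4096);            // the bytes only have to be in place by now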

And so the benefit of non-blocking buffer transfers could be brought to JS, even though it does involve copying data between the JS-thread and the GPU-process.

On Wed, Mar 4, 2015 at 8:37 PM, Floh <floooh@gmail.com> wrote:
I've been dabbling with multiple GL contexts on different threads
recently in desktop GL code, and multiple people who I see as quite
the GL experts and who've been through this before gave me the good
advice that 'it's not worth it', and that the whole topic is 'a world
of pain', mainly because the whole area is not well specified, poorly
documented, drivers behave differently, and even when they work, they
still suffer from thread synchronization issues. I guess that WebGL
could still mimic different contexts in worker threads and allow
calling WebGL functions from workers, as long as the calls can be
queued efficiently to the 'main GL context' under the hood. IMHO the
most interesting topic for parallelization, and the one which could
also be achieved in WebGL and WebGL2, is resource setup, not trying
to parallelize all types of WebGL calls.

-Floh
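
To make the resource setup point concrete, here is a minimal sketch of queueing that kind of work through a worker, with the upload itself staying on the thread that owns the context (the worker file name, the message shape and decodeMesh are illustrative):

    // main thread: owns the WebGL context 'gl', only does the final upload
    var worker = new Worker('mesh-decoder.js');
    worker.onmessage = function(e) {
        var vertices = new Float32Array(e.data.vertices); // buffer was transferred, not copied
        var vbo = gl.createBuffer();
        gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
        gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);
    };
    worker.postMessage({ url: 'mesh.bin' });

    // mesh-decoder.js: the expensive decoding runs off the main thread, and the
    // resulting ArrayBuffer is handed over via the transfer list, so no copy is made:
    //
    //     onmessage = function(e) {
    //         var xhr = new XMLHttpRequest();
    //         xhr.open('GET', e.data.url);
    //         xhr.responseType = 'arraybuffer';
    //         xhr.onload = function() {
    //             var vertices = decodeMesh(xhr.response); // decodeMesh is hypothetical
    //             postMessage({ vertices: vertices.buffer }, [vertices.buffer]);
    //         };
    //         xhr.send();
    //     };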

On Wed, Mar 4, 2015 at 7:22 PM, Florian Bösch <pyalot@gmail.com> wrote:
> Hm, I see. Though the blocking behavior might differ between when the
> browser transfers data between the JS-thread and the GPU-process, and when
> the GPU-process exchanges data with the GPU. Also it's not outside the
> realm of possibility that at some point WebGL contexts might stand in for
> real contexts, and WebGL code might run in a dedicated WebWorker process
> that owns that context.
>
> On Wed, Mar 4, 2015 at 7:15 PM, Zhenyao Mo <zmo@chromium.org> wrote:
>>
>> On Wed, Mar 4, 2015 at 10:10 AM, Florian Bösch <pyalot@gmail.com> wrote:
>> > On Wed, Mar 4, 2015 at 7:07 PM, Zhenyao Mo <zmo@chromium.org> wrote:
>> >>
>> >> Basically we can't just return the pointer from glMapBufferRange to
>> >> the JavaScript in read-only or write-only modes, because there is no
>> >> mechanism to enforce read-only or write-only.
>> >
>> > So what you're saying is that there isn't a ReadOnlyArrayBuffer or a
>> > WriteOnlyArrayBuffer, correct? If I'm not mistaken, returning
>> > ArrayBuffer-conformant objects which impose additional restrictions
>> > shouldn't be terribly hard.
>>
>>
>> You also need to consider that some browsers (for example, Chrome) run
>> the GPU in a separate process.  Copying data out and copying data back
>> is the only way to implement this.
>
>