
Re: [Public WebGL] Re: WEBGL_get_buffer_sub_data_async



Right, but this is a little weird. It basically means you're allocating a memory region for the async readback on every invocation. That's usually not a problem, but if you hit a pipeline stall you could end up with hundreds of allocated buffers waiting for the GPU to catch up.

The way that's usually done by graphics programmers is to pre-allocate a limited number of such buffers (say 3), and when you're 3 buffers deep and the first is still unresolved, you don't emit more readbacks. You could emulate that behavior with the async readback extension, but you have to be aware that you need to, and the memory cost is hidden from you.
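To illustrate, here's a minimal sketch of how that throttling could look on top of a promise-based readback API like the proposed getBufferSubDataAsync. The ReadbackQueue class and its names are purely illustrative (not part of any spec), and the GL call itself is passed in as a function so the bounding logic stands on its own:

```javascript
// Illustrative sketch: bound the number of in-flight async readbacks,
// emulating the classic pre-allocated N-buffer (e.g. triple-buffered)
// readback pattern. Not part of the extension; names are hypothetical.
class ReadbackQueue {
  constructor(maxInFlight = 3) {
    this.maxInFlight = maxInFlight; // number of pre-allocated "slots"
    this.inFlight = 0;              // readbacks issued but not yet resolved
  }

  // Attempt to issue a readback. `startReadback` is expected to return a
  // Promise (e.g. () => gl.getBufferSubDataAsync(...)). When all slots are
  // busy the readback is skipped and false is returned, so memory use
  // stays bounded instead of growing during a pipeline stall.
  tryReadback(startReadback, onData) {
    if (this.inFlight >= this.maxInFlight) return false;
    this.inFlight++;
    startReadback().then(data => {
      this.inFlight--; // slot freed once the copy has resolved
      onData(data);
    });
    return true;
  }
}
```

Per frame you'd call tryReadback; when the GPU falls behind, new readbacks are simply dropped rather than piling up hidden shared-memory allocations.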

On Sat, Sep 30, 2017 at 7:04 PM, Kenneth Russell <kbr@google.com> wrote:
On Sat, Sep 30, 2017 at 5:12 AM, Florian Bösch <pyalot@gmail.com> wrote:
On Sat, Sep 30, 2017 at 1:17 AM, Kenneth Russell <kbr@google.com> wrote:
Different regions of shared memory between Chrome's renderer and GPU processes are used if there are multiple pipelined calls to getBufferSubDataAsync.
Effectively a readback buffer cache. How do you know when you can discard those copies you keep around?

As soon as the data is copied out into the client's ArrayBufferView, just before resolving the Promise.