
Re: [Public WebGL] CORS and resource provider awareness

Btw. that's the reason people have come up with mechanisms like mapBuffer,
which place restrictions on what the client may do with the mapped memory
(e.g. don't delete it while it's mapped) so the driver can read the bytes
whenever it likes, letting the client race ahead of the queue without being
held up in a blocking upload. Of course mapBuffer can result in garbled data
(where the client has updated the buffer while the GPU was still reading it),
but this is an intentional trade-off for use cases like vertex streaming,
where calling bufferData all the time would slow the client down, since each
call would act like a finish.
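To make the trade-off concrete, here is a toy, single-threaded simulation in Python. It is not a GL binding and all the names (ToyDriver, blocking_upload, draw_from_mapped) are invented for illustration; it only models the key difference: a bufferData-style upload copies the client's bytes before returning, while a mapBuffer-style command reads the client's memory only when the queued command actually executes.

```python
# Toy simulation: blocking upload (copy now) vs. mapped buffer (read later).
# Invented names; not a real GL API.

class ToyDriver:
    def __init__(self):
        self.queue = []        # pending commands, executed later
        self.gpu_reads = []    # what the "GPU" actually saw

    def blocking_upload(self, client_mem):
        # The driver copies the client's bytes *now*, so later client
        # writes can't affect this upload. The copy is what makes the
        # call safe -- and what makes it block.
        snapshot = bytes(client_mem)
        self.queue.append(lambda: self.gpu_reads.append(snapshot))

    def draw_from_mapped(self, mapped_mem):
        # No copy: the command reads the mapped memory whenever the
        # queue gets around to it. Fast, but racy by design.
        self.queue.append(lambda: self.gpu_reads.append(bytes(mapped_mem)))

    def finish(self):
        # Drain the queue, like a finish call.
        while self.queue:
            self.queue.pop(0)()

driver = ToyDriver()
mem = bytearray(b"frame-1")

driver.blocking_upload(mem)   # GPU is guaranteed to see "frame-1"
driver.draw_from_mapped(mem)  # GPU sees whatever is there at execution time
mem[:] = b"frame-2"           # client races ahead and rewrites the buffer
driver.finish()

print(driver.gpu_reads)       # [b'frame-1', b'frame-2'] -- the mapped
                              # read was "garbled" by the later write
```

The mapped read coming back as `frame-2` is the garbling described above: acceptable for streaming, where the client knowingly takes that risk in exchange for never blocking.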

On Wed, Oct 31, 2012 at 11:00 AM, Florian Bösch <pyalot@gmail.com> wrote:
> On Wed, Oct 31, 2012 at 10:20 AM, Mark Callow <callow.mark@artspark.co.jp>
> wrote:
>> On 2012/10/31 17:27, Florian Bösch wrote:
>> Where is it written that bufferData and texImage2D do synchronization?
> The driver maintains a command queue of things to tell the GPU to do. Each
> command may take an unbounded amount of time. To improve overall
> performance the driver lets the client race ahead of the queue, i.e. calls
> into the driver return before the commands have finished; this is known as
> asynchronous execution. If you call finish, the driver blocks on that call
> until the command queue is empty. Three different kinds of memory are
> usually involved in shuffling around the data that goes with commands:
> 1) client memory, 2) driver memory, 3) GPU memory. Driver memory is
> usually fairly fixed in size and by and large doesn't implement any
> elaborate caching scheme; after all, you wouldn't want the driver to
> crash with a memory overflow, that would be bad.
> So what happens when the driver receives a request to upload data from
> client memory to GPU memory? The driver reads the client memory and puts
> the bytes onto the GPU. It needs to block while it does that, because it
> must be sure it has read all the client's bytes before the client continues
> and does things like delete or modify that memory area, which would result
> in invalid or garbled data. So how can the driver, with a queue full of
> commands that are not known to execute in finite time, know when the GPU
> has received all the bytes the client asked to be uploaded? The answer is
> quite simple. Since the upload was the last command put into the queue, the
> driver knows the GPU has received all the client's bytes once the command
> queue is empty. Incidentally, that's the same as a finish.
> So there.
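The sequence described above can be sketched as a small Python simulation. Everything here (ToyCommandQueue, submit, the 10 ms delay) is invented for illustration, not a real driver: submitted commands return immediately (asynchronous execution), finish() blocks until the queue drains, and because the upload was the last command submitted, an empty queue guarantees its bytes have reached "GPU memory".

```python
# Toy simulation of an asynchronous driver command queue.
# Invented names; not a real GL implementation.

import queue
import threading
import time

class ToyCommandQueue:
    def __init__(self):
        self.q = queue.Queue()
        self.executed = []
        threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            name, fn = self.q.get()
            time.sleep(0.01)        # each command takes an unbounded time
            fn()
            self.executed.append(name)
            self.q.task_done()

    def submit(self, name, fn=lambda: None):
        self.q.put((name, fn))      # returns at once: client races ahead

    def finish(self):
        self.q.join()               # block until the queue is empty

gpu_mem = {}
cq = ToyCommandQueue()
cq.submit("draw-1")
cq.submit("draw-2")
cq.submit("upload", lambda: gpu_mem.update(tex=b"pixels"))

# All three submits returned immediately; nothing may have executed yet.
cq.finish()
# After finish the queue is empty, so the upload (the last command
# submitted) is guaranteed to have reached "GPU memory".
assert gpu_mem["tex"] == b"pixels"
print(cq.executed)  # ['draw-1', 'draw-2', 'upload']
```

The single worker thread preserves submission order, which is the property the argument rests on: the upload cannot still be pending once the queue behind it is empty.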

You are currently subscribed to public_webgl@khronos.org.
To unsubscribe, send an email to majordomo@khronos.org with
the following command in the body of your email:
unsubscribe public_webgl