
Re: [Public WebGL] Issues with sharing resources across contexts



On Wed, Jul 18, 2012 at 1:32 PM, Ben Vanik <benvanik@google.com> wrote:
> Haha - no way you can rely on either JS GC or driver reuse - when you're
> trying to get a stable framerate and a robust memory footprint, that's the
> exact opposite of what you need to do. I've never once found a case where it
> was acceptable to do so, and doubt I ever will.

It doesn't have to be implicit, but references need to be tracked (and
both the JS GC and the GL driver are already tracking them). You can
easily imagine a way to reuse resources if no other threads hold
references to them.
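For example, here is a minimal sketch of the kind of explicit tracking I
mean (acquireRef/releaseRef and the free list are made-up names, not a
proposal for actual API; a JS Map keyed on the WebGL object is used just
for brevity):

  // Sketch: recycle a resource only when no context/worker still holds it.
  var refCounts = new Map();   // WebGLBuffer -> number of holders
  var freeList  = [];          // buffers with zero holders, ready for reuse

  function acquireRef(buf) {
    refCounts.set(buf, (refCounts.get(buf) || 0) + 1);
  }

  function releaseRef(buf) {
    var n = refCounts.get(buf) - 1;
    refCounts.set(buf, n);
    if (n === 0) freeList.push(buf);   // nothing holds it: safe to recycle
  }

  function allocBuffer(gl) {
    return freeList.length ? freeList.pop() : gl.createBuffer();
  }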

> My proposal is that multiple reader workers is ok (no sync or flushes
> needed), but there can be only one worker that can write (and can read with
> no sync/flush needed). The only time a sync/flush is required is when a
> worker goes to acquire an object for reading that had been unacquired for
> writing by another worker - an event that is very easy to identify when
> reading code and can be controlled by the user in a predictable way. For
> example, if you're double buffering between two workers that are alternating
> acquisition you'd never run into a case where you'd fail (attempt to acquire
> for reading something another worker has for writing), you'd never create
> garbage, and you could ping-pong buffers with almost no cost. Where this
> enables more flexibility than just transferring ownership is that you could
> ping-pong between multiple threads in the same frame -- frame N: update
> buffer A on thread X, draw buffer B on thread Y and Z, frame N+1: update
> buffer B, draw buffer A on thread Y and Z, etc.

How does this fail when Y tries to read B on frame N+1? Undefined
behavior? Exception?
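For concreteness, here is how I read the ping-pong pattern; acquireWrite,
acquireRead and release are made-up names, nothing like them exists in
WebGL today, and the update/draw steps are left as comments:

  // Hypothetical single-writer / multi-reader API, per the description above.
  // bufA and bufB are buffers shared between this worker and the drawing
  // workers; the roles swap every frame.
  function frame(gl, bufA, bufB, n) {
    var writeBuf = (n % 2 === 0) ? bufA : bufB;
    var readBuf  = (n % 2 === 0) ? bufB : bufA;

    gl.acquireWrite(writeBuf);    // only one worker may hold this at a time
    // ... update writeBuf ...
    gl.release(writeBuf);         // at the latest, a flush/sync happens here

    gl.acquireRead(readBuf);      // sync needed only if a writer just released it
    // ... draw with readBuf ...
    gl.release(readBuf);
  }

The failure case I'm asking about is an acquireRead on a buffer some other
worker still holds for writing.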

> On Wed, Jul 18, 2012 at 1:22 PM, David Sheets <kosmo.zb@gmail.com> wrote:
>>
>> On Wed, Jul 18, 2012 at 1:08 PM, Gregg Tavares (社用) <gman@google.com>
>> wrote:
>> >
>> >
>> >
>> > On Wed, Jul 18, 2012 at 12:52 PM, David Sheets <kosmo.zb@gmail.com>
>> > wrote:
>> >>
>> >> On Mon, Jul 16, 2012 at 6:03 PM, Gregg Tavares (社用) <gman@google.com>
>> >> wrote:
>> >> > I agree that solution would work for 2+ contexts in a single webpage
>> >> > since webpages are single threaded. Is that enough for now? Do we
>> >> > care about WebWorkers, where that solution will no longer be enough?
>> >> >
>> >> > Lemme throw out some ideas. They probably suck but maybe they'll jog
>> >> > some others.
>> >> >
>> >> > What if a resource could only be used in 1 context at a time and you
>> >> > had to transfer ownership to use it in another context? That might
>> >> > solve this issue as well. Would that be too restrictive? What if an
>> >> > object could be used in any context but state only set in the context
>> >> > that "owns" it? Not sure that would help. In the case above it would
>> >> > mean any context could call UseProgram but only the context that owns
>> >> > it could call AttachShaders and LinkProgram since those change state.
>> >> > Of course that wouldn't solve every issue because UseProgram requires
>> >> > the results of LinkProgram.
>> >> >
>> >> > I'm just throwing that out there. The idea is probably not workable,
>> >> > but it would be nice to try to design a WebGL API for WebWorkers that
>> >> > didn't have the ambiguity that OpenGL has as far as execution order.
>> >> > Whatever that solution is might also work for 2 contexts in the same
>> >> > page.
>> >>
>> >> What if a means is provided to make a given WebGL resource immutable?
>> >> Once the resource is 'frozen', no further modification would be
>> >> allowed but the resource could be shared between contexts or workers.
>> >>
>> >> What are the use cases for shared mutable WebGL resources?
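
To make the freeze idea concrete (gl1.freeze is a made-up call, gl1 and gl2
are two contexts sharing resources as in Gregg's example further down, and
image is assumed to be an already-loaded HTMLImageElement):

  // Made-up API sketch: fill a texture, freeze it, then share it read-only.
  var tex = gl1.createTexture();
  gl1.bindTexture(gl1.TEXTURE_2D, tex);
  gl1.texImage2D(gl1.TEXTURE_2D, 0, gl1.RGBA, gl1.RGBA, gl1.UNSIGNED_BYTE, image);
  gl1.freeze(tex);                        // hypothetical: no further mutation allowed

  gl2.bindTexture(gl2.TEXTURE_2D, tex);   // reading from any context/worker is fine
  // any later texImage2D/texSubImage2D on tex would generate an error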
>> >
>> >
>> > For 2 contexts in the same page that might fit some use cases.
>> >
>> > But mutable shared resources are, I think, the most common use case for
>> > WebWorkers.
>> >
>> > WebWorker use cases
>> >
>> > *) Download images in a worker, decode and upload to texture
>> > *) Download geometry in a worker,
>> > *) Generate complex geometry in a worker (primitives, metaballs,
>> > booleans, subdivision surfaces)
>> > *) Do procedural textures in a worker (like decoding video)
>> > *) CPU skinning in a worker
>> > *) Render to texture in a worker, display that texture in the main page.
>>
>> I'm trying to understand how any of these use cases benefit from
>> mutation over creation of new resource objects. If you allow a worker
>> to 'own' a resource (and mutate it) and share it read-only with the
>> rendering thread, how do you handle synchronization? If you have to
>> handle synchronization manually, why not simply create a new resource,
>> freeze it, and allow access from any thread with a handle?
>>
>> GC hooks can return GL resources to a separately managed resource pool
>> for quick mutable allocation.
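
A sketch of that pool; onResourceCollected stands in for whatever GC hook
an implementation might expose and is not a real API, and gl is whichever
context owns the pool:

  // Sketch: recycle collected textures into a pool for quick reallocation.
  var texturePool = [];

  function allocTexture(gl) {
    return texturePool.length ? texturePool.pop() : gl.createTexture();
  }

  gl.onResourceCollected = function(resource) {   // hypothetical GC hook
    if (resource instanceof WebGLTexture) texturePool.push(resource);
  };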
>>
>> > Right now it seems like different solutions for shared resources in the
>> > same page vs workers might be best. For same-page contexts, auto flush,
>> > as suggested by Ben Vanik, seems like a valid and relatively easy
>> > solution. It would "just work" for developers. It would be easy to
>> > write tests for as well, I think.
>>
>> If many GL commands are queued, magic flushing penalizes performance
>> at unexpected times.
>>
>> > For workers, transfer of ownership of shared objects seems, off the top
>> > of my head, like it would work. You'd probably have to double buffer
>> > textures you wanted to share often and double buffer buffers you wanted
>> > to use for dynamic geometry, but having to pass ownership would, I
>> > think, get rid of all the ambiguity and issues. This would also make it
>> > "just work" since there'd be no wrong way to use them.
>>
>> See synchronization question above.
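
If transfer of ownership is the model, I'd expect it to look like the
Transferable pattern we already have for ArrayBuffers. Putting a WebGL
object in a postMessage transfer list is not possible today, so this is
only a sketch; worker is a Worker created from worker.js and tex is the
shared texture:

  // Main thread: hand the texture to the worker; this side loses access.
  worker.postMessage({ op: 'update', texture: tex }, [tex]);

  // worker.js: mutate it, then hand it back for drawing.
  self.onmessage = function(e) {
    var tex = e.data.texture;
    // ... upload new pixels into tex ...
    self.postMessage({ op: 'done', texture: tex }, [tex]);
  };

Double buffering then just means keeping two such textures in flight.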
>>
>> David
>>
>> > Both are relatively easy to implement and test, so we can give it a
>> > try and people can play around and see how it feels?
>> >
>> >
>> >
>> >>
>> >>
>> >> David
>> >>
>> >> > On Mon, Jul 16, 2012 at 5:36 PM, Ben Vanik <benvanik@google.com>
>> >> > wrote:
>> >> >>
>> >> >> In previous systems I've worked with, I've used implicit flushes
>> >> >> when swapping contexts and it made life much easier. Every call in
>> >> >> the graphics layer basically had this:
>> >> >> void MyGraphicsAPI::SomeCall() {
>> >> >>   MakeCurrent();
>> >> >>   ...
>> >> >> };
>> >> >> void MyGraphicsAPI::MakeCurrent() {
>> >> >>   if (threadCurrentContext != this) {
>> >> >>     Flush();
>> >> >>   }
>> >> >>   // ... set global threadCurrentContext, etc
>> >> >> };
>> >> >>
>> >> >> Using some macro-fu it was made to be a single variable lookup and
>> >> >> branch that predicted well per call, and in the common case of
>> >> >> single-context apps had little impact on performance. In apps with
>> >> >> multiple contexts it made life much easier when dealing with the
>> >> >> sometimes long and hairy code sequences that touched both (rare, but
>> >> >> often unavoidable). Had the flush not been implicit, it would have
>> >> >> required a significant amount of bookkeeping logic that every piece
>> >> >> of code that touched a context would have to interact with - yuck.
>> >> >> It also meant that code could be moved between apps that used single
>> >> >> contexts and multiple contexts without having to change, or the app
>> >> >> could even decide at runtime with no ill effect. Explicit flushes
>> >> >> would have made that a nightmare.
>> >> >>
>> >> >> The one downside of the implicit flushes is that it's easy to start
>> >> >> flushing all over the place without knowing about it. A simple
>> >> >> counter of flushes/frame was enough to help warn that they were
>> >> >> happening though, as the target was usually <5 and if you messed
>> >> >> things up you would see dozens. It's also often much trickier to
>> >> >> find missing explicit flushes that cause subtle and sometimes very
>> >> >> hard to identify behavioral differences. Just look at the number of
>> >> >> WebGL pages out there today that emit warnings and imagine what it'd
>> >> >> be like with this x_x
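
That counter is cheap to have even in a JS wrapper. A rough sketch; the
wrapper state and the flushes-per-frame target mirror the description
above, and none of this is an existing API:

  // Rough sketch of a flushes-per-frame counter in a wrapper layer.
  var currentContext  = null;
  var implicitFlushes = 0;

  function makeCurrent(gl) {
    if (currentContext && currentContext !== gl) {
      currentContext.flush();   // implicit flush on context switch
      implicitFlushes++;
    }
    currentContext = gl;
  }

  function endFrame() {
    if (implicitFlushes >= 5) {
      console.warn(implicitFlushes + ' implicit flushes this frame');
    }
    implicitFlushes = 0;
  }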
>> >> >>
>> >> >>
>> >> >> On Mon, Jul 16, 2012 at 5:18 PM, Gregg Tavares (社用)
>> >> >> <gman@google.com>
>> >> >> wrote:
>> >> >>>
>> >> >>> So ........... I've been playing around with sharing resources
>> >> >>> across WebGL contexts and I'm running into issues and looking for
>> >> >>> solutions.
>> >> >>>
>> >> >>> The biggest issue is that GL is command buffer driven, so calling
>> >> >>> some GL function doesn't mean it's actually been executed. You have
>> >> >>> to call glFlush. This raises lots of issues where a program might
>> >> >>> work on some browser / driver / platform combo but not others if
>> >> >>> the user forgets to call flush.
>> >> >>>
>> >> >>> For example, assume I have 2 contexts sharing resources, gl1 and
>> >> >>> gl2:
>> >> >>>
>> >> >>>
>> >> >>>   var vs = gl1.createShader(gl1.VERTEX_SHADER);
>> >> >>>   var fs = gl1.createShader(gl1.FRAGMENT_SHADER);
>> >> >>>   //...
>> >> >>>   // assume shaders are validly compiled
>> >> >>>   // ...
>> >> >>>   var p = gl1.createProgram();
>> >> >>>   gl1.attachShader(p, vs);
>> >> >>>   gl1.attachShader(p, fs);
>> >> >>>   gl2.linkProgram(p);
>> >> >>>
>> >> >>> I attached on gl1 but linked on gl2. There is no guarantee in GL
>> >> >>> that that link will succeed because the 2 attach commands may have
>> >> >>> only been queued and not executed, in which case the linkProgram
>> >> >>> call will fail with "missing shaders". The correct code is
>> >> >>>
>> >> >>>   var p = gl1.createProgram();
>> >> >>>   gl1.attachShader(p, vs);
>> >> >>>   gl1.attachShader(p, fs);
>> >> >>>   gl1.flush(); // <--- important
>> >> >>>   gl2.linkProgram(p);
>> >> >>>
>> >> >>> That seems unacceptable.
>> >> >>>
>> >> >>> 2 approaches off the top of my head:
>> >> >>>
>> >> >>> 1) Try to make it just work without the flush
>> >> >>>
>> >> >>> One solution might be for the implementation to track which
>> >> >>> context last had a call. Any call to a different context causes an
>> >> >>> automatic implicit flush.
>> >> >>>
>> >> >>> 2) Try to make it fail when not done correctly
>> >> >>>
>> >> >>> This solution would be to try to track if an object is "dirty"
>> >> >>> (state has been changed) and no flush has been issued since then
>> >> >>> for that object. When an object is used, if it's dirty then either
>> >> >>> a flush is issued (which is solution #1) or an error is generated
>> >> >>> ("called function on dirty object, did you forget to call flush?").
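
A wrapper-level sketch of that dirty tracking; markDirty would be called
from every state-changing entry point, and I throw an exception here for
brevity where an implementation would presumably generate a GL error:

  // Sketch of solution #2: track objects mutated since the last flush.
  var dirty = new Set();   // objects changed and not yet flushed

  function markDirty(obj) { dirty.add(obj); }   // e.g. from attachShader
  function onFlush()      { dirty.clear(); }    // per-context in a real version

  function checkUsable(obj) {
    if (dirty.has(obj)) {
      throw new Error('called function on dirty object, did you forget to call flush?');
    }
  }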
>> >> >>>
>> >> >>> Neither solution seems ideal. Worse, whatever solution is chosen
>> >> >>> also has issues if we ever get WebGL in Workers.
>> >> >>>
>> >> >>> A 3rd solution is just to leave it with the flush required, and
>> >> >>> forgetting means you get random success / failure. Not such a great
>> >> >>> prospect. :-(
>> >> >>>
>> >> >>> Thoughts?
