
Re: [Public WebGL] Issues with sharing resources across contexts

I think it would be important to structure this around use-cases: why do we want more contexts, and what will we do with them?

1) Putting multiple canvases on one page showing the same underlying data in different ways. For instance, multiple viewports of a 3D editor. I know some will object that this is a job for gl.viewport, but it is not: those viewports could be arranged in a UI as their own draggable/resizable "widgets". Similar use-cases can be found in many games.

2) Operating one context from multiple threads of execution (e.g. web workers), where for instance one worker would tessellate geometry and upload it to a VBO while the main thread does other things.

I can't think of other use-cases off the top of my head, but I'm sure there are more that are structurally different.

Regarding the need to show the same underlying data in multiple views, there are different solutions:
 a) Create two GL contexts and have them share resources. This introduces the aforementioned synchronization issues.
 b) Operate one context with the ability to target multiple front buffers. For instance, something like FBOs, except the rendering is not "off screen": you'd attach, say, a canvas as the render target.
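Option (b) could look something like this toy sketch. To be clear, createStandaloneContext and bindDrawingBuffer are invented names, not real WebGL API, and the "canvases" here are plain objects standing in for real ones:

```javascript
// Hypothetical sketch of option (b): one context, several on-screen
// render targets. All names below are invented for illustration.
function createStandaloneContext() {
  return {
    target: null,
    drawn: [],
    bindDrawingBuffer(canvas) { this.target = canvas; }, // like binding an FBO,
    clear() { this.drawn.push(this.target); },           // but presented on-screen
  };
}

const canvasA = { id: "A" }, canvasB = { id: "B" };
const gl = createStandaloneContext();
gl.bindDrawingBuffer(canvasA);
gl.clear(); // "renders" to canvasA
gl.bindDrawingBuffer(canvasB);
gl.clear(); // "renders" to canvasB
console.log(gl.drawn.map(c => c.id)); // [ 'A', 'B' ]
```

Since only one context exists, there is nothing to synchronize between contexts; the cost is that all rendering is serialized through it.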

Regarding multi-process use of one context: if we write code like this at the moment, we synchronize using the web workers' buffer-passing semantics, where a buffer is held by only one worker at a time. Any kind of multi-threaded work will always need to synchronize properly, so I don't feel that making it possible to pass resources to workers (meaning they "own" them for the duration of their work) is much different from passing a buffer to a worker, which means the same thing. It might be a bit more convenient to operate the API in the worker directly instead of sending buffers around, but probably not all that much. One added benefit of passing resources is that if you emit a command that would cause a finishing wait, that delay could be encapsulated in the worker rather than holding up the main thread.

On Tue, Jul 17, 2012 at 3:03 AM, Gregg Tavares (社用) <[email protected]> wrote:
I agree that solution would work for 2+ contexts in a single webpage since webpages are single threaded. Is that enough for now? Do we care about WebWorkers where that solution will no longer be enough?

Lemme throw out some ideas. They probably suck but maybe they'll jog some others. 

What if a resource could only be used in 1 context at a time and you had to transfer ownership to use it in another context? That might solve this issue as well. Would that be too restrictive? What if an object could be used in any context but its state only set in the context that "owns" it? Not sure that would help. In the case above it would mean any context could call UseProgram but only the context that owns the program could call AttachShader and LinkProgram, since those change state. Of course that wouldn't solve every issue because UseProgram requires the results of LinkProgram.
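A toy model of the "state only set in the owning context" idea might look like this (illustrative only; the contexts, ownership field, and error text are invented stand-ins, not real WebGL):

```javascript
// Toy model: any context may use an object, but only its owner may mutate it.
function makeProgram(owner) {
  return { owner, shaders: [] };
}
function attachShader(ctx, program, shader) {
  if (program.owner !== ctx) {
    throw new Error("INVALID_OPERATION: context does not own this program");
  }
  program.shaders.push(shader);
}

const gl1 = {}, gl2 = {};
const p = makeProgram(gl1);
attachShader(gl1, p, "vs"); // ok: gl1 owns p
try {
  attachShader(gl2, p, "fs"); // throws: gl2 may use p but not mutate it
} catch (e) {
  console.log(e.message);
}
```

As the email notes, this still doesn't resolve ordering: a non-owning context calling UseProgram needs the owner's LinkProgram to have actually executed.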

I'm just throwing that out there. The idea is probably not workable, but it would be nice to try to design a WebGL API for WebWorkers that didn't have the ambiguity that OpenGL has as far as execution order. Whatever that solution is might also work for 2 contexts in the same page.

On Mon, Jul 16, 2012 at 5:36 PM, Ben Vanik <[email protected]> wrote:
In previous systems I've worked with I've used implicit flushes when swapping contexts and it made life much easier. Every call in the graphics layer basically had this:
void MyGraphicsAPI::SomeCall() {
  MakeCurrent();  // ... then issue the actual commands
}
void MyGraphicsAPI::MakeCurrent() {
  if (threadCurrentContext != this) {
    threadCurrentContext->Flush();  // implicit flush on context switch
    // ... set global threadCurrentContext, etc.
  }
}

Using some macro-fu it was made to be a single variable lookup and branch that predicted well per call, and in the common case of single-context apps had little impact on performance. In apps with multiple contexts it made life much easier when dealing with the sometimes long and hairy code sequences that touched both (rare, but often unavoidable). Had the flush not been implicit it would have required a significant amount of bookkeeping logic that every piece of code that touched a context would have to interact with - yuck. It also meant that code could be moved between apps that used single contexts and multiple contexts without having to change, or the app could even decide at runtime with no ill effect. Explicit flushes would have made that a nightmare.

The one downside of the implicit flushes is that it's easy to start flushing all over the place without knowing it. A simple counter of flushes per frame was enough to warn that this was happening, though: the target was usually <5, and if you messed things up you would see dozens. It's also often much trickier to find missing explicit flushes, which cause subtle and sometimes very hard-to-identify behavioral differences. Just look at the number of WebGL pages out there today that emit warnings and imagine what it'd be like with this x_x

On Mon, Jul 16, 2012 at 5:18 PM, Gregg Tavares (社用) <[email protected]> wrote:
So ........... I've been playing around with sharing resources across WebGL contexts and I'm running into issues and looking for solutions.

The biggest issue is that GL is command-buffer driven, so calling some GL function doesn't mean it has actually been executed; you have to call glFlush. This raises lots of issues where a program might work on some browser / driver / platform combo but not others if the user forgets to call flush.

For example, assume I have 2 contexts sharing resources, gl1 and gl2:

  var vs = gl1.createShader(gl1.VERTEX_SHADER);
  var fs = gl1.createShader(gl1.FRAGMENT_SHADER);
  // assume shaders are validly compiled
  // ...
  var p = gl1.createProgram();
  gl1.attachShader(p, vs);
  gl1.attachShader(p, fs);
  gl2.linkProgram(p);

I attached on gl1 but linked on gl2. There is no guarantee in GL that the link will succeed, because the 2 attach commands may have only been queued and not executed, in which case the linkProgram call will fail with "missing shaders". The correct code is

  var p = gl1.createProgram();
  gl1.attachShader(p, vs);
  gl1.attachShader(p, fs);
  gl1.flush(); // <--- important
  gl2.linkProgram(p);

That seems unacceptable. 
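To see why the flush matters, here's a toy model of the command-queue behavior (illustrative only; real drivers may flush at unspecified points): each context queues its commands, and changes to shared state become visible to the other context only after a flush.

```javascript
// Toy model: commands queue per context; shared state is visible only
// once the issuing context's queue is flushed.
function makeContext(shared) {
  return {
    queue: [],
    attachShader(p, s) { this.queue.push(() => shared.attached.push(s)); },
    flush() { this.queue.forEach(cmd => cmd()); this.queue = []; },
    linkProgram() { return shared.attached.length === 2; }, // needs vs + fs
  };
}

const shared = { attached: [] };
const gl1 = makeContext(shared), gl2 = makeContext(shared);

gl1.attachShader("p", "vs");
gl1.attachShader("p", "fs");
console.log(gl2.linkProgram()); // false — attaches still queued in gl1
gl1.flush();
console.log(gl2.linkProgram()); // true — now visible to gl2
```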

2 approaches off the top of my head:

1) Try to make it just work without the flush

One solution might be for the implementation to track which context last had a call; any call to a different context would cause an automatic implicit flush.
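A sketch of what that tracking could look like, assuming a wrapper around each call (makeContext, call, and flushCount are invented names for illustration; flushCount just records that the implicit flush happened):

```javascript
// Toy model of solution #1: track the last-used context and flush it
// automatically whenever a different context issues a call.
let lastContext = null;
function makeContext(name) {
  const ctx = {
    name,
    flushCount: 0,
    flush() { this.flushCount++; },
    call(fn) {
      if (lastContext && lastContext !== ctx) lastContext.flush();
      lastContext = ctx;
      fn();
    },
  };
  return ctx;
}

const gl1 = makeContext("gl1"), gl2 = makeContext("gl2");
gl1.call(() => {}); // gl1 becomes current
gl2.call(() => {}); // switching contexts flushes gl1 implicitly
console.log(gl1.flushCount); // 1
```

This matches the per-call check-and-branch approach described earlier in the thread: single-context apps never hit the flush path.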

2) Try to make it fail when not done correctly

This solution would be to track whether an object is "dirty" (its state has been changed and no flush has been issued since then). When a dirty object is used, either a flush is issued (which amounts to solution #1) or an error is generated ("called function on dirty object, did you forget to call flush?").
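A sketch of the dirty-tracking idea (invented names; a real implementation would track dirtiness per object per context rather than with a single flag):

```javascript
// Toy model of solution #2: mutations mark the object dirty; using a
// dirty object from another context raises an error instead of
// silently failing.
function makeTrackedProgram() {
  return { dirty: false };
}
function attachShader(ctx, p) { p.dirty = true; }
function flush(ctx, p) { p.dirty = false; }
function linkProgram(ctx, p) {
  if (p.dirty) {
    throw new Error("called function on dirty object, did you forget to call flush?");
  }
  return true;
}

const gl1 = {}, gl2 = {};
const p = makeTrackedProgram();
attachShader(gl1, p);
try {
  linkProgram(gl2, p); // error: p is dirty, no flush since the attach
} catch (e) {
  console.log(e.message);
}
flush(gl1, p);
console.log(linkProgram(gl2, p)); // true
```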

Neither solution seems ideal. Worse, whatever solution is chosen also has issues if we ever get WebGL in Workers.

A 3rd solution is just to leave the flush required; forgetting it means you get random success / failure. Not such a great prospect. :-(