
Re: [Public WebGL] Issues with sharing resources across contexts



Vladimir above mentioned the case of rendering to a canvas from a worker. I think we should look at their proposal once they make it public. But that specific use case is orthogonal to the issues of shared resources among multiple canvases and shared resources with workers.

So, back to the shared-resource issues. Others have suggested passing the objects. So,

worker.postMessage({
   texture: someTexture,
});

would pass ownership of the WebGLTexture referenced by 'someTexture' to the worker. At that point using 'someTexture' in the main page would start generating INVALID_OPERATION just like a WebGLTexture created in another context does now.

That sounds simpler. Basically, you don't have to designate which objects are shared; you can just use them in only one thread (main or worker) at a time.
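The transfer-of-ownership semantics above can be sketched with a small mock. To be clear, this is not a real WebGL API: `MockContext`, `transferToWorker`, and the `neutered` flag are illustrative names standing in for what `worker.postMessage` with a WebGLTexture might do under this proposal.

```javascript
// Illustrative mock of the proposed semantics (not real WebGL):
// a handle transferred to a worker becomes unusable in the sender,
// the same way an object from another context fails today.
const INVALID_OPERATION = 0x0502; // actual GL enum value

class MockContext {
  constructor() { this.error = 0; }
  createTexture() { return { owner: this, neutered: false }; }
  bindTexture(tex) {
    if (tex.neutered || tex.owner !== this) {
      this.error = INVALID_OPERATION; // using a transferred object fails
      return;
    }
    // ...bind as usual
  }
  getError() { const e = this.error; this.error = 0; return e; }
}

// Stand-in for worker.postMessage({texture: someTexture}): ownership
// moves to the worker, so the sender's handle is "neutered".
function transferToWorker(workerContext, tex) {
  tex.neutered = true;                              // sender loses access
  return { owner: workerContext, neutered: false }; // worker-side handle
}

const main = new MockContext();
const worker = new MockContext();
const tex = main.createTexture();

main.bindTexture(tex);
console.log(main.getError()); // 0: still owned by the main context

const workerTex = transferToWorker(worker, tex);
main.bindTexture(tex);
console.log(main.getError() === INVALID_OPERATION); // true: ownership moved
worker.bindTexture(workerTex);
console.log(worker.getError()); // 0: the worker can use it
```

The nice property this models is that there is no explicit acquire/release step: the transfer itself is the synchronization point.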




On Fri, Aug 24, 2012 at 8:18 AM, Carlos Scheidegger <carlos.scheidegger@gmail.com> wrote:
Florian Bösch wrote:

On Fri, Aug 24, 2012 at 3:18 AM, Gregg Tavares (社用) <gman@google.com> wrote:
Actually, hmmm, I take this back.

It sounds like workers require a different solution than 2+ contexts in the same page. Ideally, if you have 2 contexts in the same page you shouldn't have to call acquire/release at all. Otherwise, the simple case of a 3D editor with multiple views, each view having a different canvas, would really suck as you'd have to acquire/release hundreds of resources.

Well, more food for thought.
The worker<->main-thread interaction is an interesting use case. But I agree that a complicated synchronization pattern would make multi-view coding very hard. I think we should come up with two separate solutions:
- worker<->main-thread API/interaction
- multi-view compositing, where a "canvas" is just a proxy stand-in for an RTT target that the compositor picks up to paste into the page, and the user is responsible for filling the attached/associated texture on the animation frame callback. That could be solved elegantly by specifying that you can pass a canvas as a color attachment.

Speaking as someone who's developing an application where an arbitrarily large number of WebGL canvases could be created, I want to strongly support this second suggestion.

How hard would it be to specify that a "detached" WebGL context can be created, one for which drawing calls with a null framebuffer always fail? Then, gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer_with_attached_canvas) would behave like Florian described.

The only change I would suggest is that instead of overloading framebufferTexture2D, the elegant solution would be to add a framebufferCanvas entry point. This way, it would be possible to eliminate the confusion that would arise from having to pass a texture target.

One possible minimal entry point would be something like

void framebufferCanvas(GLenum target, HTMLCanvasElement element)

This would make it clear that none of the possible parameters in framebufferTexture2D (texture target, attachments, and mipmap levels) are applicable for canvas targets. This call should behave roughly equivalently to creating a plain RTT texture with the right size and attaching it to COLOR_ATTACHMENT0.
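Taken together, the detached-context idea and the proposed framebufferCanvas entry point can be sketched as a mock. None of this is real WebGL: `DetachedContext`, `framebufferCanvas`, and the plain-object canvas stand-in are hypothetical names used only to show the intended behavior (draws with a null framebuffer fail; draws into a canvas-backed framebuffer succeed).

```javascript
// Mock of the proposal in this thread (not real WebGL): a "detached"
// context has no default drawing buffer, so draws with a null
// framebuffer fail; framebufferCanvas(canvas) attaches a canvas as if
// it were an RTT texture on COLOR_ATTACHMENT0.
const INVALID_FRAMEBUFFER_OPERATION = 0x0506; // actual GL enum value

class DetachedContext {
  constructor() { this.boundFramebuffer = null; this.error = 0; }
  createFramebuffer() { return { colorAttachment: null }; }
  bindFramebuffer(fb) { this.boundFramebuffer = fb; }
  // Proposed entry point: attach a canvas instead of a texture.
  framebufferCanvas(canvas) {
    this.boundFramebuffer.colorAttachment = canvas;
  }
  drawArrays() {
    // No default framebuffer in a detached context: drawing with a
    // null or incomplete framebuffer fails.
    if (!this.boundFramebuffer || !this.boundFramebuffer.colorAttachment) {
      this.error = INVALID_FRAMEBUFFER_OPERATION;
    }
  }
  getError() { const e = this.error; this.error = 0; return e; }
}

const gl = new DetachedContext();
gl.drawArrays();
console.log(gl.getError() === INVALID_FRAMEBUFFER_OPERATION); // true: null framebuffer

const canvasA = { width: 300, height: 150 }; // stand-in for an HTMLCanvasElement
const fb = gl.createFramebuffer();
gl.bindFramebuffer(fb);
gl.framebufferCanvas(canvasA);
gl.drawArrays();
console.log(gl.getError()); // 0: renders into the canvas-backed attachment
```

In a multi-view editor, each view's canvas would get its own framebuffer this way, while all of them draw from the one detached context's buffers and textures.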

This is a big change for the spec, so I understand if it would be non-trivial to get right. But it would be absolutely fantastic if you do get it right. I routinely use ~100MB attribute buffers, and guaranteeing that different canvas elements share those attribute buffers would be a huge win. 

-carlos