
Re: [Public WebGL] Issues with sharing resources across contexts



On Fri, Aug 24, 2012 at 7:52 PM, Gregg Tavares (社用) <gman@google.com> wrote:
or similar that would mean sharing resources on the main page is not needed.
That's exactly my thinking; it would simplify the design of the resource sharing because you wouldn't have to consider compositing of canvases on the main page.
 
Would this work?

var canvas1 = document.createElement("canvas");
var canvas2 = document.createElement("canvas");
var gl1 = canvas1.getContext("webgl");
var gl2 = canvas2.getContext("webgl");

// Now both canvases are WebGL canvases.

// Swap them, just for fun (hypothetical: binding a canvas as a framebuffer):
gl1.bindFramebuffer(gl1.FRAMEBUFFER, canvas2);
gl2.bindFramebuffer(gl2.FRAMEBUFFER, canvas1);

It seems like a bit of a waste to have to create the second context just so the canvas is a WebGL canvas.

Also, it seems like overloading the meaning of bindFramebuffer might not be the best method, since there are various operations you can perform on the currently bound framebuffer. So maybe it should be something like

gl.bindBackbuffer(canvas)

or

gl.bindCanvas(canvas)

and then, as normal, calling

gl.bindFramebuffer(gl.FRAMEBUFFER, null)

renders to the current backbuffer/canvas with all the limits that normally entails (you can't call framebufferRenderbuffer or framebufferTexture2D on it).
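A minimal sketch of how the single-context variant might look. Note that `bindCanvas` is the hypothetical entry point being proposed here, not part of any WebGL spec, and `drawScene`/`drawOtherScene` are assumed helpers:

```javascript
// Hypothetical API sketch: one WebGL context driving two canvases.
// bindCanvas() is a proposed entry point, not an existing API.
var canvasA = document.createElement("canvas");
var canvasB = document.createElement("canvas");
var gl = canvasA.getContext("webgl"); // only one context needed

// Render into canvasA's backbuffer (the default target).
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
drawScene(gl); // assumed helper that issues the draw calls

// Retarget the default framebuffer at canvasB and render again.
gl.bindCanvas(canvasB);          // hypothetical
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
drawOtherScene(gl);              // assumed helper
```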
Semantically that all works for me; I find I wouldn't have a preference whether we call this attaching a buffer to an FBO or binding a different frontbuffer.

There's one area where the FBO semantics do have an advantage, though. Suppose you display partial results (say, the albedo of a scene), then re-use the filled depth buffer to render something else (say, deferred lighting), and you have, for some reason, the desire to show both on the page (it's not contrived, I promise; I have such an example right now and will release it in the next few days). Since there isn't a way to attach (or even explicitly obtain) the depth buffer for use on the front, you couldn't wire your rendering together to share it. So what you'd end up doing is rendering to an FBO and then sampling that FBO to blit to the canvas. That's still far better than the alternatives, just slightly inelegant compared to doing it directly. On the other hand, if you feel the API is much cleaner with the bind-frontbuffer semantic, I don't think that's a big drawback.
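The render-to-FBO-then-blit workaround above can be sketched in standard WebGL 1. The sizes and the `drawAlbedoPass`/`drawLightingPass`/`drawFullscreenQuad` helpers are placeholders for illustration:

```javascript
// Sketch of the FBO-then-blit workaround (standard WebGL 1 calls).
var width = 512, height = 512;

// Color texture to hold the albedo pass.
var colorTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, colorTex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, null);
// LINEAR min filter so the texture is complete without mipmaps.
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

// Depth renderbuffer that both passes share.
var depthRb = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, depthRb);
gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, width, height);

var fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, colorTex, 0);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
                           gl.RENDERBUFFER, depthRb);

// Pass 1: albedo into the FBO (fills depthRb as a side effect).
drawAlbedoPass(gl);                       // assumed helper

// Pass 2: deferred lighting, reusing the same depth attachment.
drawLightingPass(gl);                     // assumed helper

// Finally, sample the FBO's color texture to "blit" to the canvas.
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.bindTexture(gl.TEXTURE_2D, colorTex);
drawFullscreenQuad(gl);                   // assumed helper
```

The extra fullscreen-quad draw is the "slightly inelegant" step; with an attachable frontbuffer depth buffer the final pass could render straight to the canvas.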