
Re: [Public WebGL] using the same context with multiple canvases



On Sat, 10 Nov 2012, Chris Marrin wrote:
> 
> Hopefully it can be spec'ed to be agnostic about buffering. In the iOS 
> implementation there can be more than two buffers even. Once you're done 
> drawing you hand the buffer to the system and from then on it's out of 
> your hands.

Yeah, the only buffering that gets specced is what you can detect from 
script.


> If I can get a reference to a canvas why can't I get its context so I 
> can use it for drawing?

In the case of having created the context with getContext(), meaning the 
context is stuck in the same thread as the canvas for all time, you can.

In the case of having created a context with a constructor, meaning you 
can end up binding the context to a canvas that sits in another process 
(using a CanvasProxy), my proposal is that this not be allowed because 
otherwise you have to synchronously transfer the entire graphics state or 
somehow make the rendering contexts thread-safe. Doing this is non-trivial 
and seems like something that we shouldn't require of implementations 
without there being a really strong use case.
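
For concreteness, here is a minimal sketch of the two binding models being 
contrasted, using the constructor, CanvasProxy, transferControlToProxy(), 
and setContext() names from this proposal; none of this is shipped API, 
and the exact names are assumptions:

   // Same-thread case: the context comes from the canvas and stays
   // bound to it, so the canvas can always hand it back.
   var canvas = document.querySelector('canvas');
   var ctx = canvas.getContext('2d');

   // Cross-process case (hypothetical proposal API): the page ships a
   // CanvasProxy to a worker; the worker constructs its own context
   // and attaches it with setContext().
   var worker = new Worker('draw.js');
   var proxy = canvas.transferControlToProxy();
   worker.postMessage({ canvas: proxy }, [proxy]);

   // ...inside draw.js:
   //   var workerCtx = new CanvasRenderingContext2D();
   //   e.data.canvas.setContext(workerCtx);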


On Sun, 11 Nov 2012, Gregg Tavares wrote:
> 
> If the canvas had its own drawing buffer and all the context does is 
> provide an API to draw, then you could do this:
> 
>   function ThirdPartyLibrary() {
>       var oldContext = canvas.getContext();
>       canvas.setContext(contextThatBelongsToLibrary);
>       drawStuffWithoutAffectingStateOfUsersContext();
>       canvas.setContext(oldContext);
>   }

If there's a reason to do that, then that makes sense. But what reason 
would anyone have to do that? The way I'm proposing it, as soon as you 
call setContext() again, the work the library did will be blown away.

Can you describe the use case that would lead to this?


> I don't see the point in having a backing store associated with each 
> context instead of the canvas. That's not how it works currently in 
> Chrome for canvas 2d, nor how it needs to work in any browser, which is 
> one of the things that led us to this design in the first place.

I don't understand how you can do cross-process canvas drawing without it. 
Could you elaborate on how you see this working?


> Each canvas, 2d or 3d, is just a framebuffer object (a texture and/or 
> depth+stencil). Being able to set which framebuffer object is currently 
> being rendered to, by calling setContext() or, as previously mentioned, 
> gl.bindDrawingBuffer(canvas), is the simplest solution and provides 
> lots of benefits, including the one above.
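
As a point of reference, here is a minimal sketch of the model being 
described, with each canvas owning its own drawing buffer and one context 
switching between them; the WebGLRenderingContext constructor and 
gl.bindDrawingBuffer() are hypothetical names from this thread, not 
shipped API, and canvasA/canvasB and drawSceneA()/drawSceneB() are 
placeholders:

   var gl = new WebGLRenderingContext();   // hypothetical constructor

   // Each canvas owns its own colour (and depth/stencil) buffer; the
   // context only selects which one it is currently rendering into.
   gl.bindDrawingBuffer(canvasA);
   gl.viewport(0, 0, canvasA.width, canvasA.height);
   drawSceneA(gl);

   gl.bindDrawingBuffer(canvasB);
   gl.viewport(0, 0, canvasB.width, canvasB.height);
   drawSceneB(gl);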

But if you do this cross-process, how do you ensure that you don't render 
partially-finished content? Unless you have a double-buffer solution of 
some sort, I don't understand how you sync with the screen.


On Sun, 11 Nov 2012, Gregg Tavares wrote:
> On Sun, Nov 11, 2012 at 8:15 AM, Ian Hickson <ian@hixie.ch> wrote:
> >
> > For the 2D canvas, what I'm writing up in the proposal is that once 
> > you commit (or once the event loop spins) the bitmap associated with 
> > the context gets pushed to the canvas, along with the dimensions of 
> > the bitmap (which affects the canvas element's intrinsic dimensions) 
> > and the origin-clean flag (which affects whether you can read it).
> 
> The problem we were trying to solve was being able to use 1 context to 
> render to multiple canvases. Each canvas is a different size. The 
> typical application that needs this is a 3D editor with multiple 3D 
> views, each sized by the user.
> 
> The reason we need 1 context to be able to write to all of those 
> canvases is that sharing GL resources brings with it a huge number of 
> state issues. The solution was just being able to use the same context 
> on multiple canvases.
> 
> But that means each canvas has its own backing store, as each canvas is 
> a different size. We can't use a solution where the backing store is 
> part of the context, not the canvas.

Maybe the WebGL and 2D rendering models are just more different than I 
realised.
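
To make sure I understand the use case, here is a minimal sketch of the 
multi-view editor described above, written against the setContext()/ 
commit() shape discussed in this thread; the API is hypothetical and 
drawView() is a placeholder:

   var gl = new WebGLRenderingContext();        // hypothetical constructor
   var views = document.querySelectorAll('canvas.view');  // differently sized canvases

   for (var i = 0; i < views.length; i++) {
     views[i].setContext(gl);                   // bind the one shared context
     gl.viewport(0, 0, views[i].width, views[i].height);
     drawView(gl, i);                           // per-view rendering (placeholder)
     gl.commit();                               // push this view's frame to its canvas
   }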

If you create a WebGLRenderingContext but don't have it assigned to a 
canvas, does it simply not make any sense to drawImage() that context?

With a CanvasRenderingContext2D, it can be meaningful to reason about 
drawing the output of the context itself, even without a <canvas> backing 
it. Even if you don't have a backing canvas, you can still use 
getImageData(), for example. My plan is to make drawImage() with the 
context take the image from the context's bitmap (whatever is currently 
being drawn, no buffering), and to have drawImage() with the canvas or 
CanvasProxy object, in the case of one of these cross-process contexts, 
take the most recently commit()ted bitmap.
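
A minimal sketch of what that distinction would look like in use, 
assuming the proposal's semantics; compositeCanvas, workerContext, and 
remoteCanvas are made-up names for illustration:

   var compositeCanvas = document.querySelector('#composite');  // made-up id
   var compositeCtx = compositeCanvas.getContext('2d');

   // Drawing the context samples whatever that context has drawn so
   // far, with no buffering in between.
   compositeCtx.drawImage(workerContext, 0, 0);

   // Drawing the canvas (or its CanvasProxy) samples the bitmap from
   // the most recent commit().
   compositeCtx.drawImage(remoteCanvas, 0, 0);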

What should drawImage() do when applied to a disconnected 
WebGLRenderingContext? Should it just not be possible?

We don't need to make the WebGLRenderingContext have a separate bitmap; 
I'm just trying to work out how it should work.

-- 
Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'