
Re: [Public WebGL] using the same context with multiple canvases



I like the idea of a separate DrawingBuffer object, though it does raise some interesting questions: could a single drawing buffer be attached to multiple canvases? And could a drawing buffer be the target of multiple drawing contexts simultaneously? Both scenarios seem like they would have well-defined behavior, but perhaps there are some unforeseen complications?

In any case, that's probably my favorite idea thus far. It also doesn't hurt that it hews a little closer to the structure of DirectX 10/11, which would make future ANGLE implementations straightforward.
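
For instance, assuming something like the DrawingBuffer API proposed below (all names hypothetical), sharing one buffer between two canvases might look like:

   var db = new DrawingBuffer({alpha: true, antialias: true});
   canvasA.setDrawingBuffer(db);
   canvasB.setDrawingBuffer(db);   // do both canvases now show the same pixels?
   gl.bindDrawingBuffer(db);
   gl.clear(gl.COLOR_BUFFER_BIT);  // and does this invalidate both at once?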

On Nov 11, 2012 11:02 PM, "Gregg Tavares (社用)" <gman@google.com> wrote:



On Mon, Nov 12, 2012 at 3:44 PM, Ian Hickson <ian@hixie.ch> wrote:
On Sun, 11 Nov 2012, James Robinson wrote:
> On Sat, Nov 10, 2012 at 10:57 PM, Ian Hickson <ian@hixie.ch> wrote:
> >
> > Between the time that the timeout fires and the time the timeout ends,
> > which should hopefully be noticeable unless your computer is quite new
> > (in which case just up the iteration count, thanks!), you can see the
> > canvas on the screen is still completely transparent, but the bitmap
> > of the canvas/context itself (they're shared in this case) is clearly
> > not showing the same thing, since after the loop has finished, you see
> > two squares, despite only one having been drawn -- the second is a
> > copy from this "off-screen" buffer to itself.
>
> What we actually do (in Chrome) is slightly more nuanced.  In general,
> we use a single buffer for 2d canvas and accumulate draw calls as they
> are made but defer rasterization as late as possible.  For this case,
> since it's doing getImageData(), we are forced to actually rasterize to
> an offscreen buffer while script is running.  When this happens, we
> simply avoid compositing new frames until the script yields.  This
> optimizes for memory use at the cost of performance for pages that call
> getImageData(), but getImageData() is horribly slow for any GPU-backed
> implementation anyway.  We may double or N-buffer canvas in the future
> but it isn't strictly necessary for correctness or for performance in
> the general case.
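
To make the forcing case concrete, it's just this pattern (standard Canvas 2D API):

   var ctx = canvas.getContext('2d');
   ctx.fillRect(0, 0, 100, 100);           // recorded; rasterization deferred
   var px = ctx.getImageData(0, 0, 1, 1);  // forces rasterization to an
                                           // offscreen buffer right now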

Well, the other option is just to have no way to read a bitmap from an
independently created context unless the context has been commit()ted
(either directly via commit() or indirectly via the event loop spinning),
and thus have drawImage() and getImageData() in those cases always grab
data from the actual backing store. This means we'd have one bitmap per
canvas, plus one per non-bound context, the latter of which would go away
once you bound the context.

Or we could just have one per canvas, and an unbound context is unusable;
then, to draw off-screen in a worker, you could create a virtual canvas
to which you bind a context.
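
To make that concrete, the worker case might look like this (a sketch mixing in the commit() idea from above; every name below is hypothetical):

   // In a worker: a context is unusable until bound to a (virtual) canvas.
   var vc = new VirtualCanvas(256, 256);   // hypothetical constructor
   var gl = new WebGLRenderingContext();   // unbound, so unusable for now
   gl.bindToCanvas(vc);                    // hypothetical bind step
   // ... draw ...
   gl.commit();                            // bitmap becomes readable here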

How about:

   db = new DrawingBuffer(creationAttributes);

For WebGL:

   gl.bindDrawingBuffer(db);

For Canvas 2D:

   ctx.setDrawingBuffer(db);

For the canvas element:

   canvas.setDrawingBuffer(db);

Would that work?

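If so, usage could be as simple as (sketch; the canvas-less context constructor is made up here):

   var gl = new WebGLRenderingContext();   // hypothetical canvas-less ctor
   var db = new DrawingBuffer({alpha: true});
   canvas.setDrawingBuffer(db);            // the canvas displays this buffer
   gl.bindDrawingBuffer(db);               // the context renders into it
   gl.clearColor(0, 0, 0, 1);
   gl.clear(gl.COLOR_BUFFER_BIT);          // shows up on the canvas
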
> For WebGL, we have multiple buffers depending on the context parameters
> and system capabilities.  The default on most systems is antialiased:
> true and preserveDrawingBuffer: false in which case we render into a
> multisampled renderbuffer and alternate resolving it in to two textures
> for a total of 3 buffers but memory use slightly higher than 3 textures.
>
> I think it makes a lot of sense to make it easy for a single WebGL
> context to drive multiple canvas buffers since WebGL context state is so
> heavy compared to a 2d context.
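
For reference, the buffering James describes would look roughly like this in the OpenGL ES 3.0-style calls later exposed by WebGL2 (this thread predates that API, so treat it as a sketch):

   var W = 640, H = 480;
   var msaaFB = gl.createFramebuffer();
   var rb = gl.createRenderbuffer();
   gl.bindRenderbuffer(gl.RENDERBUFFER, rb);
   gl.renderbufferStorageMultisample(gl.RENDERBUFFER, 4, gl.RGBA8, W, H);
   gl.bindFramebuffer(gl.FRAMEBUFFER, msaaFB);
   gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                              gl.RENDERBUFFER, rb);
   // ... draw the scene into msaaFB ...
   // resolve, alternating between two texture-backed FBOs (setup omitted):
   gl.bindFramebuffer(gl.READ_FRAMEBUFFER, msaaFB);
   gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, resolveFB[frame % 2]);
   gl.blitFramebuffer(0, 0, W, H, 0, 0, W, H, gl.COLOR_BUFFER_BIT, gl.NEAREST);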

Today, when does a WebGLRenderingContext "commit" or "draw" to the
currently assigned canvas? Is it just when the event loop spins, or is
there an explicit "paint a frame now" method?

It's currently supposed to happen when the current event exits, IMO, but the spec is ambiguous: it says something about when the buffer is composited, which makes the behavior browser-specific if other events come in before compositing happens.

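For example, with today's wording it's unclear what the user sees here:

   // Ten clears within a single task: whether any but the last is ever
   // composited is browser-specific under the current spec text.
   for (var i = 0; i < 10; ++i) {
     gl.clearColor(i / 10, 0, 0, 1);
     gl.clear(gl.COLOR_BUFFER_BIT);
   }
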

On Mon, 12 Nov 2012, Mark Callow wrote:
>
> Only alpha and premultiplied are truly canvas attributes. Alpha because
> it potentially affects the storage for the canvas pixels, although more
> than likely both RGB & RGBA canvases use 32 bpp, and premultiplied
> because it affects how those pixels are interpreted, as does alpha.
> Except for preserved, the rest of the attributes involve additional
> memory blocks that could easily be reused for drawing to a different
> canvas. Whether you need preserved depends on your drawing algorithms so
> it should rightly be considered a context attribute.
>
> We could probably argue until the cows come home about whether to make
> depth, stencil and anti-alias canvas or context attributes. I suggest
> the best approach is to consider use cases and how well assignment as
> either fits those use cases.

When would you want one canvas done one way and another a different way,
for the same rendering context?

I can imagine wanting antialias: true for my 3D display and antialias: false for my texture selector.
Similarly, I can imagine wanting RGBA for my 3D display and RGB for my texture selector.
And I need depth on my 3D display but not on my texture selector.
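
In the DrawingBuffer scheme above that falls out naturally (sketch, same hypothetical names):

   var view   = new DrawingBuffer({antialias: true,  alpha: true,  depth: true});
   var picker = new DrawingBuffer({antialias: false, alpha: false, depth: false});
   displayCanvas.setDrawingBuffer(view);
   selectorCanvas.setDrawingBuffer(picker);
   // one heavy WebGL context then binds each buffer in turn to draw it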


 

--
Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'