On Sun, 11 Nov 2012, James Robinson wrote:
> On Sat, Nov 10, 2012 at 10:57 PM, Ian Hickson <email@example.com> wrote:
> > Between the time that the timeout fires and the time the timeout ends,
> > which should hopefully be noticeable unless your computer is quite new
> > (in which case just up the iteration count, thanks!), you can see the
> > canvas on the screen is still completely transparent, but the bitmap
> > of the canvas/context itself (they're shared in this case) is clearly
> > not showing the same thing, since after the loop has finished, you see
> > two squares, despite only one having been drawn -- the second is a
> > copy from this "off-screen" buffer to itself.
>
> What we actually do (in Chrome) is slightly more nuanced. In general,
> we use a single buffer for 2D canvas and accumulate draw calls as they
> are made, but defer rasterization as late as possible. For this case,
> since it's doing getImageData(), we are forced to actually rasterize to
> an offscreen buffer while script is running. When this happens, we
> simply avoid compositing new frames until the script yields. This
> optimizes for memory use at the cost of performance for pages that call
> getImageData(), but getImageData() is horribly slow for any GPU-backed
> implementation anyway. We may double- or N-buffer canvas in the future,
> but it isn't strictly necessary for correctness or for performance in
> the general case.

Well, the other option is just to have no way to read a bitmap from an
independently created context unless the context has been commit()ted
(either directly via commit() or indirectly via the event loop spinning),
and thus have drawImage() and getImageData() in those cases always grab
data from the actual backing store. This means we'd have one bitmap per
canvas, plus one per non-bound context, the latter of which would go away
once you bound the context.

Or we could just have one bitmap per canvas, and make an unbound context
unusable; then, to draw off-screen in a worker, you would create a
virtual canvas to which you can bind the context.
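As a rough illustration of the deferral James describes, here is a toy model (all names hypothetical, not Chrome's actual code): draw calls are merely recorded, and only a pixel read such as getImageData() forces them to be rasterized into the backing store.

```javascript
// Toy model of deferred rasterization: record draw calls, flush lazily.
class DeferredCanvas {
  constructor(width, height) {
    this.width = width;
    this.height = height;
    this.pending = [];          // recorded draw calls, not yet rasterized
    this.pixels = new Uint8ClampedArray(width * height * 4);
    this.rasterizeCount = 0;    // how many times a flush was forced
  }
  fillRect(x, y, w, h, rgba) {
    // Just record the call; no pixels are touched yet.
    this.pending.push({ x, y, w, h, rgba });
  }
  _rasterize() {
    // Flush all recorded calls into the backing store at once.
    for (const { x, y, w, h, rgba } of this.pending) {
      for (let row = y; row < y + h; row++) {
        for (let col = x; col < x + w; col++) {
          this.pixels.set(rgba, (row * this.width + col) * 4);
        }
      }
    }
    this.pending = [];
    this.rasterizeCount++;
  }
  getImageData(x, y) {
    // Reading pixels forces rasterization, as in a real implementation.
    if (this.pending.length) this._rasterize();
    const i = (y * this.width + x) * 4;
    return Array.from(this.pixels.slice(i, i + 4));
  }
}

const c = new DeferredCanvas(8, 8);
c.fillRect(0, 0, 4, 4, [255, 0, 0, 255]);
c.fillRect(1, 1, 2, 2, [255, 0, 0, 255]);
console.log(c.rasterizeCount);      // 0 -- nothing rasterized yet
console.log(c.getImageData(0, 0));  // forces a flush of both calls
console.log(c.rasterizeCount);      // 1 -- one batch, not one per call
```

The point of the model is the memory/performance trade-off in the quote: there is only one backing buffer, so a read while script is running has to stop the world and flush, which is why getImageData() is the slow path.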
> For WebGL, we have multiple buffers depending on the context parameters
> and system capabilities. The default on most systems is antialias:
> true and preserveDrawingBuffer: false, in which case we render into a
> multisampled renderbuffer and alternate resolving it into two textures,
> for a total of 3 buffers but memory use slightly higher than 3 textures.
>
> I think it makes a lot of sense to make it easy for a single WebGL
> context to drive multiple canvas buffers, since WebGL context state is
> so heavy compared to a 2D context.

Today, when does a WebGLRenderingContext "commit" or "draw" to the
currently assigned canvas? Is it just when the event loop spins, or is
there an explicit "paint a frame now" method?
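James's three-buffer arrangement can be sketched as a toy swap chain (names hypothetical; this only models which buffer plays which role, not actual GL calls): drawing always targets the one multisampled renderbuffer, and each frame is resolved into whichever of the two textures the compositor is not currently reading.

```javascript
// Toy model of the antialias: true, preserveDrawingBuffer: false default:
// one multisampled renderbuffer (buffer #1) resolved alternately into two
// textures (buffers #2 and #3), so the compositor can keep reading last
// frame's texture while the next frame resolves into the other one.
class SwapChain {
  constructor() {
    this.multisample = { kind: "msaa renderbuffer" };
    this.textures = ["texture A", "texture B"];
    this.next = 0;          // which texture the next resolve targets
    this.onScreen = null;   // what the compositor is currently reading
  }
  // Drawing always targets the multisampled renderbuffer.
  drawTarget() {
    return this.multisample;
  }
  // At frame end, resolve the samples into the other texture, then hand
  // the freshly resolved one to the compositor.
  resolveFrame() {
    const target = this.textures[this.next];
    this.next = 1 - this.next;
    this.onScreen = target;
    return target;
  }
}

const chain = new SwapChain();
console.log(chain.resolveFrame()); // "texture A"
console.log(chain.resolveFrame()); // "texture B" -- A was on screen
console.log(chain.resolveFrame()); // "texture A" again
```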
On Mon, 12 Nov 2012, Mark Callow wrote:
> Only alpha and premultipliedAlpha are truly canvas attributes: alpha
> because it potentially affects the storage for the canvas pixels
> (although more than likely both RGB and RGBA canvases use 32 bpp), and
> premultipliedAlpha because it affects how those pixels are interpreted,
> as does alpha.
>
> Except for preserveDrawingBuffer, the rest of the attributes involve
> additional memory blocks that could easily be reused for drawing to a
> different canvas. Whether you need preserveDrawingBuffer depends on
> your drawing algorithms, so it should rightly be considered a context
> attribute.
>
> We could probably argue until the cows come home about whether to make
> depth, stencil, and antialias canvas or context attributes. I suggest
> the best approach is to consider use cases and how well assignment as
> either fits those use cases.

When would you want one canvas done one way and another a different way,
for the same rendering context?
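For reference, the split Mark suggests can be summarized as below. The attribute names are the real WebGLContextAttributes members, but the grouping is only his proposal from this thread, not anything specified.

```javascript
// Mark's proposed split of WebGLContextAttributes (his suggestion only).
const CANVAS_ATTRS = ["alpha", "premultipliedAlpha"];    // storage + interpretation
const CONTEXT_ATTRS = ["preserveDrawingBuffer"];         // depends on drawing algorithm
const DEBATED_ATTRS = ["depth", "stencil", "antialias"]; // he leaves these open

function classify(attribute) {
  if (CANVAS_ATTRS.includes(attribute)) return "canvas";
  if (CONTEXT_ATTRS.includes(attribute)) return "context";
  if (DEBATED_ATTRS.includes(attribute)) return "debated";
  return "unknown";
}

console.log(classify("alpha"));                 // "canvas"
console.log(classify("preserveDrawingBuffer")); // "context"
```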
Ian Hickson U+1047E )\._.,--....,'``. fL
http://ln.hixie.ch/ U+263A /, _.. \ _\ ;`._ ,.
Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'