On Sat, Nov 10, 2012 at 10:57 PM, Ian Hickson <firstname.lastname@example.org> wrote:
> On Sat, 10 Nov 2012, Boris Zbarsky wrote:
> > On 11/10/12 3:15 PM, Ian Hickson wrote:
> > > > In the current model there is a one-to-many relationship between a
> > > > canvas and canvas contexts. In particular, it's possible to grab
> > > > both a 2d context and a webgl context for the same canvas.
> > >
> > > Not according to the spec, not for a long time. "webgl" and "2d" are
> > > marked as incompatible, so once you get one, you can never get the
> > > other for the same canvas element.
> >
> > Oh, hmm. I thought the spec said to throw away the binding to one
> > context and return a new one with a new backing store... That's
> > certainly what some UAs do last I tested, but maybe they've changed.
>
> Chrome, Firefox, and Opera do what the spec says:
>
> (Note: single-page version currently has different text, it's the
> work-in-progress for the proposal I'm writing.)
>
> > Right now as far as I can tell you need two copies of the bitmap per
> > canvas: one for drawing on, and one for painting to the screen, so
> > that you don't get any flicker.
>
> I don't believe this is true. For example, if your canvas is bigger
> than the viewport, then you only need one copy of the entire backing
> store; the part that you have to put on the screen can be just the
> smaller bit that fits inside the viewport.
>
> Look at this test (warning, has a loop with 10000 getImageData/drawImage
> calls, so can take a long time to render):
>
> Between the time that the timeout fires and the time the timeout ends,
> which should hopefully be noticeable unless your computer is quite new
> (in which case just up the iteration count, thanks!), you can see the
> canvas on the screen is still completely transparent, but the bitmap of
> the canvas/context itself (they're shared in this case) is clearly not
> showing the same thing, since after the loop has finished, you see two
> squares, despite only one having been drawn -- the second is a copy from
> this "off-screen" buffer to itself.
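The incompatibility rule being discussed can be sketched as a minimal model in plain JS (the function and its internals are invented for illustration, not any browser's real code):

```javascript
// Minimal model of the spec's context-compatibility rule: "2d" and
// "webgl" are in different compatibility groups, so once a canvas is
// bound to one group, getContext() for the other returns null rather
// than replacing the context and its backing store.
function makeCanvas() {
  var boundGroup = null; // compatibility group the canvas is bound to
  var contexts = {};     // cached context objects, one per type
  return {
    getContext: function (type) {
      var group = (type === '2d') ? '2d' : 'webgl'; // simplified grouping
      if (boundGroup !== null && boundGroup !== group) {
        return null; // incompatible with the already-bound group
      }
      boundGroup = group;
      if (!contexts[type]) {
        contexts[type] = { type: type };
      }
      return contexts[type]; // repeated calls return the same object
    }
  };
}
```

With this model, getContext('2d') succeeds, and a subsequent getContext('webgl') on the same canvas returns null -- the "once you get one, you can never get the other" behavior quoted above.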
What we actually do (in Chrome) is slightly more nuanced. In general, we
use a single buffer for 2d canvas: we accumulate draw calls as they are
made but defer rasterization until as late as possible. In a case like
this one, since the script calls getImageData(), we are forced to
rasterize to an offscreen buffer while the script is still running; when
that happens, we simply avoid compositing new frames until the script
yields. This optimizes for memory use at the cost of performance for
pages that call getImageData(), but getImageData() is horribly slow in
any GPU-backed implementation anyway. We may double-buffer or N-buffer
canvas in the future, but it isn't strictly necessary for correctness,
or for performance in the general case.
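A toy model of that deferral (names and structure are mine, not Chrome's actual code): draw calls are recorded against a single buffer, and only a getImageData()-style readback forces the recorded commands to be replayed into pixels.

```javascript
// Toy model of deferred rasterization, invented for illustration.
// Draw calls are recorded; pixels are only produced when a readback
// forces the pending commands to be replayed.
function DeferredCanvas() {
  this.pending = [];       // recorded draw commands, not yet rasterized
  this.pixels = [];        // the single rasterized backing buffer
  this.rasterizations = 0; // how many times we were forced to rasterize
}
DeferredCanvas.prototype.fillRect = function (rect) {
  this.pending.push(rect); // record the call; no rasterization yet
};
DeferredCanvas.prototype.rasterize = function () {
  if (this.pending.length === 0) return;
  this.pixels = this.pixels.concat(this.pending); // "replay" the commands
  this.pending = [];
  this.rasterizations++;
};
DeferredCanvas.prototype.getImageData = function () {
  this.rasterize(); // readback forces rasterization mid-script
  return this.pixels.slice();
};
```

In this model, ten fillRect() calls followed by one getImageData() cost a single rasterization, while interleaving a readback with every draw call -- as in the 10000-iteration test quoted above -- forces one rasterization per iteration, which is part of why that pattern is so slow.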
For WebGL, we have multiple buffers depending on the context creation
attributes and system capabilities. The default on most systems is
antialias: true and preserveDrawingBuffer: false, in which case we
render into a multisampled renderbuffer and alternate resolving it into
two textures -- a total of 3 buffers, with memory use slightly higher
than that of 3 plain textures (the multisampled renderbuffer is larger).
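The accounting for that default configuration can be sketched as follows; the function name and shape are mine, and since counts for non-default configurations vary by system, the sketch only covers the default case described above:

```javascript
// Sketch of color-buffer accounting for Chrome's default WebGL
// configuration (antialias: true, preserveDrawingBuffer: false): one
// multisampled renderbuffer plus two resolve textures that are
// alternated between. Other configurations depend on system
// capabilities and are not modeled here.
function defaultWebGLColorBuffers(attrs) {
  var antialias = attrs.antialias !== false;           // default: true
  var preserve = attrs.preserveDrawingBuffer === true; // default: false
  if (antialias && !preserve) {
    return {
      multisampledRenderbuffers: 1, // rendered into directly
      resolveTextures: 2,           // resolved into alternately
      total: 3
    };
  }
  return null; // non-default cases vary by system; not modeled
}
```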
I think it makes a lot of sense to make it easy for a single WebGL context to drive multiple canvas buffers since WebGL context state is so heavy compared to a 2d context.