[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: [Public WebGL] double-buffering and back buffer preservation
- To: Vladimir Vukicevic <email@example.com>
- Subject: Re: [Public WebGL] double-buffering and back buffer preservation
- From: "Gregg Tavares (wrk)" <firstname.lastname@example.org>
- Date: Mon, 15 Nov 2010 14:34:05 -0800
- Cc: public webgl <email@example.com>
- In-reply-to: <2033847348.354199.1289858189572.JavaMail.firstname.lastname@example.org>
- References: <630513332.347257.1289777622156.JavaMail.email@example.com> <2033847348.354199.1289858189572.JavaMail.firstname.lastname@example.org>
- Sender: email@example.com
On Mon, Nov 15, 2010 at 1:56 PM, Vladimir Vukicevic <firstname.lastname@example.org> wrote:
One of the major issues that came up during the last f2f was that the current canvas model is not the most efficient to implement on hardware, especially on mobile hardware. The current model was based on how the 2D canvas works: drawing happens while content JS runs, and as soon as control is returned to the browser, the result is supposed to be presented for display. Content can also read back the current contents of the displayed image (in 2D canvas, via getImageData; in WebGL, via readPixels; and in both via toDataURL).
However, most 3D hardware doesn't really want to work like that -- it's optimized for double-buffering, where you draw a scene to the back buffer and then swap buffers; after a swap, the new back buffer contains garbage. Implementing the current canvas semantics is not a big issue on the desktop, because there is CPU/GPU/bandwidth to spare, but it's a pretty big deal on mobile.
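To illustrate, here is a toy model in plain JS (not real GL -- the names and the null-as-garbage convention are mine): after a swap, the new back buffer's contents are undefined, so a naive readback of the back buffer would see garbage rather than the last frame.

```javascript
// Toy model of double-buffered swap semantics. Each buffer is just an
// array of pixel values; null stands in for "undefined garbage".
function makeSwapChain(size) {
  return {
    front: new Array(size).fill(0), // what the compositor displays
    back: new Array(size).fill(0),  // what draw calls write into
    draw(value) { this.back.fill(value); },
    swap() {
      // Exchange buffers; after a swap the new back buffer's contents
      // are undefined, modeled here as null.
      [this.front, this.back] = [this.back, this.front];
      this.back.fill(null);
    },
  };
}

const chain = makeSwapChain(4);
chain.draw(7);
chain.swap();
console.log(chain.front); // the presented frame: [7, 7, 7, 7]
console.log(chain.back);  // garbage after the swap: [null, null, null, null]
```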
We identified a few different options:
1) Do nothing. Leave things as they are. There would be fairly significant overhead that all apps would pay, even if they never call readPixels or toDataURL.
2) Add explicit double-buffering to WebGL canvases, plus an explicit present() call. This complicates things for developers, because they then have to actually make the present() call, and it can result in higher memory usage, since some implementations would need to keep around both a front and a back buffer where before they could be effectively single-buffered.
3) Add implicit double-buffering to canvas. Follow the same semantics as canvas does currently -- swap happens whenever control is returned to the browser -- but always enforce an uninitialized/cleared back buffer after each swap.
4) Combine #2 and #3 by adding a context attribute that lets the author choose which behavior they want.
5) Like #1 (do nothing), but add a context attribute for the author to indicate whether readback from the window/canvas buffer will ever be done. If readback is ruled out, then simply disallow readPixels/toDataURL when framebuffer 0 is bound (or have them always return all black).
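The visible difference between #1 and #3 can be sketched with another toy model (plain JS; names like endOfEvent are illustrative, not proposed API): under #1, a readback after control returns to the browser still sees the drawn pixels; under #3, the buffer comes back cleared.

```javascript
// Toy contrast of option #1 (contents preserved across the implicit swap)
// vs option #3 (implicit double-buffering with a cleared back buffer).
function makeCanvasModel(preserve) {
  let pixels = new Array(4).fill(0);
  return {
    draw(v) { pixels.fill(v); },
    endOfEvent() {           // control returns to the browser: "swap" happens
      if (!preserve) pixels = new Array(4).fill(0); // #3: back buffer cleared
    },
    readPixels() { return pixels.slice(); },
  };
}

const opt1 = makeCanvasModel(true);
opt1.draw(9);
opt1.endOfEvent();
console.log(opt1.readPixels()); // #1: still [9, 9, 9, 9]

const opt3 = makeCanvasModel(false);
opt3.draw(9);
opt3.endOfEvent();
console.log(opt3.readPixels()); // #3: cleared to [0, 0, 0, 0]
```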
We identified a few different use cases that should be considered. Printing and screenshots were two, along with using a WebGL canvas as a texImage2D source, and using a WebGL canvas as a 2D canvas drawImage source.
Note that even with implicit double-buffering, it's possible to use an FBO to build up a scene across many events and only draw it to the drawbuffer once (just as you can do today, with #1). Adding context attributes complicates the implementations, potentially significantly, so I'd like to avoid #4 and #5.
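That FBO pattern can be sketched like so (again a toy model in plain JS rather than GL calls; eventHandler and present are made-up names): the offscreen buffer survives swaps, so drawing accumulates there and only one copy to the volatile drawbuffer is needed per frame.

```javascript
// Toy model of the FBO approach: build the scene in an offscreen buffer
// across several events, then copy it to the (volatile) drawbuffer once.
const fbo = new Array(4).fill(0);         // offscreen render target, never swapped
let drawbuffer = new Array(4).fill(null); // undefined after each swap

function eventHandler(i, value) { fbo[i] = value; } // incremental drawing
function present() { drawbuffer = fbo.slice(); }    // single copy at the end

eventHandler(0, 1);
eventHandler(1, 2); // ...several events later...
present();
console.log(drawbuffer); // [1, 2, 0, 0] -- whole scene hits the drawbuffer once
```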
I'd lean towards #3, but what are the list's thoughts on this?
I'm fine with #3, with the caveats I mentioned before: namely, that toDataURL, readPixels, drawImage, and texImage2D(canvas3d, ...) all work as expected -- that reading the canvas through any of those calls gives you the contents of what you see displayed, until the first draw call after a swap. I mentioned two ways to implement that: either don't issue the clear until the first draw call after a swap, or read from the display buffer after a swap and from the draw buffer after a draw.
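The second strategy could look roughly like this (a toy model in plain JS; all the names are mine, not an actual implementation): reads go to the displayed buffer after a swap, and switch over to the draw buffer once the first draw after the swap has happened.

```javascript
// Toy model: after a swap, readbacks come from the displayed buffer;
// after the first draw following the swap, they come from the draw buffer.
function makeLazyReadModel() {
  let displayBuf = new Array(4).fill(0);
  let drawBuf = new Array(4).fill(null);
  let drawnSinceSwap = false;
  return {
    draw(v) { drawBuf.fill(v); drawnSinceSwap = true; },
    swap() {
      [displayBuf, drawBuf] = [drawBuf, displayBuf];
      drawBuf.fill(null);
      drawnSinceSwap = false;
    },
    readPixels() { return (drawnSinceSwap ? drawBuf : displayBuf).slice(); },
  };
}

const m = makeLazyReadModel();
m.draw(5);
m.swap();
const afterSwap = m.readPixels(); // before any draw: reads what's displayed
console.log(afterSwap);           // [5, 5, 5, 5]
m.draw(8);
const afterDraw = m.readPixels(); // after the first draw: reads the draw buffer
console.log(afterDraw);           // [8, 8, 8, 8]
```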