
Re: [Public WebGL] Some WebGL draft feedback

On Dec 23, 2009, at 11:20 AM, Gregg Tavares wrote:

...3) Switch to a <webgl> tag instead of canvas.

I suspect this suggestion will be unpopular, but it seems to me that if canvas has a model and WebGL is not willing to support that model, then it doesn't belong in canvas.

Canvas DOESN'T have a model. In the 3 years since Opera created their proprietary API have they gone to the W3C to propose defining multiple contexts in the Canvas element? If so, then we have a basis for discussion. If not, we need to do that definition. My strong recommendation is to disallow multiple contexts.

I don't follow the logic here. Which OS doesn't allow someone to use any API they want to draw to a window? Which system doesn't allow you to use any code you want to draw to a rectangle of pixels? Maybe I'm reading something into it, but it seems like allowing multiple contexts is the normal thing to do, and disallowing them is only being advocated because it's hard for WebGL, which doesn't seem like it should be the deciding factor. Rather than limit the usefulness of canvas by forbidding multiple contexts because of WebGL, if WebGL can't share then it doesn't belong in canvas.

Let's step back a moment and correct some misconceptions. First, WebGL doesn't render "to a window". It generates an image which is composited to the page. It might be possible, in some cases, for an implementation to render directly to the page. But to have definitions in the spec that would require rendering directly to the window would make many implementations slow or impossible. Second, WebGL doesn't render to an arbitrary "rectangle of pixels". It renders 3D primitives to a buffer that is specifically designed for 3D rendering. It's conceivable that this buffer could be a generic array of pixels in CPU memory. But such an implementation would be unusably slow. It is really only practical for the buffer to be specially allocated and managed, usually in GPU memory. There are some platforms which will allow some 2D operations to be interleaved on that buffer using some 2D API. But many platforms don't, and those that do almost always incur a performance penalty.

But the performance penalty isn't the worst part. Because some platforms can't do that interleaving, they would need to render 2D to a separate buffer and then the result would have to be composited with the 3D buffer. The spec would have to define the rules for that compositing very clearly. But HTML already defines those compositing rules. Why should we add a requirement to the spec that is more complicated to implement and requires us to define rules that already exist in HTML? If we simply state that mixing 2D and 3D canvas APIs requires you to use separate canvas elements, we keep WebGL implementations simpler and let HTML do the compositing job it's already doing.
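To be concrete, the separate-elements approach described above needs nothing beyond ordinary HTML and CSS. A rough sketch (the element ids and sizes are made up for illustration, and the WebGL context name may vary by implementation):

```html
<!-- Two stacked canvases: WebGL scene below, 2D overlay above.
     The browser's normal compositing rules blend the two. -->
<div style="position: relative; width: 640px; height: 480px;">
  <canvas id="scene" width="640" height="480"
          style="position: absolute; left: 0; top: 0;"></canvas>
  <canvas id="hud" width="640" height="480"
          style="position: absolute; left: 0; top: 0;"></canvas>
</div>
<script>
  // 3D on the bottom canvas, 2D HUD on the top one; HTML composites them.
  var gl  = document.getElementById("scene").getContext("webgl");
  var ctx = document.getElementById("hud").getContext("2d");
</script>
```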

And if an author needs more control over the compositing operation, we already have the ability to use a separately rendered 2D canvas as a texture, where an author can use any WebGL shaders or operations to add those 2D pixels to the 3D scene.
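Concretely, that path is a texImage2D upload, since a canvas element is an accepted pixel source. A minimal sketch; the function name is my own, and the LINEAR/CLAMP_TO_EDGE parameters are just the safe choice for canvas sizes that aren't powers of two:

```javascript
// Upload the current contents of a 2D canvas into a WebGL texture.
// Assumes a browser environment where `gl` is a WebGL context and
// `sourceCanvas` is a canvas element that has been drawn to with the 2D API.
function uploadCanvasTexture(gl, sourceCanvas) {
  var tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  // A canvas is a valid source for texImage2D, so no copy through
  // getImageData is needed.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE,
                sourceCanvas);
  // Non-power-of-two textures in WebGL 1 must not use mipmapping or repeat
  // wrapping, so pick filtering and wrap modes that always work.
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  return tex;
}
```

The texture can then be sampled by any shader, giving the author full control over how the 2D pixels are blended into the 3D scene.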

I'd even go so far as to say that if the designers of canvas had wanted to enforce only one context, they would have designed it to make it impossible to specify more than one, such as:

<canvas type="2d" id="foo"/>

var api = document.getElementById("foo").getApi();

Instead, the API clearly wasn't designed to enforce a single context, which suggests that either the creators of canvas chose a poor design or they intended multiple contexts. I happen to assume the latter, because it's actually extremely useful to allow multiple contexts.

It seems clear how Canvas was created. The WebKit team wanted to have a way to draw simple 2D shapes into a pixel buffer and composite the result with the HTML page. They chose a single function call (getContext) to establish the connection between the API and the Canvas element because it was the simplest. Having a 'type' attribute would have been no good because what happens if you change it programmatically? Keeping it all in an imperative API also fit better with the rest of the programmatic model.
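That imperative model is easy to see in use. A tiny sketch; the fallback policy here is my own illustration, not anything the spec mandates, and the "webgl" context name may still be vendor-prefixed in current implementations:

```javascript
// The context type is chosen at call time by the script, not fixed in
// markup, so there is no attribute whose programmatic change could
// contradict an already-obtained context.
function getRenderingApi(canvas) {
  // Try 3D first; fall back to the 2D API if WebGL is unavailable.
  return canvas.getContext("webgl") || canvas.getContext("2d");
}
```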