On Dec 23, 2009, at 11:20 AM, Gregg Tavares wrote:
Let's step back a moment and correct some misconceptions. First, WebGL doesn't render "to a window". It generates an image which is composited with the page. It might be possible, in some cases, for an implementation to render directly to the page. But putting definitions in the spec that would require rendering directly to the window would make many implementations slow or impossible.

Second, WebGL doesn't render to an arbitrary "rectangle of pixels". It renders 3D primitives to a buffer that is specifically designed for 3D rendering. It's conceivable that this buffer could be a generic array of pixels in CPU memory. But such an implementation would be unusably slow. It is really only practical for the buffer to be specially allocated and managed, usually in GPU memory. There are some platforms which will allow some 2D operations to be interleaved on that buffer using a 2D API. But many platforms don't, and those that do almost always incur a performance penalty.
But the performance penalty isn't the worst part. Because some platforms can't do that interleaving, they would need to render 2D to a separate buffer and then the result would have to be composited with the 3D buffer. The spec would have to define the rules for that compositing very clearly. But HTML already defines those compositing rules. Why should we add a requirement to the spec that is more complicated to implement and requires us to define rules that already exist in HTML? If we simply state that mixing 2D and 3D canvas APIs requires you to use separate canvas elements, we keep WebGL implementations simpler and let HTML do the compositing job it's already doing.
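To make the separate-elements approach concrete, here is a sketch of what letting HTML do the compositing looks like in practice: a transparent 2D canvas stacked over a WebGL canvas, each rendered with its own API. The element ids and positioning are illustrative assumptions, not anything from the spec.

```javascript
// Two overlapping canvas elements; the browser's normal HTML/CSS
// compositing combines them — the spec doesn't have to define it.
// (Hypothetical ids "gl-layer" and "hud-layer"; both canvases are
// assumed to be absolutely positioned at the same spot on the page.)
const glCanvas = document.getElementById('gl-layer');
const hudCanvas = document.getElementById('hud-layer');

const gl = glCanvas.getContext('webgl');   // 3D rendering goes here
const ctx = hudCanvas.getContext('2d');    // 2D overlay goes here

// Draw the 3D scene into its own buffer...
gl.clearColor(0.0, 0.0, 0.3, 1.0);
gl.clear(gl.COLOR_BUFFER_BIT);

// ...and the 2D overlay into its own; the page composites the two
// using the rules HTML already defines for stacked elements.
ctx.clearRect(0, 0, hudCanvas.width, hudCanvas.height);
ctx.fillStyle = 'white';
ctx.fillText('Score: 9000', 10, 20);
```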
And if an author needs more control over the compositing operation, we already have the ability to use a separately rendered 2D canvas as a texture, so the author can use any WebGL shaders or operations to add those 2D pixels to the 3D scene.
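The canvas-as-texture path above can be sketched as follows — draw with the 2D API into an offscreen canvas, then upload it with texImage2D, which accepts a canvas element directly as the pixel source. This assumes an existing WebGL context named `gl`; the sizes and drawing calls are just illustrative.

```javascript
// Render 2D content with the canvas 2D API...
const srcCanvas = document.createElement('canvas');
srcCanvas.width = srcCanvas.height = 256;
const ctx = srcCanvas.getContext('2d');
ctx.fillStyle = 'red';
ctx.fillRect(32, 32, 192, 192);

// ...then upload it as a WebGL texture, where any shader can decide
// how those 2D pixels are combined into the 3D scene.
// (Assumes an existing WebGL context `gl`.)
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
// texImage2D accepts a canvas element directly as the pixel source.
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, srcCanvas);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
// The texture can now be sampled when drawing 3D primitives.
```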
It seems clear how Canvas was created. The WebKit team wanted a way to draw simple 2D shapes into a pixel buffer and composite the result with the HTML page. They chose a single function call (getContext) to establish the connection between the API and the Canvas element because it was the simplest approach. Having a 'type' attribute would have been no good, because what happens if you change it programmatically? Keeping it all in an imperative API also fit better with the rest of the programmatic model.
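A minimal sketch of what that imperative binding implies: the first getContext call fixes which API a canvas speaks, so there is no 'type' attribute to mutate out from under a running program. (The "already bound" behavior shown here reflects how getContext is specified today; the exact context name strings varied in early implementations.)

```javascript
// The rendering API is chosen by the imperative getContext() call,
// not by a markup attribute, so each canvas is bound to exactly one API.
const canvas = document.createElement('canvas');
const ctx2d = canvas.getContext('2d');   // binds this canvas to the 2D API
const gl = canvas.getContext('webgl');   // a different type now yields null —
                                         // the canvas is already a 2D canvas
```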