
Re: [Public WebGL] Some WebGL draft feedback

On Dec 21, 2009, at 11:29 AM, Gregg Tavares wrote:

On Sun, Dec 20, 2009 at 8:50 PM, Mark Callow <callow_mark@hicorp.co.jp> wrote:
We should not do anything to encourage the mixing of 2D & 3D rendering via different APIs. The required synchronization between the APIs will kill performance. People should be encouraged to draw everything in their canvas with WebGL.

I think the issue is rather that IF canvas is supposed to allow multiple contexts, then IF WebGL cares about performance it will need its own tag. It's certainly arguable that if the intent of <canvas> is a "rectangle of pixels with multiple contexts to affect those pixels", and WebGL doesn't want to follow those rules, then it does not belong in the canvas tag.

I'm not sure what "rules" you're referring to. There are currently no stated rules for the relationship between the canvas (the rectangle of pixels being composited with the page) and the context (the set of API calls and state that control the marking of that rectangle of pixels). Part of our discussions with the HTML 5 group is working out these rules. There are two ways to think of that relationship. There's the "all at once" model, where you can interleave calls from different APIs. This is what several people on this list have cautioned against because of synchronization and other issues. But there's also the "one at a time" model where you create a context with one API, render with it, then toss that context and create one with a different API using the same pixels. That model is a lot more tractable.

I think I understand your desire to use multiple APIs on the same pixels, and I think the second model fits that desire. I think it could be well defined. You just have to define what the pixels from one context's API look like when you change to the new API. For instance, when you go from a WebGL context to a 2D context it would be easy to keep the color pixels around. But what happens to the depth buffer? The semantics can be complex, so I think we should save this feature for the future.
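To make the "one at a time" model concrete, here is a purely hypothetical sketch. None of this is current canvas behavior: the ability to call getContext again with a different type, the invalidation of the previous context, and the preservation of color pixels across the switch are all assumptions for illustration.

```javascript
// Hypothetical "one at a time" model -- NOT how canvas works today.
var canvas = document.getElementById("c");

var gl = canvas.getContext("webgl");
// ... render the 3D scene with gl ...

// Toss the WebGL context and switch APIs on the same pixels.
// Assumption: this call invalidates gl and hands the canvas's
// color pixels to the 2D API. The depth buffer has no 2D
// equivalent, so its contents would be lost at this point.
var ctx = canvas.getContext("2d");
ctx.fillText("HUD overlay", 10, 20);
```

The question raised above is exactly what this sketch glosses over: which buffers survive the switch, and in what state, would have to be specified for every pair of context types.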

In the meantime we have a well defined one-to-one relationship between canvas elements and contexts. If you want to combine them you simply render to separate canvas elements and then use texImage2D (or the equivalent in the 2D API) to combine them using the rules of the current API. This requires copying the pixels, but I think that's sufficient for the first release.
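As a sketch of that approach, assuming two canvas elements with hypothetical IDs "canvas3d" and "canvas2d" (and using "webgl" as the context name, with error checking omitted):

```javascript
// Draw something with the 2D API on its own canvas.
var canvas2d = document.getElementById("canvas2d");
var ctx = canvas2d.getContext("2d");
ctx.fillStyle = "red";
ctx.fillRect(0, 0, 64, 64);

// Copy the 2D canvas's pixels into a WebGL texture on the other
// canvas. texImage2D accepts a canvas element directly as its
// source; the pixels are copied at this call.
var gl = document.getElementById("canvas3d").getContext("webgl");
var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, canvas2d);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
// ... draw a textured quad with tex under WebGL's rules ...

// The reverse direction uses the 2D API's drawImage, which also
// accepts a canvas element as its image source:
// ctx.drawImage(document.getElementById("canvas3d"), 0, 0);
```

Either direction goes through an explicit pixel copy, which is what keeps the two contexts' state machines from ever needing to be synchronized.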

I think another useful conversation is how to share the assets (textures, buffers, programs) between separate contexts of the same type. It would be extremely useful to share assets between WebGL contexts. This is easy to do on Mac, and I think there are versions of Windows where it is possible, too. The semantics can be described, but they can be complex, so I think we should save this feature for a future version as well.