The proposed way to determine support for a stereo default framebuffer is to query the actual context creation parameters. An application that wants to render to WebVR needs to create a WebGL context regardless of whether the stereo attribute is supported, so I don’t see a very strong use case for another mechanism for querying stereo support.
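As a sketch of that flow, assuming the proposed `stereo` context attribute were added to `WebGLContextAttributes` (the attribute name here is the proposal under discussion, not shipping WebGL), the application would request it and then read back what the implementation actually granted:

```javascript
// Sketch: request a stereo default framebuffer and verify what was
// actually granted. The 'stereo' attribute is the proposed extension
// discussed in this thread, not part of current WebGL.
function createStereoContext(canvas) {
  const gl = canvas.getContext('webgl', { stereo: true });
  if (!gl) return null; // context creation failed entirely
  // getContextAttributes() reports the attributes the implementation
  // actually honored, so a false here means stereo was not supported
  // and the app falls back to monoscopic rendering.
  const attrs = gl.getContextAttributes();
  return { gl, stereo: !!(attrs && attrs.stereo) };
}
```

This mirrors how existing attributes like `antialias` are handled today: the request is a hint, and `getContextAttributes()` is the source of truth for what was granted.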
In the current proposal stereo can’t be toggled on and off after context creation, but monoscopic consumers, like the browser page compositor, only display the left buffer. So the application can simply elect not to render to the right buffer at times when only one buffer is being displayed.
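In other words, the "toggle" lives in the application's render loop rather than in the context state. A minimal sketch, where `renderEye` stands in for the application's hypothetical per-eye draw pass:

```javascript
// Sketch: skip the right-eye pass while only the left buffer is being
// presented (e.g. when the page compositor, not a VR display, is
// showing the canvas). renderEye is a hypothetical per-eye draw
// function supplied by the application.
function renderFrame(renderEye, rightBufferDisplayed) {
  const passes = ['left'];
  if (rightBufferDisplayed) passes.push('right');
  for (const eye of passes) renderEye(eye);
  return passes; // which buffers were rendered this frame
}
```

The context stays stereo throughout; the app just spends no GPU time on the buffer nobody is looking at.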
Florian, by rendering multiple views into one framebuffer, do you mean more than 2 views? The extension does include the possibility to render into texture arrays to cover some use cases like this. The default framebuffer only supports 2 buffers, though, because it’s been designed with the current WebVR spec in mind, which also hard-codes 2 buffers.
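For the more-than-2-views case, the texture-array path would look roughly like the following WebGL 2 sketch, rendering N views into layers of a `TEXTURE_2D_ARRAY` and attaching one layer at a time with `framebufferTextureLayer` (the function names here are existing WebGL 2 API; the overall flow is my illustration, not text from the extension):

```javascript
// Sketch: allocate a 2D texture array with one layer per view, plus a
// framebuffer to render into it. Covers the multi-view use case via
// texture arrays; the default framebuffer itself stays at 2 buffers.
function createViewArray(gl2, width, height, numViews) {
  const tex = gl2.createTexture();
  gl2.bindTexture(gl2.TEXTURE_2D_ARRAY, tex);
  gl2.texStorage3D(gl2.TEXTURE_2D_ARRAY, 1, gl2.RGBA8,
                   width, height, numViews);
  const fbo = gl2.createFramebuffer();
  gl2.bindFramebuffer(gl2.FRAMEBUFFER, fbo);
  return { tex, fbo };
}

function drawView(gl2, views, layer, drawScene) {
  // Attach one layer of the array as the color target, then draw.
  gl2.framebufferTextureLayer(gl2.FRAMEBUFFER, gl2.COLOR_ATTACHMENT0,
                              views.tex, 0, layer);
  drawScene(layer); // drawScene is a hypothetical per-view callback
}
```

A multiview extension could then sample or present those layers however it needs, without the default framebuffer growing beyond the 2 buffers WebVR assumes.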
With regard to version numbering, I don’t think there are grounds for requiring a different context id for getContext(). Writing an entirely separate spec document for the version with stereo also seems like it would only complicate things, so I hope we can keep it in the WebGL 1 spec document. Other than that, I’m happy for the version with stereo to be called either 1.0.x or 1.1.
Adding functionality like this means it is no longer WebGL 1.0. We would have to create a WebGL 1.1.
Is there any way for an application to determine that the implementation supports a stereo/multiview default framebuffer without attempting to create a context with one? This question is valid regardless of version numbering.