
Re: [Public WebGL] How to set a canvas backing store to display units?

On Fri, Jun 15, 2012 at 2:51 PM, Florian Bösch <pyalot@gmail.com> wrote:
On Fri, Jun 15, 2012 at 11:38 PM, James Robinson <jamesr@google.com> wrote:
Canvas 2d is (mostly) a vector API, so you can write canvas 2d code that does not care how many pixels are in the backing store and it will Just Work.

That is definitely not true for OpenGL or WebGL - in general it is not possible to write correct code that does not care how many pixels are actually in the backing store. I think this is the fundamental difference between 2d canvas and WebGL and not something we can gloss over with an option or attribute.
I think what Glenn is trying to ensure with that attribute is that 99% of WebGL samples don't break on high-DPI devices because they're not using drawingBufferWidth/Height, while making it possible to "opt in" to a high-DPI device's true resolution, right? So that attribute wouldn't "gloss over" the difference; it would retain the semantics that most people have coded against so far, while letting them gradually ease into doing things right.

I must be missing something. That attribute would break many samples.

As it is now, you get a backing store of the size you ask for and things "just work". There are no unknowns; there is nothing you need to be aware of in your app.

With that attribute you'll have no idea whether your app works on HD-DPI displays unless you actually test it on an HD-DPI display. Since we can't fix the device-pixel issues in the fragment shaders, it's up to the developer to try to follow the rules and hope they don't get any wrong. They'll have to remember:

1) Don't use gl_FragCoord when rendering to the backbuffer
2) Don't use gl.LINES because the thickness will be wrong
3) Don't use gl.POINTS because the size will be wrong
4) Know when to use canvas.width and canvas.height and when to use gl.drawingBufferWidth and gl.drawingBufferHeight (see the sketch after this list)
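
To make rule #4 concrete, here's a minimal sketch of the pattern every app would have to follow if the backing store could silently be larger than canvas.width/height. This is my own illustration, not code from any spec or sample; the canvas id is assumed:

    // Rule #4 sketch: size the viewport from the drawing buffer, not from
    // canvas.width/height, since the two may differ on an HD-DPI device.
    var canvas = document.getElementById("c");  // assumed <canvas id="c">
    var gl = canvas.getContext("webgl");

    function draw() {
      // WRONG under an "allowHighRes"-style attribute: canvas.width may be
      // half the real backbuffer width on a 2x display.
      // gl.viewport(0, 0, canvas.width, canvas.height);

      // Right: ask WebGL how big the drawing buffer actually is.
      gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);

      gl.clearColor(0, 0, 0, 1);
      gl.clear(gl.COLOR_BUFFER_BIT);
      // ... draw the scene ...
    }

On a low-res machine the two sizes are identical, so getting this wrong is invisible until the app runs on an HD-DPI display.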

Write an app, forget any of those rules, and the app that worked on your low-res machine will be broken if you set the "allowHighRes" option and your app runs on an HD-DPI machine. I don't see how it's acceptable that you have to test even the simplest program to know whether it works or not.

As for #4: many apps do post-processing for things like blur, depth-of-field, glow, screen-space ambient occlusion, etc. These are the apps that are generally spending the most effort to look beautiful. To do this they render to a texture. If the browser is lying to them about the actual size of the backbuffer, telling them it's half the resolution it really is, then they'll allocate a half-res texture which, in the final step, will be rendered to the 2x backbuffer, totally defeating the purpose of the "allowHighRes" flag. Now of course you might say "well, they should have allocated the texture with drawingBufferWidth and drawingBufferHeight instead of canvas.width and canvas.height", but that's back to my point. If they don't have an HD-DPI display they'll have no way to know their app is broken, because it will work just fine on their low-res machine.
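
For illustration, here's a rough sketch of the allocation step in question (the structure is my own, not taken from any particular app):

    // Allocating the render target for a post-processing pass. If the app
    // uses canvas.width/height here while the real backbuffer is 2x larger,
    // the final blit upscales a half-res image and the extra resolution of
    // "allowHighRes" is wasted.
    var width  = gl.drawingBufferWidth;   // correct: the real backbuffer size
    var height = gl.drawingBufferHeight;  // (not canvas.width / canvas.height)

    var tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                  gl.RGBA, gl.UNSIGNED_BYTE, null);
    // Non-power-of-two sizes need these settings in WebGL 1.
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

    var fbo = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                            gl.TEXTURE_2D, tex, 0);

Swap width/height for canvas.width/canvas.height and nothing visibly breaks on a normal display, which is exactly the problem.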

Under the current spec, problems #1, #2 and #3 disappear. #4 only comes up in extreme and rare cases and can safely be ignored (as it has been up until now, and yet we have thousands of working WebGL apps).

The goal of allowHighRes is to magically make apps high-res, but as we've pointed out, that's impossible given the way OpenGL works. Is there some reason these issues keep getting ignored?

On top of that, there's no reason to implement allowHighRes in the browser when a simple JavaScript wrapper on top of the current API can provide most if not all of the same functionality, if you want it. I suggest the advocates of "allowHighRes" prototype it in JavaScript. If you want, take the source code for the WebGL Inspector and inject your suggested API behavior changes in as a test.
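
As a rough illustration, such a wrapper might look something like this. This is a minimal sketch; the function name and the devicePixelRatio-based sizing are my own assumptions about what "allowHighRes" would do, not a proposed spec:

    // Emulate an "allowHighRes"-style option in plain JavaScript by sizing
    // the backing store in device pixels while keeping the CSS size fixed.
    function getHighResContext(canvas, cssWidth, cssHeight) {
      var ratio = window.devicePixelRatio || 1;

      // Backing store in device pixels, displayed size in CSS pixels.
      canvas.width  = Math.floor(cssWidth  * ratio);
      canvas.height = Math.floor(cssHeight * ratio);
      canvas.style.width  = cssWidth  + "px";
      canvas.style.height = cssHeight + "px";

      return canvas.getContext("webgl") ||
             canvas.getContext("experimental-webgl");
    }

    // Usage: the app opts in explicitly and, having done so, knows it is
    // responsible for using gl.drawingBufferWidth/Height where it matters.
    var gl = getHighResContext(document.getElementById("c"), 640, 480);
    gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);

The point being: an app that opts in through a wrapper like this knows exactly what it signed up for, with no attribute silently changing the backbuffer size behind its back.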