
Re: [Public WebGL] How to set a canvas backing store to display units?

On Sat, Jun 16, 2012 at 8:34 PM, Gregg Tavares (社用) <gman@google.com> wrote:
I must be missing something. That attribute would break many samples.
No. What would break many samples is automatic backing store size selection, where the browser converts CSS pixels to a backing store size on its own.
As it is now you get a backing store the size you ask for and things "just work". There are no unknowns, there is nothing you need to be aware of in your app.
It is my understanding that this is not true: browsers may already give you differently sized backing stores than the ones you specified via canvas.width/canvas.height, right?
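As a hedged illustration of this point (the exact clamping rules are implementation-defined, and the function name below is my own), a browser may cap the backing store at its maximum supported size, which is why gl.drawingBufferWidth can differ from canvas.width:

```javascript
// Illustrative sketch, not a spec: the browser may clamp the requested
// canvas size to an implementation-defined maximum, preserving aspect
// ratio, which is why the actual backing store can be smaller than asked.
function allocatedBackingStore(requestedWidth, requestedHeight, maxSize) {
  var scale = Math.min(1, maxSize / Math.max(requestedWidth, requestedHeight));
  return {
    width: Math.max(1, Math.round(requestedWidth * scale)),
    height: Math.max(1, Math.round(requestedHeight * scale))
  };
}
```

Correct code therefore reads the actual size back from gl.drawingBufferWidth/Height rather than assuming the request was honored.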
1) Don't use gl_FragCoord when rendering to the backbuffer
Using gl_FragCoord is fine if you either do not set "allowHighRes" or derive your conversion factors from drawingBufferWidth/Height.
2) Don't use gl.LINES because the thickness will be wrong
Line thickness is severely limited in any case, so arbitrarily thick lines will fail regardless. Drawing lines will be correct, however, if you either do not set "allowHighRes" or compute your line width as a ratio of drawingBufferWidth/drawingBufferHeight.
3) Don't use gl.POINTS because the size will be wrong
Arbitrarily sized points will fail anyway, as you are limited to 64 pixels at best. However, point size will be correct if you either do not set "allowHighRes" or compute your point size as a ratio of drawingBufferWidth/drawingBufferHeight.
4) Know when to use canvas.width, canvas.height and when to use gl.drawingBufferWidth and gl.drawingBufferHeight
Use canvas.width/height when you're not setting "allowHighRes", use gl.drawingBufferWidth/Height otherwise.
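To make rules #2 and #3 concrete, here is a minimal sketch (the helper name is my own invention) of scaling a size specified in CSS pixels into backing-store pixels via the drawingBufferWidth-to-canvas ratio:

```javascript
// Illustrative helper, not part of any spec: convert a line width or point
// size given in CSS pixels into backing-store pixels, so gl.lineWidth and
// gl_PointSize stay visually consistent when the backing store is high-res.
function toBackingStorePixels(cssSize, drawingBufferWidth, cssWidth) {
  return cssSize * (drawingBufferWidth / cssWidth);
}

// Typical use (gl and canvas assumed to exist):
// gl.lineWidth(toBackingStorePixels(2, gl.drawingBufferWidth, canvas.clientWidth));
```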
Write an app, forget any of those rules, and the app that worked on your low-res machine will be broken once you set the "allowHighRes" option and the app runs on an HD-DPI machine. I don't see how it's acceptable that you have to test even the simplest program to know whether it works or not.
Testing even the simplest apps to see whether they work or not is already the case. However, "allowHighRes" does not expose you to more testing effort if you refuse to set it; conversely, if you do the equivalent of "allowHighRes" manually, say by setting canvas.width = canvas.style.width*2, you are exposed to the same testing risk, so the attribute introduces no additional risk.
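The manual approach mentioned above is usually written in terms of window.devicePixelRatio rather than a hard-coded factor of 2; a minimal sketch (the helper name is illustrative):

```javascript
// Manual "high res" without any new context attribute: size the backing
// store to device pixels while CSS keeps the layout size. The caller passes
// in the ratio (window.devicePixelRatio in a browser); a fallback of 1
// covers environments that don't report one.
function deviceSize(cssWidth, cssHeight, devicePixelRatio) {
  var ratio = devicePixelRatio || 1;
  return {
    width: Math.round(cssWidth * ratio),
    height: Math.round(cssHeight * ratio)
  };
}

// Typical use:
// var size = deviceSize(canvas.clientWidth, canvas.clientHeight, window.devicePixelRatio);
// canvas.width = size.width;
// canvas.height = size.height;
```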
As for #4: Many apps do post processing for things like blur, depth-of-field, glow, screen space ambient occlusion, etc. These are the apps that generally spend the most effort looking beautiful. To do this they render to a texture. If the browser is lying to them about the actual size of the backbuffer, telling them it's 1/2 the res it really is, then they'll allocate a 1/2 res texture, which in the final step will be rendered to the 2x backbuffer, totally defeating the purpose of the "allowHighRes" flag. Now of course you might say "well they should have allocated the texture with drawingBufferWidth and drawingBufferHeight instead of canvas.width and canvas.height", but that's back to my point: if they don't have an HD-DPI display, they'll have no way to know their app is broken, because it will work just fine on their low-res machine.
The intent of "allowHighRes" is not to lie to users: querying the backing store size would be done with drawingBufferWidth/Height.
The goal of allowHighRes is to magically make apps high res, but as we've pointed out, that's impossible given the way OpenGL works. Is there some reason these issues keep getting ignored?
This is not the intent. The intent is to let the browser choose a native-resolution backing store if you allow it, with the default being "no". The only thing you have to remember is to query your backing store size via drawingBufferWidth/Height and perform the appropriate conversion when translating things like mouse coordinates in and out (which is mostly a non-issue, as best-practice code converts to clip space anyway before performing the inverse projection, so most people would not naturally run into it).
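The mouse-coordinate conversion described above can be sketched as follows (a hedged example; the helper is my own, but the math is the standard CSS-pixel-to-clip-space mapping):

```javascript
// Convert a mouse position in CSS pixels (relative to the canvas) into
// clip-space coordinates in [-1, 1]. Because clip space is resolution-
// independent, this works identically whether or not the backing store
// is high-res, which is why best-practice code is unaffected.
function cssToClipSpace(cssX, cssY, cssWidth, cssHeight) {
  return {
    x: (cssX / cssWidth) * 2 - 1,
    y: 1 - (cssY / cssHeight) * 2  // flip Y: DOM y grows down, clip-space y grows up
  };
}
```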
On top of that, there's no reason to implement allowHighRes in the browser when a simple JavaScript wrapper on top of the current API can provide most if not all of that same functionality if you want it. I suggest the advocates of "allowHighRes" prototype it in JavaScript. If you want, take the source code for the WebGL Inspector and inject your suggested API behavior changes as a test.
A JavaScript shunt to do that would rely on native-resolution parameters that are currently unspecified and unstandardized, present only in certain browsers on certain operating systems.