On Thu, Jun 14, 2012 at 8:42 PM, Mark Callow <email@example.com>
On 14/06/2012 17:00, Gregg Tavares (社用)
That's what's wrong with Glenn's suggestion. Many apps
will break and authors won't know until they test on
HD-DPI displays that their code is wrong. This is already
true for all 2600+ examples on glsl.heroku.com and
several more on the shadertoy site, since they all
effectively do this math:

    vec2 texCoord = gl_FragCoord.xy / resolution;

where resolution = canvas.width, canvas.height. That
produces values that go from 0.0 to 1.0 across the canvas.

That code is broken.
With or without your proposed changes it will
fail when the drawing buffer size != canvas size. Currently they can
be unequal either because css pixels != device pixels or because
the implementation could not allocate a drawing buffer the size of
the canvas. With your proposed change, the latter remains a
possibility.
What is generally hiding the brokenness now is that the viewport is
clamped to MAX_VIEWPORT_DIM, which is commonly the limit of the
allocation for very large canvases. This is not always the case. An
implementation may choose to allocate a drawing buffer smaller than
MAX_VIEWPORT_DIM for memory or performance reasons, or it may be
forced to because MAX_RENDERBUFFER_SIZE is less than
MAX_VIEWPORT_DIM.

The code is broken. It needs to be fixed to calculate resolution
from the actual drawing buffer size.
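A minimal sketch of that fix, assuming a WebGL context `gl` (the uniform location name `resolutionLoc` is illustrative, not from the original code):

```javascript
// Sketch: feed the shader the size of the drawing buffer actually
// allocated, not the canvas size, so gl_FragCoord.xy / resolution
// really spans 0.0 to 1.0. `gl` is a WebGL context and
// `resolutionLoc` is an illustrative uniform location.
function setResolution(gl, resolutionLoc) {
  gl.uniform2f(resolutionLoc, gl.drawingBufferWidth, gl.drawingBufferHeight);
}
```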
I'm not sure what your point is. Yes, the code could be written better to deal with exceptional cases, but it will rarely hit those cases. Are you suggesting a change to the spec that would make the code magically work?
The intent of the spec as written is:

1) Give the developer exactly what they ask for. If they ask for 640x480, they get 640x480.
2) Deal with OpenGL implementation limits as best as possible.
The solution chosen was to clamp the drawingBuffer width and height to MAX_VIEWPORT_DIMS or smaller if need be.
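A rough sketch of that clamping behavior (the function and its parameters are illustrative, not actual browser code):

```javascript
// Illustrative sketch of the clamping described above: the drawing
// buffer is limited to the implementation's maximums instead of
// failing outright when the requested canvas is too large.
function clampDrawingBufferSize(requestedWidth, requestedHeight,
                                maxViewportDim, maxRenderbufferSize) {
  const limit = Math.min(maxViewportDim, maxRenderbufferSize);
  return {
    width: Math.min(requestedWidth, limit),
    height: Math.min(requestedHeight, limit),
  };
}
```

An app that cares can compare the clamped result against what it asked for and adjust accordingly.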
This means that for most uses of WebGL, apps will do exactly what the author intended: they ask for a certain size canvas and they get it.
In some exceptional cases (windows stretched across multiple monitors, for example) they may hit the limits of the user's GPU and get incorrect results, but their app won't crash. So, for example, if an app auto-sizes a canvas to the size of the window and the user sizes the window too large, the app won't suddenly die. If the user sizes the window back down, everything should be good and no work is lost. If the author cares, they can fix their code by using drawingBufferWidth and drawingBufferHeight in the appropriate places, or by limiting the size to which they allow the canvas to grow.
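For instance, an auto-sizing app along these lines degrades gracefully when the window exceeds the GPU's limits (a hypothetical sketch; window dimensions are passed in rather than read from a resize event):

```javascript
// Hypothetical sketch of an auto-sizing app that stays correct when
// the window is larger than the GPU can handle: request the window
// size, then render using whatever drawing buffer was actually given.
function resize(canvas, gl, windowWidth, windowHeight) {
  canvas.width = windowWidth;
  canvas.height = windowHeight;
  // drawingBufferWidth/Height may be smaller than what was requested.
  gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);
}
```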
We can see this working today. That's the spec today, and there are thousands of WebGL pages working just fine. Of the small percentage that allow the canvas to stretch and update the backbuffer to match, few if any are doing it 100% correctly, but because the breaking case is exceptional (not many people have multiple monitors, and even those that do often have a MAX_VIEWPORT_DIMS limit larger than the sum of the resolutions of the monitors combined), it's rare to hit this case.