On Wed, Jun 13, 2012 at 6:19 PM, Gregg Tavares <firstname.lastname@example.org> wrote:
Really? There's plenty of examples of breaking changes to web apis or deprecated features that have now been removed from browsers that pages were using.
What breaking changes have been made that intentionally broke almost every user of an API? (Excluding those made for security reasons; that's the only thing that tends to trump web compatibility.)
The spec does specify a specific usage. People aren't following the spec. Those are both true. Whether or not they need to start following it or the spec needs to change is up for debate.
2.3 says normatively:
> Upon creation of the WebGL context, the viewport is initialized to a rectangle with origin at (0, 0) and width and height equal to (canvas.width, canvas.height).
This tells me that the viewport (window coordinates) is in the same units as the canvas: CSS pixels.
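To make the unit question concrete, here is a minimal sketch (not from the thread) of the common pattern for sizing a WebGL backing store in device pixels while the element's CSS size stays fixed. `devicePixelRatio` is the standard browser global; the function name is invented for illustration.

```javascript
// Compute a backing-store size in device pixels for a canvas whose CSS size
// is cssWidth x cssHeight. canvas.width/height take integers, so round.
function backingStoreSize(cssWidth, cssHeight, devicePixelRatio) {
  return {
    width: Math.round(cssWidth * devicePixelRatio),
    height: Math.round(cssHeight * devicePixelRatio),
  };
}

// Example: a 320x200 CSS-pixel canvas on a 2x display gets a 640x400 backbuffer.
const size = backingStoreSize(320, 200, 2);
// size.width === 640, size.height === 400
```

Whether the spec's viewport language refers to the CSS size or to a device-pixel size like this is exactly the ambiguity under debate.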
Blending operations are affected. A 320x200 texture blended into a 640x400 backbuffer will produce different results than the same texture blended into a 320x200 backbuffer. WebGL is not just a visual API, so breaking this contract on results is not acceptable, IMO.
Blending operations are not affected as far as WebGL is concerned, e.g. in how they affect the drawing buffer.
And how would you define this mapping? There's no guarantee that the mapping between CSS pixels and device pixels is an integer ratio, or even that it's the same in both dimensions. This would effectively make copyTexImage2D a vastly unreliable function, giving all kinds of unexpected results.
It's nontrivial, but I think you're grossly exaggerating the difficulty. It's just an image resampling algorithm.
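For a sense of scale, here is a sketch of the simplest such resample, nearest-neighbor over an RGBA buffer. This is purely illustrative of "it's just an image resampling algorithm"; a browser would presumably use a filtered (e.g. bilinear) resample, and the function name is invented.

```javascript
// Resample an RGBA8 pixel buffer from srcW x srcH to dstW x dstH using
// nearest-neighbor sampling. src is a flat array of 4 bytes per pixel.
function resampleNearest(src, srcW, srcH, dstW, dstH) {
  const dst = new Uint8ClampedArray(dstW * dstH * 4);
  for (let y = 0; y < dstH; y++) {
    // Map the destination row back to the nearest source row.
    const sy = Math.min(srcH - 1, Math.floor((y * srcH) / dstH));
    for (let x = 0; x < dstW; x++) {
      const sx = Math.min(srcW - 1, Math.floor((x * srcW) / dstW));
      const si = (sy * srcW + sx) * 4;
      const di = (y * dstW + x) * 4;
      dst[di] = src[si];         // R
      dst[di + 1] = src[si + 1]; // G
      dst[di + 2] = src[si + 2]; // B
      dst[di + 3] = src[si + 3]; // A
    }
  }
  return dst;
}
```

The hard part being argued about is not the resample itself but where in the pipeline (and with which filter) it would be applied, and what that does to exact readback results.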
WebGL has always rendered device pixels to its backbuffer. That's a separate issue from how they get displayed through compositing and CSS transforms. The idea is that the developer has 100% control over the resolution of the backing buffer and everything else in WebGL; after that, CSS takes that backing buffer, at the resolution the developer chose, and composites it however it wants.
I don't know where this is coming from. WebGL explicitly allows the browser to use a smaller backing store than the canvas element; by design the developer *does not have* 100% control over the resolution of the backing store.
There's way more affected than just those functions. For example, gl_PointSize is set in device units. It's set inside a vertex shader by math provided by the user. There's no easy way to insert a multiply by the CSS-pixel ratio to get the points to be the correct size.
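The gl_PointSize problem can be sketched concretely. If window coordinates were CSS pixels, every shader setting gl_PointSize would need a scale factor threaded through as a uniform. The shader source and uniform names below (`u_cssPointSize`, `u_pixelRatio`) are invented for illustration; the JS helper mirrors the shader math so it can be checked on the CPU.

```javascript
// Illustrative vertex shader: the author would have to multiply their
// CSS-pixel point size by a ratio uniform to land on device pixels.
const vertexShaderSource = `
  attribute vec4 a_position;
  uniform float u_cssPointSize; // size the author reasons about, in CSS pixels
  uniform float u_pixelRatio;   // extra uniform a CSS-pixel mapping would force on authors
  void main() {
    gl_Position = a_position;
    gl_PointSize = u_cssPointSize * u_pixelRatio; // actual size in device pixels
  }
`;

// CPU-side mirror of the point-size math above.
function devicePointSize(cssPointSize, pixelRatio) {
  return cssPointSize * pixelRatio;
}
```

The objection in the thread is that no such uniform exists in shipped content, so the scale can't be inserted automatically.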
How about all the samples at http://glsl.heroku.com? These all use gl_FragCoord, which is a value provided by the GPU in device pixels. They then usually divide it by a user-supplied "resolution", which is also expected to be in device pixels, so that gl_FragCoord.xy / resolution yields a value from 0.0 to 1.0 across the backbuffer.
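That idiom, mirrored in plain JS for checking: both values are in device pixels, so the quotient spans 0..1 across the backbuffer. If gl_FragCoord stayed in device pixels but "resolution" were supplied in CSS pixels on a 2x display, the quotient would run past 1.0 halfway across the buffer. The function name is invented.

```javascript
// Mirror of the common shader expression gl_FragCoord.xy / resolution.
function normalizedCoord(fragCoord, resolution) {
  return {
    x: fragCoord.x / resolution.x,
    y: fragCoord.y / resolution.y,
  };
}

// Both in device pixels: the center of a 640x400 backbuffer maps to (0.5, 0.5).
const center = normalizedCoord({ x: 320, y: 200 }, { x: 640, y: 400 });
```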
I wouldn't expect these to cause major problems, but like I said, there's always the fallback of making high-resolution backing stores opt-in if it causes too many problems. I definitely *don't* think things like GLSL variables (e.g. gl_PointSize) should be in CSS units; they should stay as they are, in backing-store pixels.
On Wed, Jun 13, 2012 at 8:16 PM, Gregg Tavares <email@example.com> wrote:
already broken today. Find a GPU with a 2048 or 4096 pixel MAX_TEXTURE_SIZE limit. Attach a second or third monitor. Stretch the window width across the monitors until its width is greater than the limit. See the bug.
With the approach I proposed, this code will work fine. This is how almost every WebGL app today is written. Trying to get every app to change this is only going to result in fragmentation, with *both* being common, which is far worse.
Setting canvas.width and canvas.height has to produce a backbuffer with that number of pixels.
If you ask for canvas.width = 10000,
canvas.height = 10000, it's not *possible* to make a backbuffer at that