
Re: [Public WebGL] How to set a canvas backing store to display units?

On Wed, Jun 13, 2012 at 6:26 PM, Glenn Maynard <glenn@zewt.org> wrote:
On Wed, Jun 13, 2012 at 6:19 PM, Gregg Tavares (社用) <gman@google.com> wrote:
Really? There are plenty of examples of breaking changes to web APIs, or deprecated features that pages were using and that have since been removed from browsers.

What breaking changes have been made that intentionally broke almost every user of an API?  (Excluding those made for security reasons; that's the only thing that tends to trump web compatibility.)

The spec does mandate a specific usage. People aren't following it. Both of those are true. Whether they need to start following it, or the spec needs to change, is up for debate.

2.3 says normatively:

> Upon creation of the WebGL context, the viewport is initialized to a rectangle with origin at (0, 0) and width and height equal to (canvas.width, canvas.height).

This tells me that the viewport (window coordinates) is in the same units as the canvas: CSS pixels.

This is a bug in the spec. Both this passage and the example were written before drawingBufferWidth and drawingBufferHeight were added to the spec to deal with the MAX_TEXTURE_SIZE limit. We forgot to update the spec to say that the viewport is initialized as if by gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);
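To make that concrete, here is a sketch of the intended initialization. The stub "gl" object is hypothetical, standing in for a real WebGL context whose backing store was clamped below canvas.width/height:

```javascript
// Initialize the viewport from the actual drawing buffer size, not from
// canvas.width/height, since the browser may have allocated a smaller
// backing store than the canvas dimensions requested.
function initViewport(gl) {
  gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);
}

// Hypothetical context: a 4000x4000 canvas clamped to a 2048x2048 buffer.
const calls = [];
const gl = {
  drawingBufferWidth: 2048,
  drawingBufferHeight: 2048,
  viewport: (...args) => calls.push(args),
};
initViewport(gl);
console.log(calls[0]); // [0, 0, 2048, 2048]
```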

Blending operations are affected. A 320x200 texture blended into a 640x400 backbuffer will produce different results than a 320x200 texture blended into a 320x200 backbuffer. WebGL is not just a visual API, so breaking this contract on results is not acceptable, IMO.

Blending operations are not affected as far as WebGL is concerned, e.g., in how they affect the drawing buffer.

And how would you define this mapping? There's no guarantee the mapping between CSS pixels and device pixels is an integer ratio, or even the same aspect ratio. This would effectively make copyTexImage2D a vastly unreliable function, giving all kinds of unexpected results.

It's nontrivial, but I think you're grossly exaggerating the difficulty.  It's just an image resampling algorithm.

WebGL has always rendered device pixels to its backbuffer. That's a separate issue from how the result gets displayed through compositing and CSS transforms. The idea is that the developer has 100% control of the resolution of the backing buffer and everything else in WebGL; after that, CSS takes the backing buffer, at the resolution the developer chose, and composites it however it wants.

I don't know where this is coming from.  WebGL explicitly allows the browser to use a smaller backing store than the canvas element; by design the developer *does not have* 100% control over the resolution of the backing store.

There's way more affected than just those functions. For example, gl_PointSize is set in device units. It's set inside a vertex shader by math provided by the user; there's no easy way to insert a multiply by the CSS-to-device ratio to get the points to the correct size.
How about all the samples at http://glsl.heroku.com? These all use gl_FragCoord, a value provided by the GPU in device pixels. They then usually divide it by a user-supplied "resolution" uniform, which is also expected to be in device pixels, so that gl_FragCoord.xy / resolution yields a value from 0.0 to 1.0 across the backbuffer.

I wouldn't expect these to cause major problems,

What do you mean by "won't cause major problems"? If the canvas is specified in CSS units but the browser chooses the backing store in device units, then every sample on http://glsl.heroku.com breaks on an HD-DPI system, because gl_FragCoord will be in device units while "resolution" will be in CSS units (since it's currently set from canvas.width and canvas.height).
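The mismatch can be sketched in plain JS (the numbers are hypothetical; this just stands in for the gl_FragCoord.xy / resolution math those shaders do):

```javascript
// A gl_FragCoord-style device-pixel coordinate divided by a "resolution"
// uniform, as the glsl.heroku.com samples do.
function normalized(fragCoordX, resolutionX) {
  return fragCoordX / resolutionX;
}

const cssWidth = 640;                // canvas.width, in CSS pixels
const dpr = 2;                       // devicePixelRatio on an HD-DPI display
const deviceWidth = cssWidth * dpr;  // backing store width in device pixels

// Rightmost fragment when "resolution" is set from canvas.width (CSS pixels):
console.log(normalized(deviceWidth, cssWidth));    // 2 — runs 0..2, shader breaks

// Rightmost fragment when "resolution" matches the real buffer size:
console.log(normalized(deviceWidth, deviceWidth)); // 1 — runs 0..1 as intended
```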

Even if you got every sample out there to change to specifying resolution using drawingBufferWidth and drawingBufferHeight (something you pointed out no one is doing), there'd still be issues.


    void main( void ) {
        float r = mod(gl_FragCoord.x / 20.0, 2.0) < 1.0 ? 0.0 : 1.0;
        float g = mod(gl_FragCoord.y / 20.0, 2.0) < 1.0 ? 0.0 : 1.0;
        gl_FragColor = vec4(r, g, 0, 1);
    }

This shader generates a 20x20 plaid pattern. It has no input to fix. Having it suddenly render at 1/4 size on an HD-DPI display doesn't seem acceptable.
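The shrinkage is simple to quantify (a plain-JS sketch; the dpr values are hypothetical):

```javascript
// The shader's mod(gl_FragCoord.x / 20.0, 2.0) pattern produces stripes
// 20 buffer pixels wide. If the buffer is allocated in device pixels,
// those 20 pixels cover only 20 / devicePixelRatio CSS pixels on screen.
function stripeCssWidth(stripeBufferPixels, dpr) {
  return stripeBufferPixels / dpr;
}

console.log(stripeCssWidth(20, 1)); // 20 — pattern as authored
console.log(stripeCssWidth(20, 2)); // 10 — half as wide per axis, 1/4 the area
```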

Similarly, having every particle in three.js suddenly draw at 1/2 size also doesn't seem acceptable.

Or having all the lines in MapsGL suddenly become 1/2 as thick.
but like I said, there's always the fallback of making high-resolution backing stores opt-in if it causes too many problems. 

This seems like the only viable solution, but you don't need an "opt in". WebGL already supports this today: you set the backbuffer to the size you want, like you do now, and let CSS scale it, also like you do now. There are plenty of samples that already set the size of the backbuffer to something different from the size at which it's displayed.

If you want it 1:1 with an HD-DPI display you set

     canvas.width = desiredWidth * window.devicePixelRatio
     canvas.height = desiredHeight * window.devicePixelRatio

and everything just magically works.

That seems the far saner way to go.
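Spelled out slightly more fully as a sketch (the canvas object is a stand-in; the style lines assume the page also wants the element's CSS layout size held fixed, which is an assumption, not part of the recipe above):

```javascript
// Size the backing store in device pixels while keeping the element's
// CSS layout size fixed. "dpr" would normally be window.devicePixelRatio.
function sizeForHiDpi(canvas, desiredWidth, desiredHeight, dpr) {
  canvas.width = desiredWidth * dpr;          // backing store, device pixels
  canvas.height = desiredHeight * dpr;
  canvas.style.width = desiredWidth + 'px';   // layout size, CSS pixels
  canvas.style.height = desiredHeight + 'px';
}

const canvas = { style: {} };  // stand-in for a real <canvas> element
sizeForHiDpi(canvas, 300, 150, 2);
console.log(canvas.width, canvas.height); // 600 300
```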

I definitely *don't* think things like GLSL variables (eg. gl_PointSize) should be in CSS units; they should stay as they are, in backing store pixels.

On Wed, Jun 13, 2012 at 8:16 PM, Gregg Tavares (社用) <gman@google.com> wrote:
It's already broken today. Find a GPU with a 2048 or 4096 pixel MAX_TEXTURE_SIZE limit. Attach a second or third monitor. Stretch the window across the monitors until its width is greater than the limit. See the bug.

With the approach I proposed, this code will work fine. 
This is how almost every WebGL app today is written. Trying to get every app to change this will only result in fragmentation, with *both* conventions being common, which is far worse.

canvas.width and canvas.height have to produce a backbuffer with that number of pixels.

If you ask for canvas.width = 10000, canvas.height = 10000, it's not *possible* to make a backbuffer at that resolution.
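The resulting clamp is what drawingBufferWidth/drawingBufferHeight exist to report (an illustrative sketch; the specific limit value is hypothetical):

```javascript
// A requested canvas dimension larger than what the GPU can allocate
// yields a smaller drawing buffer; the app must read the real size back.
function drawingBufferSize(requested, maxSize) {
  return Math.min(requested, maxSize);
}

console.log(drawingBufferSize(10000, 4096)); // 4096 — clamped
console.log(drawingBufferSize(2000, 4096));  // 2000 — request honored
```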

Glenn Maynard