
Re: [Public WebGL] How to set a canvas backing store to display units?

On Wed, Jun 13, 2012 at 10:46 PM, Mark Callow <callow_mark@hicorp.co.jp> wrote:

On 14/06/2012 12:08, Gregg Tavares wrote:



Example:

  --fragment-shader-
  void main( void ) {
    float r = mod(gl_FragCoord.x / 20.0, 2.0) < 1.0 ? 0.0 : 1.0;
    float g = mod(gl_FragCoord.y / 20.0, 2.0) < 1.0 ? 0.0 : 1.0;
    gl_FragColor = vec4(r, g, 0, 1);
  }

This shader generates a 20x20-pixel plaid pattern. It has no inputs to adjust. Having it suddenly drop to 1/4 size (half in each dimension) on a HiDPI display doesn't seem acceptable.

Similarly, having every particle in three.js suddenly draw at 1/2 size also doesn't seem acceptable.

Or having all the lines in MapsGL suddenly become 1/2 as thick.
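For concreteness, here is a minimal sketch of the HiDPI backing-store setup in question, assuming a canvas laid out with CSS and a display where window.devicePixelRatio is 2:

  // Sizing the backing store in device pixels: on a devicePixelRatio === 2
  // display the drawing buffer has twice the pixels per CSS pixel, so
  // anything drawn at a fixed pixel size (a 20-pixel plaid cell, a 2-pixel
  // line, a point sprite) covers half as many CSS pixels as before.
  var canvas = document.querySelector("canvas");
  var dpr = window.devicePixelRatio || 1;
  canvas.width  = canvas.clientWidth  * dpr;   // backing store, device pixels
  canvas.height = canvas.clientHeight * dpr;
  var gl = canvas.getContext("webgl");
  gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);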

If the author's intention is that the pattern always approximates some particular size in human terms, then the shader above is buggy. Given the wide range of screen resolutions out there (in the proper sense of resolution, i.e. pixel density), a pattern like that is obviously going to render at many different physical sizes. Perhaps the author intends it to vary in actual size.

No, it wouldn't; this has nothing to do with display size. It has to do with rendered pixels. No OpenGL API asks for a backbuffer of width by height pixels and is then given some other size. All OpenGL programs either explicitly request a size or leave it up to the OS and then query it. These units are always device pixels. It can work no other way.
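In WebGL terms, that request-then-query pattern looks roughly like this (a sketch; drawingBufferWidth/Height are the actual WebGL API names):

  // Request a backing store size, then query what was actually allocated.
  // The implementation may clamp the request (e.g. for hardware limits),
  // which is why drawingBufferWidth/Height exist; both the request and
  // the reply are in device pixels.
  canvas.width = 2048;
  canvas.height = 2048;
  var width  = gl.drawingBufferWidth;    // what the implementation gave us
  var height = gl.drawingBufferHeight;
  gl.viewport(0, 0, width, height);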

It's perfectly reasonable to use the above shader to draw a grid, then use other, more typical shaders to draw things in that grid. If the units are not device pixels everywhere, nothing in GL would ever work. You could not write an image editor, do GPGPU math, or do any kind of image processing if you can't count on doing math in device pixels. You couldn't line up a point sprite by setting its width, or draw a map with lines of your desired width, if you aren't using device pixels everywhere.
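To illustrate the device-pixel math at stake, here is a sketch of a typical image-processing pass (the names u_onePixel, program, etc. are made up for the example):

  // A 3-tap horizontal blur: the shader steps exactly one texel left and
  // right of each fragment. The 1/width offset is only correct if width
  // is the true backing-store width in device pixels; a silently rescaled
  // buffer would make every such filter sample the wrong pixels.
  var blurFS =
    "precision mediump float;\n" +
    "uniform sampler2D u_image;\n" +
    "uniform vec2 u_onePixel;  // 1.0 / buffer size\n" +
    "varying vec2 v_texCoord;\n" +
    "void main() {\n" +
    "  gl_FragColor = (texture2D(u_image, v_texCoord - vec2(u_onePixel.x, 0.0)) +\n" +
    "                  texture2D(u_image, v_texCoord) +\n" +
    "                  texture2D(u_image, v_texCoord + vec2(u_onePixel.x, 0.0))) / 3.0;\n" +
    "}";
  // At draw time, derive the texel size from the queried device-pixel size:
  var onePixelLoc = gl.getUniformLocation(program, "u_onePixel");
  gl.uniform2f(onePixelLoc, 1 / gl.drawingBufferWidth, 1 / gl.drawingBufferHeight);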


The shader above would have exactly the same problem in native GL going from a low to a high-resolution display. In that case everything is in the device pixels you're arguing for.

It's always been true that if you want something to remain at a visually consistent size across different resolutions, you need to pay attention to the resolution in all your computations.
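For instance, one way to pay attention to the resolution here (a sketch, assuming the plaid shader is changed to take its cell size from a hypothetical uniform u_cellSize instead of hard-coding 20.0):

  // Hypothetical resolution-aware variant: the plaid shader reads its
  // cell size from a uniform, e.g.
  //   uniform float u_cellSize;
  //   float r = mod(gl_FragCoord.x / u_cellSize, 2.0) < 1.0 ? 0.0 : 1.0;
  // JavaScript then scales the desired CSS-pixel size by the display's
  // devicePixelRatio so the pattern keeps the same apparent size.
  var cellSizeLoc = gl.getUniformLocation(program, "u_cellSize");
  var cssCellSize = 20.0;                         // desired size in CSS pixels
  var dpr = window.devicePixelRatio || 1;
  gl.uniform1f(cellSizeLoc, cssCellSize * dpr);   // size in device pixels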

I like Glen's proposal.

Regards

    -Mark