Re: [Public WebGL] Behavior of WebGL canvas when it can't make a backbuffer of the requested size?



On Fri, Nov 5, 2010 at 9:28 AM, Chris Marrin <cmarrin@apple.com> wrote:
>
> On Nov 4, 2010, at 5:13 PM, Kenneth Russell wrote:
>
>>> ...
>>> If I have a 9 monitor setup and I stretch the window across all 9 1280x1024
>>> monitors and my max back buffer is 2048 then I get a 2048x182 backbuffer?
>>> That doesn't seem like the result I want. Most of the time I want the
>>> largest resolution I can get.
>>> If your app needs a 1.0 scale ratio then query the max backbuffer size and
>>> then set the size of the canvas appropriately. Only about 4 of the last 60
>>> 3d apps I've written would have needed this. Most of my apps in WebGL pick a
>>> fixed backbuffer size and let the canvas scale automatically.
>>> It seems like it's better to do the best thing for the majority of apps.
>>> Those few apps that need a 1.0 scale ratio can do what they need to force
>>> it.
>>
>> I agree. The majority of the 3D applications and demos I've written
>> handled resizing the window to arbitrary sizes, and adjusted the
>> projection matrix as necessary.
>
> So you're saying you'd have to rewrite every one of those apps using getDrawingBufferScale() to get the correct results, right?

Actually, no -- for these apps I'd only need to use canvas.clientWidth
and canvas.clientHeight. Non-square pixels wouldn't affect the
behavior of the app.
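
For example (just a sketch; perspective() here stands in for a
glMatrix-style helper, not a real API):

    // The projection depends only on the canvas's CSS size, so a
    // clamped (non-square-pixel) back buffer doesn't distort anything.
    function updateProjection(canvas) {
      var aspect = canvas.clientWidth / canvas.clientHeight;
      return perspective(Math.PI / 4, aspect, 0.1, 100.0); // hypothetical helper
    }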

> I still have not seen a proposal expressed in a clear and concise way. I think we all agree that we can't do this in a way that is completely transparent to the user. Does that mean we want to give the author no automatic behavior? I think giving the author partial help (e.g., "fixing" glViewport params) is worse than giving no help at all. And if we are to do non-square aspect ratio changes to the drawing buffer, then we can't have a getDrawingBufferScale() call.
>
> So I'll make a simple proposal: upon creation of the drawing buffer we automatically resize dimensions that are too large down to the maximum allowable width and height. For instance, a request for a 10,000 x 500 canvas on a machine that has a 2048 pixel dimension limit would be resized to 2048x500. I also propose that we don't change the viewport() call or any other call; we simply provide getCurrentWidth() and getCurrentHeight() calls, or the equivalent.

Agreed on all counts.
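
To make the proposed behavior concrete (rough sketch only; the
getCurrentWidth()/getCurrentHeight() names are the ones proposed
above):

    // Sketch: each requested dimension is clamped independently at
    // creation time; no GL calls (viewport() etc.) are altered.
    // getCurrentWidth()/getCurrentHeight() would then report the
    // clamped values, e.g. 2048x500 for a 10000x500 request.
    function allocateDrawingBuffer(requestedWidth, requestedHeight, maxDim) {
      return {
        width: Math.min(requestedWidth, maxDim),
        height: Math.min(requestedHeight, maxDim)
      };
    }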

> The last question is whether or not there is a call that will tell the author the maximum dimensions of the drawing buffer. There is MAX_VIEWPORT_DIMS which will probably give the right answer. But I don't think there is any guarantee that the window system maximums are reflected in MAX_VIEWPORT_DIMS. So we should either clarify that this value will always give the right answer or create a new call to give the max dimensions.

I don't think this is feasible. In certain low-memory situations I
could imagine that a WebGL implementation might not be able to
allocate a backing store texture of the maximum dimensions, or even
know what the largest allocatable texture is at the moment without
actually trying the allocation, since OpenGL ES doesn't have proxy
textures.
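
(At the WebGL level the only reliable probe I can think of is to
actually attempt the allocation and check for errors -- untested
sketch:)

    // Probe by trying: allocate a texture of the candidate size and
    // see whether the GL reports an error (e.g. OUT_OF_MEMORY).
    function canAllocate(gl, width, height) {
      var tex = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, tex);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                    gl.RGBA, gl.UNSIGNED_BYTE, null);
      var ok = gl.getError() === gl.NO_ERROR;
      gl.deleteTexture(tex);
      return ok;
    }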

Instead, I think the spec should simply state that the dimensions of
the back buffer may be clamped to implementation-dependent maximums.
The getDrawingBufferSize() API may be used to query the allocated size
of the drawing buffer.
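
Usage would look something like this (sketch only; it assumes
getDrawingBufferSize() returns a {width, height} pair, which is not
settled):

    // Render using whatever the implementation actually allocated:
    // viewport from the real drawing buffer, projection from CSS size.
    function drawFrame(gl, canvas) {
      var size = gl.getDrawingBufferSize();
      gl.viewport(0, 0, size.width, size.height);
      var aspect = canvas.clientWidth / canvas.clientHeight;
      // ... build the projection from `aspect` and draw the scene ...
    }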

-Ken

> I think the above is sufficient. Most modern graphics cards have a 2k x 2k limit and it will not be very common that a browser user will size their window that big. When they do, the displayed image will not fill the canvas. But that's not a fatal problem.
>
> -----
> ~Chris
> cmarrin@apple.com
