
Re: [Public WebGL] Buffer size and viewport



Apologies in advance for the pedantic reply.

On Mon, Jun 7, 2010 at 8:14 AM, Alan Chaney <alan@mechnicality.com> wrote:
Desktop GL programming frequently requires that the user set the window size as part of the game/application setup. This means that normally the viewport can be set to (0, 0, displaybufferwidth, displaybufferheight). However, in a WebGL application it is likely to be very common that the window size will change due to user input.

The default with WebGL is to set the buffer size to that of the canvas element, and the viewport to match. This means that if the window is resized to dimensions greater than the original canvas size, the display buffer must be discarded and a new buffer initialized - this takes time.
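
(For concreteness, the default arrangement described above looks roughly like this; the element id and the 'experimental-webgl' context name are just illustrative:)

    var canvas = document.getElementById('c');
    var gl = canvas.getContext('experimental-webgl');
    // The drawing buffer takes its size from the canvas element's
    // width/height attributes, and the viewport is set to cover it.
    gl.viewport(0, 0, canvas.width, canvas.height);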

How frequently will this occur in practice during a user session? Is it unacceptable for your application to pause briefly while responding to the resize (e.g. because it introduces latency issues)?

Are you optimizing prematurely, or do you have performance data - and if so, can you share it?

Note that the underlying canvas will have a fixed buffer size as well; it may be stretched to match the window size via CSS, but changing the buffer's pixel dimensions dynamically will require script.
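
(Roughly, the two cases look like this - a sketch assuming a full-window canvas named 'canvas':)

    // Stretching via CSS only: the drawing buffer keeps its original pixel
    // dimensions and is scaled when composited; nothing is reallocated.
    canvas.style.width  = window.innerWidth  + 'px';
    canvas.style.height = window.innerHeight + 'px';

    // Changing the buffer's pixel dimensions requires script: assigning the
    // width/height attributes reallocates the drawing buffer at the new size.
    canvas.width  = window.innerWidth;
    canvas.height = window.innerHeight;
    gl.viewport(0, 0, canvas.width, canvas.height);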

One option that I can see is to make the display buffer considerably bigger than the canvas element - perhaps by doing some calculation based upon my UI layout and the underlying screen size, and setting this value when creating the context.
Then, as the canvas is resized, I simply set the viewport size to match the size of the canvas element until, of course, it exceeds the underlying buffer size.
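
(If I follow, the resize step would look something like the sketch below. The function name is just illustrative, and it assumes the drawing buffer has already been allocated at its maximum size and that the context reports the size it actually got - e.g. via the drawingBufferWidth/Height attributes in later drafts of the spec:)

    function setViewportForCanvas(gl, canvas) {
      // Clamp the viewport to whatever was actually allocated.
      var w = Math.min(canvas.clientWidth,  gl.drawingBufferWidth);
      var h = Math.min(canvas.clientHeight, gl.drawingBufferHeight);
      gl.viewport(0, 0, w, h);
    }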

Multiple monitors will make this tricky - the value reported by window.screen.width/height will change if I move my window, and is inadequate if I stretch my window to span two monitors.
 
Does anyone have a feel for the relationship between viewport size, buffer size, and performance? In other words, if I allocate a larger buffer than I actually display in the viewport, is this likely to cause a significant performance issue?

Despite the skepticism, I'm interested in the answers as well. My gut feeling is that allocating the extra space would not be worth it, but I'd love to hear from implementers whether it would be a performance problem.

On a related note: games often run full-screen at a resolution lower than the "native" resolution of the device/monitor (and/or what the OS is normally set to) to find the sweet spot between visual fidelity and frame rate.

With WebGL, the JS engine will be a limiting factor for now at least - more so than the pixel pipeline. I would naively expect that going full-screen (possibly via an HTML5-or-later API, as discussed here: http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2010-January/024872.html) at "native" resolution would not incur a frame rate hit. I'm curious whether this has been explored, and/or whether approaches have been discussed. Does anyone have data or pointers to previous discussions?
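
(As a sketch of what I mean by rendering at reduced resolution - the scale factor and the assumption of a full-screen window are purely illustrative:)

    // Allocate the drawing buffer at a fraction of the screen size and let
    // CSS stretch the canvas to fill the window, trading sharpness for
    // fill-rate and fragment work.
    var scale = 0.5;                                   // illustrative quality knob
    canvas.width  = Math.floor(screen.width  * scale);
    canvas.height = Math.floor(screen.height * scale);
    canvas.style.width  = '100%';
    canvas.style.height = '100%';
    gl.viewport(0, 0, canvas.width, canvas.height);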

It's an issue my company runs into even with desktop GL; we have extremely high-poly user-generated content but typically run in a window that users maximize, rather than full-screen at reduced resolution, which induces a significant frame rate hit on lower-end (CPU+GPU) hardware.

Joshua