
Re: [Public WebGL] Lifetime of WebGL objects in Firefox and Webkit





On Tue, Jun 28, 2011 at 4:30 PM, Glenn Maynard <glenn@zewt.org> wrote:
On Tue, Jun 28, 2011 at 6:30 PM, Gregg Tavares (wrk) <gman@google.com> wrote:
It's hard to imagine a WebGL app that would run all that long if it did anything like this. A 512meg card would fill up on an image viewer after 500 images or fewer.  A video player would run out of memory in probably 5-15 seconds.

The issue isn't just running out of memory outright; it's memory waste.  Firefox regularly takes 1.7 GB of memory for me; it doesn't crash, but it's still using a lot of memory that I'd sooner be using elsewhere.

It's pretty easy to see WebGL apps that would run for a very long time, progressively leaking memory.  For example, applications like Google Maps load tiles as needed.  It can take a fair amount of usage to load enough tiles to take a lot of memory, so if these aren't reclaimed it'll work for most people, especially if most users use the application briefly and close it--but it'll waste memory, break on systems with low memory, and break eventually for people who keep tabs open for a long time.
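For concreteness, here's a minimal sketch of that tile pattern (the cache and helper names are purely illustrative, not taken from any real app):

    // Hypothetical tile cache; assumes "gl" is a WebGLRenderingContext.
    var tileCache = {};

    function showTile(gl, key, image) {
      var tex = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, tex);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
      tileCache[key] = tex;
    }

    function evictTile(gl, key) {
      // "Sloppy" version: drop the JS reference and hope GC reclaims the texture.
      // The careful version would call gl.deleteTexture(tileCache[key]) first, so
      // the GPU storage is released deterministically rather than whenever GC runs.
      delete tileCache[key];
    }

If the implementation keeps a strong reference from the context to every texture, the sloppy version never frees anything, no matter how often GC runs.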

(This isn't theoretical.  WebKit has a long-standing bug where dynamic images progressively leak memory, which is triggered by GMaps in Chrome--last I checked--and mobile WebKits.  Just to be clear, that isn't a WebGL problem, just an analogous one.)

The basic problem with *requiring* deallocation is that it imports all of the classic problems of explicit memory management into JavaScript: careful transfer of ownership, careful handling of exceptions and error code paths, and so on, just as in C and C++.  It's not the normal case that's hard; in complex applications it's these less common cases where you're likely to see subtle, progressive leaks.
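A sketch of the kind of bookkeeping that explicit deallocation forces on every error path (makeMesh is a made-up stand-in for whatever later step might throw):

    function buildMesh(gl, vertexData) {
      var buf = gl.createBuffer();
      try {
        gl.bindBuffer(gl.ARRAY_BUFFER, buf);
        gl.bufferData(gl.ARRAY_BUFFER, vertexData, gl.STATIC_DRAW);
        return makeMesh(buf);       // hypothetical helper; may throw
      } catch (e) {
        gl.deleteBuffer(buf);       // forget this and every failure leaks a buffer
        throw e;
      }
    }

The happy path is easy to get right; it's these exceptional paths, multiplied across a large codebase, where the leaks creep in.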

In short, the point is: it's fine to call this "sloppy WebGL code"--people clearly should try to get this right--but it's still ultimately a bug in the WebGL implementation.  An object should not hold a strong reference to another unless it's specified as doing so, and applications should be able to depend on this, as they can with the rest of the Web platform.

My point is, since it's browser-dependent when GC happens, fixing this doesn't fix the problem. Say you have 256MB of VRAM: you allocate 200MB of it, let 100MB of that lose all references, and then try to allocate 150MB more. There is NO GUARANTEE that the GC will free those objects in time for your request, so nothing ensures that 100MB is released and your 150MB allocation succeeds.

Fixing this bug doesn't solve that problem for WebGL developers. 
 
Are you suggesting the spec should change to force that to happen, i.e. that losing the last reference must immediately free its resources? I know of no GC that works that way. Are you suggesting that before every call to glGenXXX, glCreateXXX, glBufferData, glTexImage2D, glCopyTexImage2D, and glRenderbufferStorage the browser should do a GC? Are you suggesting that after every one of those calls we should check for GL_OUT_OF_MEMORY, then do a GC, and then call it again?
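Roughly what that last suggestion would look like, assuming the browser even exposed some forceGC() hook to call (scripts have no such thing today):

    // Assumes a texture is already bound to TEXTURE_2D on "gl".
    function allocTextureStorage(gl, width, height) {
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                    gl.RGBA, gl.UNSIGNED_BYTE, null);
      if (gl.getError() === gl.OUT_OF_MEMORY) {
        forceGC();  // hypothetical; no such API is exposed to scripts
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                      gl.RGBA, gl.UNSIGNED_BYTE, null);
        if (gl.getError() === gl.OUT_OF_MEMORY) {
          throw new Error("still out of memory after a collection");
        }
      }
    }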

Those would get the behavior you seem to want. Just making the WebGL context use weak references fixes NOTHING.

 

--
Glenn Maynard