On Tue, Jun 28, 2011 at 6:30 PM, Gregg Tavares (wrk) <firstname.lastname@example.org> wrote:
> It's hard to imagine a WebGL app that would run all that long if it did anything like this. A 512 MB card would fill up after 500 images or fewer in an image viewer. A video player would run out of memory in probably 5-15 seconds.
The issue isn't merely running out of memory entirely; it's memory waste. Firefox regularly takes 1.7 GB of memory for me; it doesn't crash, but it's still using a lot of memory that I'd sooner be using elsewhere.
It's pretty easy to see WebGL apps that would run for a very long time, progressively leaking memory. For example, applications like Google Maps load tiles as needed. It can take a fair amount of use before enough tiles are loaded to consume a lot of memory, so if they aren't reclaimed the app will still work for most people, especially if most users open the application briefly and then close it--but it'll waste memory, break on systems with little memory, and eventually break for people who keep tabs open for a long time.
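To make the tile-loading pattern concrete, here's a minimal sketch of how such an app can avoid the leak by releasing GPU resources explicitly rather than relying on the implementation to collect them. The `TileCache` class and the `gl` parameter shape are my own illustration, not from the original discussion; the only real API assumed is `WebGLRenderingContext.deleteTexture`.

```javascript
// Hypothetical LRU tile cache for a maps-style WebGL app.
// The point: evicted tiles explicitly free their textures with
// gl.deleteTexture, rather than depending on garbage collection
// to reclaim driver-side memory.
class TileCache {
  constructor(gl, maxTiles) {
    this.gl = gl;
    this.maxTiles = maxTiles;
    // Map preserves insertion order, so the first entry is the
    // least recently used tile.
    this.tiles = new Map();
  }

  get(key) {
    const tex = this.tiles.get(key);
    if (tex !== undefined) {
      // Re-insert to move this tile to the most-recently-used slot.
      this.tiles.delete(key);
      this.tiles.set(key, tex);
    }
    return tex;
  }

  put(key, texture) {
    this.tiles.set(key, texture);
    // Evict least-recently-used tiles and release their GPU memory.
    while (this.tiles.size > this.maxTiles) {
      const [oldKey, oldTex] = this.tiles.entries().next().value;
      this.tiles.delete(oldKey);
      this.gl.deleteTexture(oldTex); // explicit release, not GC-dependent
    }
  }
}
```

Of course, the argument below is that even without this kind of discipline, the implementation still shouldn't retain the textures forever once the app drops them.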
(This isn't theoretical. WebKit has a long-standing bug where dynamic images progressively leak memory, which is triggered by GMaps in Chrome--last I checked--and in mobile WebKits. Just to be clear, that isn't a WebGL problem, just an analogous one.)
In short, the point is: it's fine to call this "sloppy WebGL code"--people clearly should try to get this right--but it's still ultimately a bug in the WebGL implementation. An object should not hold a strong reference to another unless it's specified as doing so, and applications should be able to depend on this, as they can with the rest of the Web platform.