On Jun 28, 2011, at 5:16 PM, Gregg Tavares (wrk) wrote:
On Tue, Jun 28, 2011 at 5:08 PM, Glenn Maynard <firstname.lastname@example.org>
On Tue, Jun 28, 2011 at 7:42 PM, Gregg Tavares (wrk) <email@example.com>
My point is, since it's browser-dependent when GC happens, fixing this doesn't fix the problem. Say you have 256MB of VRAM: you allocate 200MB of it, let 100MB of that lose all references, and then try to allocate 150MB more. There is NO GUARANTEE that your GC is going to free those objects in time for your request, and therefore free up that 100MB, so your 150MB allocation will succeed.
If you want a solution, fine. My point was that the thing you were suggesting (weak references on WebGL resources from the WebGLContext) is not a solution. If you want a solution, then argue for something that would actually solve the problem (like the 3 suggestions I gave).
How about your idea that after every call to glGenXXX, glCreateXXX, glBufferData, glTexImage2D, glCopyTexImage2D, and glRenderbufferStorage, the browser would check for GL_OUT_OF_MEMORY, and if it's set, run a GC and then retry the call?
It may be more than the spec requires, but it does address the issue of the sloppy code that's leaked too much to keep going.
Are there any drawbacks to this behavior that I don't see?