
Re: [Public WebGL] Lifetime of WebGL objects in Firefox and Webkit

On Jun 28, 2011, at 5:16 PM, Gregg Tavares (wrk) wrote:

On Tue, Jun 28, 2011 at 5:08 PM, Glenn Maynard <glenn@zewt.org> wrote:
On Tue, Jun 28, 2011 at 7:42 PM, Gregg Tavares (wrk) <gman@google.com> wrote:
My point is that since GC timing is browser dependent, fixing this doesn't fix the problem. Say you have 256 MB of VRAM. You allocate 200 MB of it, let 100 MB of that lose all references, and then try to allocate 150 MB more. There is NO GUARANTEE that the GC will free those objects in time for your request, and therefore free up that 100 MB so your 150 MB allocation can succeed.

The same can be argued of *all* allocations in JavaScript.  By your reasoning, JavaScript shouldn't have GC at all and we should all go back to explicit memory management.

JavaScript GC is tied to JavaScript, not to OpenGL. That's my point. When JavaScript runs out of memory, it runs the GC. When OpenGL runs out of memory... nothing happens.

If you want a solution, fine. My point was that the thing you were suggesting (weak references to WebGL resources from the WebGLContext) is not a solution. If you want a solution, then argue for something that would actually solve the problem (like the three suggestions I gave).

Glenn Maynard


How about your idea that after every call to glGenXXX, glCreateXXX, glBufferData, glTexImage2D, glCopyTexImage2D, and glRenderbufferStorage, the browser would check for GL_OUT_OF_MEMORY and, if so, run a GC and then try the call again?

It may be more than the spec requires, but it does address the case of sloppy code that has leaked too much to keep going.
Are there any drawbacks to this behavior that I don't see?
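The check-and-retry idea could be sketched roughly as below. This is a hypothetical illustration of browser-internal behavior, not something page script can do: `collectGarbage` stands in for the engine's GC, which web content cannot invoke directly, and `allocWithRetry` is an invented name for the wrapper around each allocating call.

```javascript
// Hedged sketch of the proposed check-and-retry: after an allocating
// GL call, check for GL_OUT_OF_MEMORY, trigger a collection, and retry
// the call once. `collectGarbage` is a stand-in for the engine-internal
// GC, which page script cannot call directly.
function allocWithRetry(gl, collectGarbage, allocate) {
  allocate();
  if (gl.getError() === gl.OUT_OF_MEMORY) {
    collectGarbage();   // reclaim WebGL objects with no remaining references
    allocate();         // retry the allocation once
    return gl.getError() !== gl.OUT_OF_MEMORY;
  }
  return true;
}

// Example use (hypothetical), wrapping a texture allocation:
// allocWithRetry(gl, engineGC, () => gl.texImage2D(/* ... */));
```

A second failure after the GC pass would then surface as a genuine out-of-memory error, which matches the intent above: only sloppy-but-recoverable code gets rescued.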