
Re: [Public WebGL] WebGL API for available memory?



Assuming that your audience is technical enough to care about and
understand these matters, I'd start with a reasonable default for your
target audience, and then:
  - offer a configuration setting to use different sizes at the user's
"own risk";
  - always check for OUT_OF_MEMORY errors (after
bufferData/texImage2D) and downsize accordingly (sketch below).
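
Something like the following, in untested JavaScript -- the function
name and the fallback strategy are just mine, and note that some
drivers defer the actual allocation, so the error isn't always reported
right at texImage2D time:

    // Try to allocate a size x size RGBA texture; on failure, retry at
    // progressively smaller sizes. Returns the size that was accepted
    // (the texture stays bound for the caller), or 0 if even minSize failed.
    function allocateTextureWithFallback(gl, startSize, minSize) {
      var tex = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, tex);
      for (var size = startSize; size >= minSize; size >>= 1) {
        gl.getError();  // clear any stale error flag first
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, size, size, 0,
                      gl.RGBA, gl.UNSIGNED_BYTE, null);
        if (gl.getError() === gl.NO_ERROR) {
          return size;  // this size was accepted
        }
        // OUT_OF_MEMORY (or another error): halve the size and retry.
      }
      gl.deleteTexture(tex);
      return 0;  // even minSize failed
    }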

As others have pointed out, even native applications have a very hard
time measuring available general memory. GL memory is even harder and
not always possible to measure, and even when things can be measured at
all, the result isn't necessarily what you wanted to know (e.g. a system
could have many GB of virtual memory, but actually using it could render
it unusably slow if that virtual memory is in reality a slow swap file
--- a very common setup on PCs).

Benoit

On 13-03-06 04:33 PM, Evan Wallace wrote:
> I am interested in building complex WebGL applications that operate on
> large datasets. One of the main barriers for me is that there is no
> way to tell how much memory is available on the user's machine. If a
> WebGL app uses too much memory then it starts thrashing the GPU, which
> causes lots of lag for the entire OS and is a very bad user
> experience. I've been dealing with this so far by trying to use as
> little memory as possible and swapping memory out to the CPU as the
> computation progresses, but it's a shame not to run faster on hardware
> with more memory.
>
> My first attempt to get this information was to look at the RENDERER
> string and then compile a map of graphics cards to memory sizes, but
> from what I understand this information has been removed to prevent
> system fingerprinting and fragile string-based version sniffing (see
> https://www.khronos.org/webgl/public-mailing-list/archives/1011/threads.html#00205).
>
> My second attempt was to slowly allocate more and more memory until a
> slowdown is detected, but this is undesirable for several reasons: it
> takes a lot of time to perform, which hurts startup time; it's a
> fragile measurement, since lots of other things can also cause similar
> slowdowns (another app opening, for example); and once the GPU memory
> limit has been exceeded, the lag due to thrashing can be pretty bad
> (I've observed system-wide graphical pauses lasting around a second)
> and/or cause other stability problems.
>
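For concreteness, such a probe ends up looking roughly like the sketch
below -- every constant in it is a guess, which is part of why it's so
fragile:

    // Keep allocating ~16 MB RGBA textures and forcing them into use
    // until allocation fails or a trivial operation gets suspiciously
    // slow. The 100 ms threshold and the texture size are arbitrary.
    function probeGpuMemoryMB(gl, maxMB) {
      var textures = [];
      var fbo = gl.createFramebuffer();
      for (var mb = 0; mb < maxMB; mb += 16) {
        var tex = gl.createTexture();
        gl.bindTexture(gl.TEXTURE_2D, tex);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 2048, 2048, 0,
                      gl.RGBA, gl.UNSIGNED_BYTE, null);  // ~16 MB of RGBA8
        gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
        gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                                gl.TEXTURE_2D, tex, 0);
        var t0 = Date.now();
        gl.clear(gl.COLOR_BUFFER_BIT);  // force the texture to actually exist
        gl.finish();                    // wait for the GPU to catch up
        textures.push(tex);
        if (gl.getError() !== gl.NO_ERROR || Date.now() - t0 > 100) {
          break;  // out of memory, or things already got "slow" -- stop
        }
      }
      gl.bindFramebuffer(gl.FRAMEBUFFER, null);
      textures.forEach(function (t) { gl.deleteTexture(t); });
      gl.deleteFramebuffer(fbo);
      return textures.length * 16;
    }
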
> I'm wondering if it would be possible to develop an API to determine
> the amount of available memory on the GPU, probably as a WebGL
> extension. Since it would report the relative amount currently left
> instead of the absolute total, it would be both much more useful to
> WebGL apps and far less useful for fingerprinting. Thoughts?
>
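Purely as a sketch of what the usage could look like -- the extension
name and the query function here are invented, nothing like this exists
today:

    // Hypothetical extension: ask how much GPU memory is currently left
    // and size the working set to fit, with a conservative fallback.
    function pickWorkingSetMB(gl) {
      var ext = gl.getExtension('WEBGL_memory_info');      // invented name
      if (!ext) return 256;                                // conservative default
      var availableMB = ext.getCurrentAvailableMemoryMB(); // invented call
      return Math.max(256, Math.floor(availableMB / 2));   // leave headroom
    }
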
> Evan Wallace
>

