
Re: [Public WebGL] WebGL API for available memory?



I think that the system-wide slowdown is due to paging by the GPU driver, which moves pages from VRAM to RAM and then to swap. For some uses that's acceptable (an application that has stopped redrawing because it is in the background, or one that doesn't redraw often), but most uses of WebGL will not work once the driver starts to page.

On desktops, what you'd actually like to know is how much VRAM you can consume before DoSing the system. On ARM systems with unified RAM (and no swapfile) you're sharing the RAM with JS and everything else, so the amount available to you can change without you allocating any WebGL objects. The exact amount of VRAM consumed is not known either, because you can't query how much memory FBOs, shaders, etc. actually use.

So if such an API is introduced, the following things should be addressed (a rough sketch follows the list):

- How to query the available VRAM
- How to query how much memory the objects you allocate consume
- How to query how much VRAM you have used in total
- How to notify your application of changes in available VRAM
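
For illustration, something along these lines could cover those four points. This is a hypothetical sketch: WEBGL_memory_info and every call on the ext object below are made up for this example, not an existing extension:

    // Hypothetical sketch; WEBGL_memory_info and the ext.* calls do not exist today.
    var gl = document.createElement('canvas').getContext('webgl');
    var ext = gl.getExtension('WEBGL_memory_info'); // hypothetical extension name

    if (ext) {
      // 1. How much VRAM is currently available to this context, in bytes.
      var available = ext.getAvailableMemory();

      // 2. How much memory a specific object consumes.
      var tex = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, tex);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1024, 1024, 0,
                    gl.RGBA, gl.UNSIGNED_BYTE, null);
      var textureBytes = ext.getObjectMemoryUsage(tex);

      // 3. How much memory this context has consumed in total.
      var totalUsed = ext.getTotalMemoryUsage();

      // 4. Notification when the amount available to us changes,
      //    e.g. because another application allocated memory.
      ext.onmemorypressure = function () {
        // Drop caches, shrink render targets, reduce texture resolution, etc.
      };
    }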

The reason these need to be addressed is that if you can't query them, the following things will happen:
- DoSing people's machines
- Triggering context loss, where the attempt to restore resources on context restore leads to another context loss, ping-ponging forever (sketched below)
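
For reference, this is roughly what context-loss handling looks like today, and where the ping-pong comes from; restoreResources below stands in for whatever the application does to recreate its objects:

    var canvas = document.querySelector('canvas');
    var gl = canvas.getContext('webgl');

    canvas.addEventListener('webglcontextlost', function (event) {
      // Must call preventDefault(), otherwise the context is never restored.
      event.preventDefault();
    }, false);

    canvas.addEventListener('webglcontextrestored', function () {
      // With no way to ask how much memory is actually available, the only
      // option is to re-allocate everything. If that allocation caused the
      // loss in the first place, it immediately triggers the next one.
      restoreResources(gl); // placeholder for the application's own restore path
    }, false);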


On Wed, Mar 6, 2013 at 10:33 PM, Evan Wallace <evan.exe@gmail.com> wrote:
I am interested in building complex WebGL applications that operate on large datasets. One of the main barriers for me is that there is no way to tell how much memory is available on the user's machine. If a WebGL app uses too much memory then it starts thrashing the GPU, which causes lots of lag for the entire OS and is a very bad user experience. I've currently been dealing with it by trying to use as little memory as possible and swapping memory out to the CPU as the computation progresses, but it's a shame not to run faster on hardware with more memory.
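
The swapping itself is simple but costly: the full dataset stays in CPU-side typed arrays and only a working set is resident on the GPU at any time. A rough sketch of the idea (uploadChunk and the chunk layout are just illustrative, and maxResident is exactly the number I have no good way to choose):

    // Keep the full dataset in CPU memory and upload chunks on demand,
    // evicting the oldest resident texture to stay under a guessed budget.
    function uploadChunk(gl, resident, chunk, maxResident) {
      if (resident.length >= maxResident) {
        gl.deleteTexture(resident.shift()); // evict the oldest chunk
      }
      var tex = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, tex);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, chunk.width, chunk.height, 0,
                    gl.RGBA, gl.UNSIGNED_BYTE, chunk.data); // chunk.data: Uint8Array
      resident.push(tex);
      return tex;
    }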

My first attempt to get this information was to look at the RENDERER string and then compile a map of graphics cards to memory sizes, but from what I understand this information has been removed to prevent system fingerprinting and fragile string-based version sniffing (see https://www.khronos.org/webgl/public-mailing-list/archives/1011/threads.html#00205).
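
Roughly, the query involved looks like this (a sketch, not my exact code). The WEBGL_debug_renderer_info extension does expose unmasked strings in some browsers, but it isn't available to ordinary pages everywhere, and the card-to-memory table is the fragile part anyway:

    var gl = document.createElement('canvas').getContext('webgl');

    // The plain RENDERER string is deliberately generic, e.g. "WebKit WebGL".
    var renderer = gl.getParameter(gl.RENDERER);

    // The unmasked string sits behind an extension that may not be exposed
    // to ordinary pages in every browser.
    var dbg = gl.getExtension('WEBGL_debug_renderer_info');
    if (dbg) {
      var unmasked = gl.getParameter(dbg.UNMASKED_RENDERER_WEBGL);
      // ...then look 'unmasked' up in a hand-maintained table of
      // card name -> memory size, which breaks whenever new cards ship.
    }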

My second attempt was to slowly allocate more and more memory until a slowdown is detected, but this is undesirable for several reasons. It takes a lot of time to perform, which hurts startup time; it's a fragile measurement, since lots of other things can also cause similar slowdowns (another app opening, for example); and once the GPU memory limit has been exceeded, the lag due to thrashing can be pretty bad (I've observed system-wide graphical pauses lasting around a second) and/or cause other stability problems.
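
For concreteness, the probe looks roughly like this; the texture size, iteration cap, and slowdown threshold are arbitrary, and drawScene stands in for a draw that actually samples the probe textures (otherwise the driver may never commit the memory):

    // Allocate 16 MB (2048 x 2048 RGBA) textures until a timed draw slows
    // down, then treat that as the point where the driver started paging.
    function probeMemory(gl, drawScene) {
      var textures = [];
      for (var i = 0; i < 256; i++) {            // hard cap on the probe
        var tex = gl.createTexture();
        gl.bindTexture(gl.TEXTURE_2D, tex);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 2048, 2048, 0,
                      gl.RGBA, gl.UNSIGNED_BYTE, null);
        textures.push(tex);

        var start = performance.now();
        drawScene(textures);                     // must touch the new texture
        gl.finish();                             // wait for the GPU to finish
        if (performance.now() - start > 100) {   // arbitrary threshold in ms
          break;
        }
      }
      textures.forEach(function (t) { gl.deleteTexture(t); });
      return textures.length * 16 * 1024 * 1024; // rough available-memory estimate
    }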

I'm wondering if it would be possible to develop an API to determine the amount of available memory on the GPU, probably as a WebGL extension. Since it provides the relative amount currently left instead of the absolute total amount, it would be both much more useful to WebGL apps and far less useful for fingerprinting. Thoughts?

Evan Wallace