
Re: [Public WebGL] WebGL API for available memory?

Thanks for all of the great feedback. The issue is even more nuanced than I thought. Heuristic formulas are interesting and are something I hadn't considered. It looks like there are two different types of measurements: relative vs. absolute (free VRAM remaining vs. total memory on the card). Providing relative measurements, where they are available, could be more helpful than absolute ones since the information is more actionable, although of course any information would be great.
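To make the relative-vs-absolute distinction concrete, here is a sketch of what consuming a relative measurement might look like. Everything here is hypothetical: no WEBGL_meminfo extension exists, and the name and field are merely modeled on the native GL_NVX_gpu_memory_info query linked later in this thread.

```javascript
// Purely hypothetical: a relative (free-VRAM) query behind an imagined
// WEBGL_meminfo-style extension. Returns free video memory in kilobytes,
// or null when no such extension is available, so callers can fall back
// to heuristics or conservative defaults.
function queryFreeVideoMemoryKB(gl) {
  const ext = gl.getExtension('WEBGL_meminfo'); // hypothetical name
  if (!ext) {
    return null; // unsupported: fall back to a heuristic lower bound
  }
  // Field name modeled on GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX.
  return gl.getParameter(ext.CURRENT_AVAILABLE_VIDEO_MEMORY_WEBGL);
}
```

An app would check the return value at startup and size its texture caches accordingly, treating null as "assume little and probe cautiously."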

What were the concerns that were raised against exposing this information? Did those concerns only address absolute numbers, or did they address relative numbers too? Relative numbers seem to mitigate the system-fingerprinting concerns that have been raised previously.

Kenneth, I would definitely be willing to help with testing and/or feedback for my use case. The application I'm currently developing isn't publicly accessible yet, however.


On Wed, Mar 6, 2013 at 6:17 PM, Benoit Jacob <bjacob@mozilla.com> wrote:

On 13-03-06 07:59 PM, Kenneth Russell wrote:
> Providing the currently available video memory is not possible in the
> general case. For example, GL_ATI_meminfo has three different classes
> of memory, and within each class, both a main pool and something
> called "auxiliary memory" which is not precisely defined. On Mac OS,
> the AMD OpenGL driver doesn't currently provide the "Current Free
> Video Memory" variable as the Intel and NVIDIA drivers do -- though
> this is just a bug which is going to be fixed. In addition, unified
> memory architectures obsolete the notion of dedicated video memory.
> The last time this topic was raised within various working groups in
> the Khronos organization, there was strong opposition to exposing this
> information.
> This having been said, I would like to at least provide via a WebGL
> extension the amount of dedicated video memory on the system, if using
> a dedicated GPU, or, if using a unified memory architecture, the
> amount of physical memory on the system. This could at least provide a
> vague idea to applications how much video memory they can reasonably
> allocate, even if it doesn't take into account other running
> applications on the system. Before making any such extension final, it
> would be necessary to test it with real-world applications. Evan,
> would you be willing to help test such an extension and see how well
> it works for your use case?
I think I agree with Ben that having just a very rough metric would be
useful already.

I'm not sure it will be possible to give a firm, precise measurement, as
the notion of "how much memory there is" varies too much from one device
to another:
 - some devices have dedicated texture memory, while others have unified
memory;
 - beyond physical memory there is virtual memory, and some devices have
virtualized texture memory while others don't;
 - there are many variations in what the above concepts mean on
different devices.

Since we can't give any precisely defined measurement, we could at least
give a very rough lower bound.

How we compute it would depend on the device type. For mobile devices,
we could return one quarter of the amount of unified memory. On a
low-end phone with 256 MB of unified memory, we would then return 64 MB,
which matches where we start observing OOMs in practice. That heuristic
may need to be tweaked for higher-end devices; I'm just throwing out an
idea here.

For PCs we could default to min(1/4 × physical RAM, 1 GB) and possibly
override that if we can determine, via some GL extension, that more
memory is available.
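Expressed as code, that heuristic might look like the following. This is a rough sketch only: the function name is made up, and the 1/4 factor and 1 GB cap are the trial values suggested above, not tested constants.

```javascript
// Hypothetical heuristic from this thread: a very rough lower bound on
// usable GPU memory, given the device class and the amount of physical
// (or unified) memory in bytes.
function roughGpuMemoryLowerBound(isMobile, physicalMemoryBytes) {
  const GiB = 1024 * 1024 * 1024;
  if (isMobile) {
    // Mobile: 1/4 of unified memory, e.g. a 256 MB phone yields 64 MB,
    // roughly where OOMs start being observed in practice.
    return physicalMemoryBytes / 4;
  }
  // PC default: min(1/4 * physical RAM, 1 GiB). A GL extension such as
  // GL_NVX_gpu_memory_info could later raise this when it is supported.
  return Math.min(physicalMemoryBytes / 4, GiB);
}
```

The point of capping at 1 GiB on PCs is to stay conservative on machines with lots of RAM but a modest GPU; an extension-based override would then lift the cap only when real data is available.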


> -Ken
> On Wed, Mar 6, 2013 at 2:39 PM, Florian Bösch <pyalot@gmail.com> wrote:
>> The two extensions by NVIDIA/ATI seem to cover some common functionality,
>> and there are also a number of ways to query values like that through
>> DirectX. These cover the rather tricky topic of telling how much is free
>> (not just how much exists in total). We would in principle be free to
>> define a WEBGL_meminfo extension that collates this information behind a
>> common interface. It is not possible on mobile devices (and not very
>> useful on devices with shared memory), but its biggest use would not be
>> on mobile devices anyway.
>> On Wed, Mar 6, 2013 at 11:22 PM, Ben Vanik <benvanik@google.com> wrote:
>>> This is probably the most asked-for feature I've seen with WebGL. I still
>>> really wish it would be 'solved' - even if it's not perfect, *any*
>>> information (you have <512mb or >512mb of vram) would be immediately helpful
>>> to developers *and* users (who are getting crashed tabs, sluggish machines,
>>> or worse).
>>> On Wed, Mar 6, 2013 at 2:08 PM, Evan Wallace <evan.exe@gmail.com> wrote:
>>>> You are correct that it's not part of the OpenGL spec, however there are
>>>> often ways to query this information. Look at these extensions, for example:
>>>> http://developer.download.nvidia.com/opengl/specs/GL_NVX_gpu_memory_info.txt
>>>> http://www.opengl.org/registry/specs/ATI/meminfo.txt
>>>> The number reported also doesn't need to be exact since it's only a hint.
>>>> On Wed, Mar 6, 2013 at 1:52 PM, Patrick Baggett
>>>> <baggett.patrick@gmail.com> wrote:
>>>>> There is no such API to do this from OpenGL / OpenGL ES itself, so it is
>>>>> not possible.
>>>>> Patrick
>>>>> On Wed, Mar 6, 2013 at 3:33 PM, Evan Wallace <evan.exe@gmail.com> wrote:
>>>>>> I am interested in building complex WebGL applications that operate on
>>>>>> large datasets. One of the main barriers for me is that there is no way to
>>>>>> tell how much memory is available on the user's machine. If a WebGL app uses
>>>>>> too much memory then it starts thrashing the GPU, which causes lots of lag
>>>>>> for the entire OS and is a very bad user experience. I've currently been
>>>>>> dealing with it by trying to use as little memory as possible and swapping
>>>>>> out memory with the CPU as the computation progresses but it's a shame not
>>>>>> to run faster on hardware with more memory.
>>>>>> My first attempt to get this information was to look at the RENDERER
>>>>>> string and then compile a map of graphics cards to memory sizes, but from
>>>>>> what I understand this information has been removed to prevent system
>>>>>> fingerprinting and fragile string-based version sniffing (see
>>>>>> https://www.khronos.org/webgl/public-mailing-list/archives/1011/threads.html#00205).
>>>>>> My second attempt was to slowly allocate more and more memory until a
>>>>>> slowdown is detected, but this is undesirable for several reasons. It takes
>>>>>> a lot of time to perform which hurts startup time, it's a fragile
>>>>>> measurement since lots of other things can also cause similar slowdowns
>>>>>> (another app opening, for example), and once the GPU memory limit has been
>>>>>> exceeded the lag due to thrashing can be pretty bad (I've observed
>>>>>> system-wide graphical pauses lasting around a second) and/or cause other
>>>>>> stability problems.
>>>>>> I'm wondering if it would be possible to develop an API to determine
>>>>>> the amount of available memory on the GPU, probably as a WebGL extension.
>>>>>> Since it provides the relative amount currently left instead of the absolute
>>>>>> total amount, it would be both much more useful to WebGL apps and far less
>>>>>> useful for fingerprinting. Thoughts?
>>>>>> Evan Wallace
> -----------------------------------------------------------
> You are currently subscribed to public_webgl@khronos.org.
> To unsubscribe, send an email to majordomo@khronos.org with
> the following command in the body of your email:
> unsubscribe public_webgl
> -----------------------------------------------------------