
Re: [Public WebGL] option for suggesting "low power" mode at context creation



You cannot come up with anything like a reliable heuristic. Perhaps the application uses a lot of GPU power in the first 10 seconds and then drops off sharply. Or perhaps it starts with very low requirements for the first 10 seconds and then ramps up. The point is, you can't know. You can't know unless you wrote the program, and you didn't write the program. Therefore any heuristic is counterproductive: you're bound to shoot the application developer in the foot, and shooting developers in the foot is bad.

On Fri, Sep 28, 2012 at 9:48 PM, Kenneth Russell <kbr@google.com> wrote:
On Fri, Sep 28, 2012 at 4:37 AM, Florian Bösch <pyalot@gmail.com> wrote:
> On Fri, Sep 28, 2012 at 4:39 AM, Mark Callow <callow.mark@artspark.co.jp>
> wrote:
>>
>> On 28/09/2012 06:28, Florian Bösch wrote:
>>
>> On MacBooks there's a graphics control app that overrides the OS X GPU
>> selection. I don't think a global control should be part of the browser. A
>> web developer can easily offer the user a choice (like SD vs. HD) and set
>> the hint accordingly. If anything, global control over power usage belongs
>> in the operating system's settings, right next to disabling Wi-Fi,
>> Bluetooth, airplane mode, etc.
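
For concreteness, a minimal sketch of the kind of author-supplied hint being
discussed, assuming a powerPreference-style context attribute with
'low-power' and 'high-performance' values; the attribute name and values are
illustrative, not something the WebGL spec defined at the time:

    // Let the user pick a quality level (e.g. SD vs. HD) and pass the
    // corresponding power hint when creating the context. The powerPreference
    // attribute here is illustrative of the hint under discussion.
    function createContext(canvas, userWantsHD) {
      var attributes = {
        antialias: userWantsHD,  // skip MSAA on the low-power path
        powerPreference: userWantsHD ? 'high-performance' : 'low-power'
      };
      var gl = canvas.getContext('webgl', attributes) ||
               canvas.getContext('experimental-webgl', attributes);
      if (!gl) {
        throw new Error('WebGL is not available');
      }
      return gl;
    }

    // A page that only rasterizes a trivial model can always ask for low power:
    var gl = createContext(document.getElementById('canvas'), false);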
>>
>> There is no standard app for that. Only geeks will have the 3rd party app
>> installed.
>>
>> That said, I agree that global control over power usage belongs in the
>> OS. Given the way the OS X Energy Saver preferences are set up, "automatic
>> switching" is the choice that indicates you want to prolong battery life.
>> This thread is happening because automatic switching apparently isn't
>> smart enough.
>>
>> Why isn't something like the following algorithm sufficient when the user
>> has selected "prolong battery life"?
>>
>> 1. If the app requests anti-aliasing and anti-aliasing consumes more
>>    power, ignore the request.
>> 2. Start running on the integrated GPU.
>> 3. If the app is calling requestAnimationFrame repeatedly and fails to
>>    achieve 60fps, switch to the discrete GPU.
>
> Well, GPU scaling as a function of user preference is not yet considered by
> any OS, most prominently not by the OSes that introduced GPU switching in
> the first place. So that is part of why scale selection is being discussed
> here. The other part is that the author of a particular use of WebGL has
> knowledge that cannot be known or deduced from the outside. For instance, I
> know that my deferred irradiance demo consumes tons of GPU resources, and
> that has nothing to do with anti-aliasing or running at 60fps. But if, say,
> Wikipedia's entry on the icosahedron wants to rasterize a simple
> icosahedron, its authors *know* that their use is very minimal and that any
> GPU, no matter how slow, will be able to run it at 60fps. And then take,
> for instance, webglstats or Modernizr: we know we don't want people's
> machines to flip-flop between GPUs, because we're not doing anything of
> performance interest. There's no way you can deduce this from the behavior
> of the application. This goes straight back to the general problem of
> predicting what a Turing-complete program will do, which is undecidable.

I think the problem of determining heuristics for automatic GPU
switching is a little less difficult than you make it out to be.
Mark's heuristics seem like they would work assuming that all of the
tracking could be put into the web browser to understand that calls
against a given WebGL context were being made on behalf of a given
chain of requestAnimationFrame callbacks. To avoid dithering between
GPUs, the switch between the low-power and high-power GPU per context
could be made unidirectional. Dynamic compilers for languages like
JavaScript do similar run-time measurement to decide where to focus
optimizations.
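
For illustration only, here is a rough sketch of that kind of frame-rate
measurement with a one-way switch, written as page-level JavaScript even
though the tracking described above would live inside the browser, per
context and per requestAnimationFrame chain; switchToHighPowerGpu() is a
hypothetical stand-in for the browser-internal action:

    // Sketch of the heuristic: sample rAF frame intervals and escalate to
    // the high-power GPU at most once if 60fps is consistently missed.
    var SLOW_FRAME_MS = (1000 / 60) + 4; // noticeably slower than 60fps
    var WINDOW_SIZE = 120;               // roughly two seconds of frames
    var slowFrames = 0;
    var sampledFrames = 0;
    var switched = false;                // unidirectional: never switch back
    var lastTime = null;

    function switchToHighPowerGpu() {
      // Hypothetical stand-in; no such API is exposed to web content.
    }

    function monitorFrame(now) {
      if (lastTime !== null) {
        sampledFrames++;
        if (now - lastTime > SLOW_FRAME_MS) {
          slowFrames++;
        }
        if (!switched && sampledFrames >= WINDOW_SIZE) {
          // If most frames in the window missed 60fps, escalate exactly once.
          if (slowFrames > sampledFrames / 2) {
            switched = true;
            switchToHighPowerGpu();
          }
          slowFrames = 0;
          sampledFrames = 0;
        }
      }
      lastTime = now;
      requestAnimationFrame(monitorFrame);
    }
    requestAnimationFrame(monitorFrame);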

However, I'd be hesitant to implement a heuristic like this for two
reasons. First, Mac OS is the only OS I know of that does the "deep
magic" to automatically migrate OpenGL resources between GPUs -- at
least, this is my understanding of how automatic graphics switching
works there. On other OSes such as Windows I don't know the details; I
think the D3D device can be created against a particular graphics
adapter, and I don't think resources can migrate between GPUs. For
portability, then, an up-front decision when creating the context is
best.

Second, if switching between GPUs really does work and one of the GPUs
doesn't pass the WebGL conformance suite, switching silently behind
the scenes could cause certain OpenGL operations to start failing in
the middle of the application's run. This would cause bugs that are
impossible to diagnose.

-Ken