On Fri, Sep 28, 2012 at 4:37 AM, Florian Bösch <firstname.lastname@example.org> wrote:
On Fri, Sep 28, 2012 at 4:39 AM, Mark Callow <email@example.com> wrote:
On 28/09/2012 06:28, Florian Bösch wrote:
On MacBooks there's a gfx control app that overrides the OS X GPU selection. I don't think a global control should be part of the browser. A web developer can easily offer the user a choice (like SD vs. HD) and set the hint accordingly. If anything, global control over power usage belongs in the operating system's settings, right next to disabling Wi-Fi, Bluetooth, airplane mode, etc.
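A minimal sketch of the kind of wiring described there, assuming the hint ends up exposed as a WebGL context-creation attribute; "powerPreference" and its values below are illustrative names, not something this thread has settled on:

    // The page offers the user an SD/HD style choice and maps it onto the hint.
    function createContext(canvas, wantHD) {
      return canvas.getContext('webgl', {
        antialias: wantHD,                            // skip AA in the power-saving mode
        powerPreference: wantHD ? 'high-performance' : 'low-power'
      });
    }

    var canvas = document.getElementById('scene');    // hypothetical element ids
    var hdToggle = document.getElementById('hd-toggle');
    var gl = createContext(canvas, hdToggle.checked);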
There is no standard app for that. Only geeks will have the 3rd-party app installed.

That said, I agree that global control over power usage belongs in the OS. Given the way the OS X Energy Saver preferences are set up, "automatic switching" is the choice that indicates you want to prolong battery life. This thread is happening because automatic switching apparently isn't smart enough.
Why isn't something like the following algorithm sufficient when the user has selected to prolong battery life? (A rough sketch in code follows the list.)

1. If the app requests anti-aliasing and anti-aliasing consumes more power, ignore the request.
2. Start running on the integrated GPU.
3. If the app is calling requestAnimationFrame repeatedly and fails to achieve 60fps, switch to the discrete GPU.
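A rough page-side sketch of that algorithm, assuming the hint is only honored at context creation, so "switching" means rebuilding the context with a different hint; a browser implementing this internally would not need any of that, and the attribute name is again illustrative:

    // Start on the low-power hint, watch requestAnimationFrame pacing, and
    // switch once to high performance if 60fps clearly isn't being reached.
    function createViewer(highPerf) {
      // A canvas only ever hands out one WebGL context, so switching hints
      // here means making a fresh canvas and re-uploading resources.
      var canvas = document.createElement('canvas');
      canvas.width = 640;
      canvas.height = 480;
      var gl = canvas.getContext('webgl', {
        antialias: highPerf,              // step 1: no AA while saving power
        powerPreference: highPerf ? 'high-performance' : 'low-power'
      });
      return { canvas: canvas, gl: gl };
    }

    var viewer = createViewer(false);     // step 2: start on the integrated GPU
    document.body.appendChild(viewer.canvas);

    var switched = false;
    var frameTimes = [];

    function frame(now) {
      frameTimes.push(now);
      if (frameTimes.length > 60) frameTimes.shift();
      if (!switched && frameTimes.length === 60) {
        var msPerFrame = (frameTimes[59] - frameTimes[0]) / 59;
        if (msPerFrame > 18) {            // step 3: clearly missing 60fps (~16.7ms)
          var old = viewer;
          viewer = createViewer(true);    // one-way switch, so no GPU flapping
          old.canvas.replaceWith(viewer.canvas);
          switched = true;
          // ...recreate buffers, textures and shaders against viewer.gl here...
        }
      }
      // ...draw the scene with viewer.gl...
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);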
Well, gfx scaling as a function of user preference is not yet considered by any OS, most prominently not by those OSes that introduced it in the first place. So that is part of why scale selection is being discussed here. The other part is that the author of a particular use of WebGL has knowledge that cannot be known or deduced from the outside. For instance, I know that my deferred irradiance demo consumes tons of GPU resources, and that has nothing to do with anti-aliasing or running at 60fps. But if, say, Wikipedia's entry on the icosahedron would like to rasterize a simple icosahedron, they *know* that their use is very minimal and that any GPU, no matter how slow, will be able to run it at 60fps. And then take, for instance, webglstats or Modernizr: we know we don't want people's machines flapping GPUs in the wind, because we're not doing anything of performance interest. There's no way you can deduce this
from the behavior of the application. It comes straight back to trying to infer the resource requirements of a Turing-complete program, which in general is undecidable.
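The two ends of that spectrum, spelled out with the same illustrative attribute as above (the element ids are made up):

    // A page spinning a simple icosahedron: the author knows any GPU can do
    // this at 60fps, so it asks for the low-power GPU outright.
    var icosaCanvas = document.getElementById('icosahedron-figure');
    var icosaGL = icosaCanvas.getContext('webgl', { powerPreference: 'low-power' });

    // A heavy demo (deferred shading, many passes): the author knows it needs
    // all the GPU it can get, so it asks for high performance up front.
    var demoCanvas = document.getElementById('deferred-demo');
    var demoGL = demoCanvas.getContext('webgl', { powerPreference: 'high-performance' });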
I think the problem of determining heuristics for automatic GPU
switching is a little less difficult than you make it out to be.
Mark's heuristics seem like they would work assuming that all of the
tracking could be put into the web browser to understand that calls
against a given WebGL context were being made on behalf of a given
chain of requestAnimationFrame callbacks. To avoid dithering between
GPUs, the switch between the low-power and high-power GPU per context
could be made unidirectional. Dynamic compilers for languages like Java and JavaScript make similar one-way decisions when choosing which code to optimize.
However, I'd be hesitant to implement a heuristic like this for two
reasons. First, Mac OS is the only OS I know of that does the "deep
magic" to automatically migrate OpenGL resources between GPUs -- at
least, this is my understanding of how automatic graphics switching
works there. On other OSes, like Windows, I don't know how it works; I think the D3D device can be created against a particular graphics adapter, and I don't think resources can migrate between GPUs. For best portability, an up-front decision when creating the context is probably the way to go.
Second, if switching between GPUs really does work and one of the GPUs
doesn't pass the WebGL conformance suite, switching silently behind
the scenes could cause certain OpenGL operations to start failing in
the middle of the application's run. This would cause bugs that are
impossible to diagnose.
But to be clear, this is all implementation detail. An implementation that is able to do some very clever heuristics to switch back and forth without the user knowing (other than a possible change in quality) is free to do so, just like another implementation might ignore the flag altogether.