Currently, when Edge enumerates adapters with D3D, it always picks adapter #0 for all content, including WebGL content. By default, adapter #0 is the iGPU on hybrid machines. Users can customize this behavior by either using the custom IHV control panel or checking the “Use software rendering instead of GPU rendering” checkbox. The latter, of course, will use Warp for rendering.
D3D11 allows you to share textures between D3D11 devices running on the same adapter. However, resource sharing does not work between adapters. Developers that need to migrate resources between adapters can do so by manually copying the data to the CPU from the source adapter and uploading it to a resource of the same type on the destination adapter. Not all resources can be easily read back and restored in this manner.
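The readback-and-reupload path described above can be sketched in WebGL terms (the D3D11 equivalent would go through a staging resource, Map, and a re-upload on the destination device). This is only an illustrative sketch; `migratePixels` is a hypothetical helper name, not a real API:

```javascript
// Sketch, assuming two contexts backed by different adapters: read the
// source context's pixels back to the CPU, then upload that CPU copy into
// a texture on the destination context. Name and shape are illustrative.
function migratePixels(srcGL, dstGL, width, height) {
  // 1. Read the currently bound framebuffer back to the CPU (RGBA8).
  const pixels = new Uint8Array(width * height * 4);
  srcGL.readPixels(0, 0, width, height,
                   srcGL.RGBA, srcGL.UNSIGNED_BYTE, pixels);

  // 2. Upload the CPU copy into a new texture on the destination context.
  const tex = dstGL.createTexture();
  dstGL.bindTexture(dstGL.TEXTURE_2D, tex);
  dstGL.texImage2D(dstGL.TEXTURE_2D, 0, dstGL.RGBA, width, height, 0,
                   dstGL.RGBA, dstGL.UNSIGNED_BYTE, pixels);
  return tex;
}
```

As the paragraph notes, this round-trip through the CPU is the general fallback; resources that can't be read back (e.g. multisampled or otherwise opaque ones) can't be migrated this way.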
From: owners-public_webgl@khronos.org [mailto:owners-public_webgl@khronos.org] On Behalf Of Kenneth Russell
Sent: Monday, January 30, 2017 5:45 PM
To: Maksims Mihejevs <email@example.com>
Cc: Ben Adams <firstname.lastname@example.org>; Florian Bösch <email@example.com>; public <firstname.lastname@example.org>
Subject: Re: [Public WebGL] dual GPU setups default to integrated GPU
preferLowPowerToHighPerformance would apply equally to WebGL 2.0 as to WebGL 1.0. It wasn't dropped from the spec.
https://www.khronos.org/registry/webgl/specs/latest/2.0/#2.2 mandates that a few of the context creation attributes must be honored, but that's because multisampled renderbuffers are a mandatory part of the OpenGL ES 3.0 spec.
I'm not a D3D expert so don't know how feasible it is to render to a texture on one D3D device and display on another. To the best of my knowledge Edge doesn't dynamically activate the discrete GPU when WebGL's active and switch back to the integrated GPU when it isn't. It uses the "0th" GPU, whatever that is according to the control panel settings.
On Tue, Jan 24, 2017 at 6:30 AM, Maksims Mihejevs <email@example.com> wrote:
Even my home laptop by default uses integrated GPU for Chrome, regardless of battery/plug. NVIDIA Control Panel has a preset for programs, and I've seen it is set by default to Integrated.
On 24 January 2017 at 14:02, Ben Adams <firstname.lastname@example.org> wrote:
The dGPU should always be used when on power; is this only a decision that affects the choice when on battery?
On 24 January 2017 at 13:00, Maksims Mihejevs <email@example.com> wrote:
We've recognised the same problem: a lot of people with Windows laptops and dual GPUs get the integrated GPU by default.
Although this is not the case for Edge, for example, which gives it an advantage over other browsers on Windows laptops.
On 24 Jan 2017 9:26 a.m., "Florian Bösch" <firstname.lastname@example.org> wrote:
On dual-GPU laptops, such as some brands of Windows and macOS laptops, the OS has a GPU-switching function that switches to the discrete GPU for graphics-intensive applications (such as games, CAD software, etc.)
However, when a browser is running, it is often the case that the integrated GPU is used, regardless of whether a tab is doing something graphics-intensive with WebGL or not (https://twitter.com/
On Windows a user can influence this via a series of complicated steps to designate the preferred GPU, but it is a machine-wide setting, and sometimes it was ignored (I don't know if that's still the case).
Obviously this presents a problem for WebGL developers. We would neither want to drain a user's battery unnecessarily, nor force a user with a discrete GPU to get worse performance should they wish to use a graphics-intensive WebGL application.
In WebGL1 there was a context creation flag called "preferLowPowerToHighPerformance", but I'm not aware of how widely this was implemented, and apparently it's also ignored on macOS (because it defaults to false, yet the discrete GPU is still not used).
WebGL2 has no equivalent context creation flag.
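For reference, the flag in question is passed at context creation time. A minimal sketch of how a page might express the preference follows; `gpuPreferenceAttributes` is a hypothetical helper, and (as noted above) implementations were free to ignore the attribute:

```javascript
// Hypothetical helper: build context-creation attributes expressing a GPU
// preference. Only WebGL 1 defined preferLowPowerToHighPerformance
// (spec default: false); WebGL 2 has no equivalent key, so return {} there.
function gpuPreferenceAttributes(contextType, preferLowPower) {
  if (contextType === 'webgl') {
    return { preferLowPowerToHighPerformance: preferLowPower };
  }
  return {}; // 'webgl2': no equivalent context creation flag
}

// Usage in a page (sketch):
//   const canvas = document.createElement('canvas');
//   const gl = canvas.getContext('webgl',
//       gpuPreferenceAttributes('webgl', false)); // "prefer performance"
```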
- It would seem we lack a sufficient mechanism to express a GPU preference; is this a correct assessment?
- Why was preferLowPowerToHighPerformance dropped from WebGL2?
- Why is preferLowPowerToHighPerformance ignored for WebGL1 on some configurations where it would be most useful?
- Should an additional mechanism be introduced so a user can choose, per tab, which GPU is used?