A few misconceptions about how Windows works have come up in this thread. I’d like to clarify them.
On Windows, applications can enumerate the adapters (GPUs) on the system and render using any of them. Unlike macOS, Windows does not automatically move your content between adapters behind your back. If you want to switch
your drawing to a different adapter, you need to take care of reading back, or re-creating, the resources yourself on the new adapter.
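(Editorial aside: web content faces the analogous problem as context loss, which a GPU switch or reset may trigger in some browsers, and it must likewise re-create its own resources. A minimal sketch of the standard recovery pattern; `canvas` and `initResources` stand in for the app’s own objects:)

```javascript
// Sketch: re-creating WebGL resources after a context loss.
// `canvas` is the app's canvas; `initResources` is the app's own
// routine that rebuilds textures, buffers and programs.
function installContextRecovery(canvas, initResources) {
  canvas.addEventListener("webglcontextlost", (e) => {
    // Prevent the default, which would make the context unrestorable.
    e.preventDefault();
  });
  canvas.addEventListener("webglcontextrestored", () => {
    // All GPU resources are gone at this point; rebuild them.
    initResources();
  });
}
```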
On Windows, there is no such thing as “cross-adapter sharing”. You cannot allocate a texture on one adapter and render to it using another adapter. So you cannot allocate a texture in an Nvidia GPU’s VRAM and render to it using the Intel
GPU, or vice versa. You can only share resources between D3D devices created on the same adapter.
The Desktop Window Manager (DWM) is responsible for compositing content rendered with multiple adapters on the system. It must also abide by the “no cross-adapter sharing” rule. If you draw your content to a swap chain created on adapter
A, and the user moves your application’s window to a monitor connected to adapter B, DWM will copy the output of your application from adapter A to adapter B behind your back and texture from adapter B’s copy.
You can use DirectComposition to create a tree of “visuals”, each with its own texture, and have the DWM compose the tree of visuals for you instead of having a swap chain for the application window. The textures in the visual tree can
come from different adapters. But this doesn’t prevent the copies the DWM has to perform if there is a mismatch between the rendering adapter and the output adapter.
Edge has been using DirectComposition and its successor, Windows.UI.Composition, for multiple releases. I believe Chrome uses DirectComposition to render some content.
On most hybrid laptops, the output ports are directly connected to the integrated GPU (iGPU), and the discrete GPU (dGPU) sits off to the side. If you draw using the dGPU, the output texture/swap chain must be copied, through system memory,
to the iGPU before you see it on your laptop screen. Usually, the dGPU is so much faster than the iGPU that the copy is worth it, or it can happen while you draw the next frame. I am told there are some gaming laptops where this is reversed (the output ports are directly
connected to the dGPU and the iGPU is the one that sits off to the side), but this case is pretty rare.
I worry when people ask that browsers render “just WebGL” using the high-performance adapter and keep “everything else” on the low-performance adapter. This is fine if the inputs and outputs of WebGL are all contained in their own island.
But, as we know, web developers can upload images, SVG content, canvas elements, ImageBitmaps and videos to WebGL textures. In a dual-GPU rendering case, that content has to be transferred between adapters, through system memory, before it can be used by
WebGL. I am not too concerned about images or other static content. I do, however, worry about 4K floating-point HDR 360° video being transferred every frame. I suppose we can keep content on both GPUs and heuristically determine which GPU is used more
often in the video case, or perhaps decode in both places. But, in the meantime, web developers who ask for “high performance” may be in for a surprise on some hardware.
From: firstname.lastname@example.org <email@example.com>
On Behalf Of Kai Ninomiya
Sent: Tuesday, July 3, 2018 11:42 AM
To: Rachid El Guerrab <firstname.lastname@example.org>
Cc: email@example.com; firstname.lastname@example.org; Dean Jackson <email@example.com>; Kenneth Russell <firstname.lastname@example.org>; email@example.com
Subject: Re: [Public WebGL] Use powerPreference to request high or low power GPUs
If a system has only an integrated card, it will always get the integrated card regardless of power preference. Power preference won't prevent the context from being created, AFAIK.
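(Editorial aside: for reference, the request itself is just a context-creation attribute, and as noted above it is a hint — creation still succeeds even when the preference can’t be honored. A minimal sketch:)

```javascript
// Sketch: asking for a particular adapter at context creation.
// The browser treats powerPreference as a hint; on single-GPU
// systems you simply get whatever GPU is there.
function createContext(canvas, wantFast) {
  const attrs = {
    powerPreference: wantFast ? "high-performance" : "low-power",
  };
  return canvas.getContext("webgl", attrs);
}
```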
Thanks for the explanation.
I'm still a bit confused as to the intent here. So please bear with me :-)
When you conceived of this update, was the idea that the trend will be more dual GPUs?
If a system only has the integrated card, does that mean it'll only create contexts that ask for "low-power", no matter what the performance of the GPU is? Or are there more considerations?
Are you just looking to know that a context doesn't need full rendering performance and therefore would be fine if pushed to the integrated GPU? Is this more helpful to the system as a whole and not useful for the specific content?
And what system decides to switch the context to a lower profile? The browser? The OS?
Outside of the tab being hidden, and maybe "low battery" on the host computer, do you know of other cases where the context might be switched from high performance to low?
> So your content has to be designed to run on a wide range of hardware
For exactly these cases...
> Would being able to check the actual value we used when creating be enough?
...could knowing (a) what profile was actually used at init time and (b) what profile is active after a switch potentially be useful, at least as a fall-through, if a site can't match the vendor string to some known pattern?
On Mon, Jul 2, 2018 at 8:12 PM, Jeff Gilbert <firstname.lastname@example.org> wrote:
Cross-adapter sharing is possible on Windows, but only via
DirectComposition, which no one leverages yet, to my knowledge.
On Mon, Jul 2, 2018 at 7:32 PM, Dean Jackson <email@example.com> wrote:
> Hi Rachid,
>> On 3 Jul 2018, at 10:37, Rachid El Guerrab <firstname.lastname@example.org> wrote:
>> 1) Do you have statistics on how many people run WebGL on laptops with dual cards? Just curious why you think it's a small set..
> As far as I'm aware, the MacBook Pro 15" is the only laptop that has dual GPUs and can dynamically swap between the two (and not all configurations of MacBook Pro 15" have dual GPUs). I'm not familiar with Windows, but when we discussed this in the group
> I remember hearing that dual-GPU Windows devices can only use one GPU at a time (i.e. it doesn't matter what the content requests because the browser doesn't have a choice). This might change in future versions of Windows. I don't know if Linux handles this
> configuration at all.
> For the MacBook Pro case, Apple doesn't release sales data by model, so I'm not sure how popular it is in comparison to MacBooks and MacBook Airs.
> But I think it is ok to guess that it is a fairly small set, firstly in comparison to the total number of laptop users, then the total number of desktop OS users, then to the total number of users on mobile and desktop.
>> 2) I get that I can query the vendor string.
>> But the WebGL committee creates this neat API, and vendors spend time implementing it, to give us a useful abstraction of GPU power, in real time, which is awesome.
>> And now you're telling me I should ignore all that work and query the string myself? What's the point then??
> Would being able to check the actual value we used when creating be enough? In Safari, you do actually end up getting what you want most of the time. However, it can change as the user hides the tab or application. You can detect this by listening for a "webglcontextchanged"
> event (although I just noticed this never made it into the specification, so it's non-standard :( )
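(Editorial aside: the detection Dean describes would look roughly like this. As he notes, "webglcontextchanged" is non-standard; on browsers that never fire it, the listener simply never runs, so installing it is harmless:)

```javascript
// Sketch: reacting to a GPU switch in Safari via the non-standard
// "webglcontextchanged" event. The context stays valid, but it may
// now be backed by a different GPU, so the app should re-query
// capabilities and adapt its quality settings in `onSwitch`.
function watchForGPUSwitch(canvas, onSwitch) {
  canvas.addEventListener("webglcontextchanged", () => {
    onSwitch();
  });
}
```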
>> My content can adapt in many ways if I know I've switched to a lower profile, at the beginning or dynamically.
>> But if I don't know, then what's the point? A message to the user who wouldn't know what to do about it?
> Let's consider the case of two MacBook Pros: one with a second GPU, one without. The "low-power" GPU on the first is both the "low-power" and "high-performance" GPU on the second. If you decide that your app *really* needs to run on the best GPU, you'd ask
> for "high-performance". But on that second device, you're not getting a more powerful GPU. So your content has to either:
> - be designed to run on a wide range of hardware
> - query the GPU vendor string and hopefully know what that means for your app
> And this still applies even if there were no way to request a high- or low-power GPU at all, or to older dual-GPU hardware where the high-performance GPU is slower than today's low-power GPUs, or indeed to any other hardware.
> I'm not arguing with you btw - just pointing out that it doesn't really matter whether you get one GPU or another. You have to assume the worst unless you're willing to check the vendor string and know what it means to your app. The powerPreference parameter
> gives the author the ability to indicate that their content is (hopefully) "simple" enough to not need the fastest GPU (e.g. it isn't a full-page game or a cryptocurrency miner).
>> - Rachid
>>> On Jul 2, 2018, at 5:13 PM, Dean Jackson <email@example.com> wrote:
>>>> On 3 Jul 2018, at 01:59, Rachid El Guerrab <firstname.lastname@example.org> wrote:
>>>> I second Gregg Tavares's question about what's reported back.
>>>> How can we tell if we're running with the high-performance option or not?
>>> Why should it matter? A relatively small set of people have dual GPU systems - and most people don't have powerful GPUs. And that's before you consider mobile devices.
>>> Also, in Safari on macOS, you don't necessarily get what you ask for anyway. You might ask for low-power but get high-performance because another app (or page) on the system has fired up that GPU. In other words, you have to write your content to work on
>>> the average GPU.
>>> But if you really have a good reason to know, you can query the GPU vendor string. It would be up to you to decide whether you think that's a high-performance GPU.
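(Editorial aside: querying the vendor/renderer strings is done through the WEBGL_debug_renderer_info extension where the browser exposes it, falling back to the masked VENDOR/RENDERER strings otherwise. A minimal sketch:)

```javascript
// Sketch: reading the GPU vendor/renderer strings from a WebGL
// context. Uses the WEBGL_debug_renderer_info extension when
// available; otherwise falls back to the (often masked) defaults.
function getGPUStrings(gl) {
  const ext = gl.getExtension("WEBGL_debug_renderer_info");
  return {
    vendor: gl.getParameter(ext ? ext.UNMASKED_VENDOR_WEBGL : gl.VENDOR),
    renderer: gl.getParameter(ext ? ext.UNMASKED_RENDERER_WEBGL : gl.RENDERER),
  };
}
```

From there it is up to the app to pattern-match the strings and decide whether that counts as a high-performance GPU for its purposes.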