On Thu, Jan 16, 2014 at 3:17 AM, Jeff Gilbert <firstname.lastname@example.org> wrote:

> Use cases:
> * Actively collecting statistics for WebGL and organizing by GPU. (WebGL stats, etc.)
> * Letting a user submit a 'something's wrong/slow' bug, and having info on what hardware it is.
> These don't require unconditional exposure. This can be gated behind a user-accepted doorhanger/permission dialog.
> Notably, something similar to Valve's hardware survey would work fine with this mechanism.

WebGL stats will not work with a permission dialog. It's embedded in over 500 sites and gets more than 15 million hits/month. It provides the kind of statistic that http://gs.statcounter.com/ does, i.e. a large sample basis. If every time a page loaded the tracker code the user were prompted with a security warning, my contributors would drop WebGL stats. A permission dialog would not preclude collecting statistics entirely, but the sample basis you would get would be far, far smaller (on the order of a couple hundred reports/month, if even that).

> * Use the driver/gpu info to estimate performance.
> This can be accomplished by getting permission from the user to query this info the first time, then stored.
> I would go further, and say this is a much softer requirement than most people believe. Almost every game, from Flash titles up to triple-A titles, has a long history of letting the user manually select a quality (performance) level.
This use case can work with a security warning, but it impedes giving the user an appropriate default from the get-go. Unless the user clicks away the security warning, he is stuck with a default choice that will be wrong until he either makes a better choice or dismisses the warning.

> * Driver X doesn't handle Y properly, so we want to do something different for driver X.
> In an emergency, just work around it unconditionally, if possible. (generally possible)
> File a bug!
> Notably, we've had numerous instances in the past where developers think a driver is buggy, but it's actually spec compliant, and the devs were relying on non-spec assumptions which held steady for only *some* drivers.
> Assuming, however, a real issue, this is ideally the browser's job to handle: We're supposed to be offering a consistent API. This is huge, saving everyone else time and money. The pragmatic issue is that it takes more time to ship workarounds to users via the browser than just pushing a patch fix to production.
> Selfishly, I'd love it if somehow people were forced to bother us if they hit a driver bug, and it would probably even make the web better for having been reported. (though this is clearly not a good motivation to force compliance by this mechanism) I have discovered the presence of a number of bugs only by overhearing grumbling about them. (Please report bugs! Or at least email us about them)
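For concreteness, an application-side workaround of the kind described above usually boils down to matching on the renderer string. A minimal sketch, assuming the string was obtained via the WEBGL_debug_renderer_info extension's UNMASKED_RENDERER_WEBGL query; the GPU name and the bug itself are invented for illustration:

```javascript
// Decide whether to apply an app-side workaround based on the renderer
// string. In a browser, the string would come from:
//   const ext = gl.getExtension('WEBGL_debug_renderer_info');
//   const renderer = gl.getParameter(ext.UNMASKED_RENDERER_WEBGL);
// Both the matched GPU ("ExampleGPU 1000") and the bug are hypothetical.
function needsFloatTextureWorkaround(renderer) {
  // Pretend the "ExampleGPU 1000" mishandles float render targets.
  return /ExampleGPU\s*1000/i.test(renderer);
}

// Fall back to a safer code path when the workaround applies.
function pickTextureFormat(renderer) {
  return needsFloatTextureWorkaround(renderer) ? 'half-float' : 'float';
}
```

Without access to the renderer string, the only remaining option is to apply such workarounds unconditionally for every visitor, as the quoted text suggests.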
This isn't the entire rationale in buggy cases, and regardless I think it's infeasible for browser vendors to handle, due to the large configuration space of possible issues (Brandon mentioned this).

> I think having it gated behind a permission request would solve all major issues.

Important use-cases that this prevents:
- Collecting statistics from all users of your application to identify a particular group of GPUs, which enables you to:
  - React to implicit bugs in an efficient and targeted manner (e.g. 90% of visitors with this GPU didn't stay very long; research whether they hit a shader bug, etc.)
  - React to implicit performance issues in an efficient and targeted manner (e.g. 90% of visitors with this GPU experience bad performance; research why)
  - React to explicit bug reports in an efficient and targeted manner (this user reported a problem; how many users with the same configuration do we have?)
- Comparative performance targeting: if a broad database of performance measurements from non-synthetic benchmarks were available, alongside an information source with the breadth of reach WebGL stats currently has, it would be possible to extrapolate expected performance across all expected platforms. A developer would first collect performance data for his application on the devices he has at hand. He would then consult the benchmark database to find the group of devices closest to the ones he has, so that a performance factor can be computed. This performance factor could then be applied across all devices in the database, and by consulting visitor frequency from yet a third source (WebGL stats, etc.) he could extrapolate that expected performance to visitor incidence. Hiding the information behind a security dialog deprives this kind of analysis of a substantial base of measurements and introduces more statistical uncertainty, to the point where the uncertainty vastly outweighs any benefit (i.e. you get completely random guesses).
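The extrapolation described in the last point can be sketched numerically. All inputs below are invented for illustration: in practice the benchmark scores would come from the hypothetical benchmark database, the measured frame rates from the developer's own devices, and the visitor shares from a frequency source like WebGL stats.

```javascript
// Derive a performance factor: app fps per benchmark point, averaged
// over the devices the developer could measure directly.
function performanceFactor(measuredFps, benchmarkScores) {
  const ratios = Object.keys(measuredFps)
    .map(gpu => measuredFps[gpu] / benchmarkScores[gpu]);
  return ratios.reduce((a, b) => a + b, 0) / ratios.length;
}

// Extrapolate expected fps for every GPU in the database, then weight
// by visitor frequency to estimate what the audience will experience.
function expectedFpsForVisitors(measuredFps, benchmarkScores, visitorShare) {
  const factor = performanceFactor(measuredFps, benchmarkScores);
  let expected = 0;
  for (const gpu of Object.keys(visitorShare)) {
    expected += visitorShare[gpu] * benchmarkScores[gpu] * factor;
  }
  return expected;
}

// Made-up data: benchmark scores, one measured device, visitor shares
// (summing to 1) from a WebGL-stats-like source.
const benchmarkScores = { gpuA: 100, gpuB: 50, gpuC: 25 };
const measuredFps = { gpuA: 60 };
const visitorShare = { gpuA: 0.2, gpuB: 0.5, gpuC: 0.3 };
```

With these numbers the factor is 0.6 fps per benchmark point, giving an expected frame rate of about 31.5 fps across the visitor population. The fewer reports feeding the visitor-share table, the noisier this estimate gets, which is exactly the statistical-uncertainty problem described above.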