For a long time, ideas have been floating around about exposing some kind of performance metrics to content. I used to regard this as a relatively low priority, but I have to agree that with the advent of software rendering fallbacks it is becoming much more important: if you take an app that defaults to WebGL and has a non-WebGL fallback (examples: Angry Birds, Google MapsGL), it is quite conceivable that the non-WebGL fallback will often be preferable to running WebGL on a software renderer.
I could live with a one-bit "this is a software renderer" flag, but reluctantly. "This is a software renderer" is hard to define in a future-proof way, and it masks the reality of hardware-accelerated implementations that fall back to slow software paths for certain tasks, which is a problem for real WebGL applications today.
So, if possible, I would rather have some rough performance metrics, as has been discussed recently. Something like the Windows Experience Index (as Ken proposed recently), but designed by us, so that we control it and can make sure it is implemented consistently across browsers, OSes, etc. Like the WEI, we should strike a balance between too few metrics and too many; somewhere between 2 and 4 scalar metrics seems like the right compromise. How about 3 metrics, roughly measuring the following, rendering to a framebuffer at the resolution of the client device's display:
1. performance with a typical simple vertex shader + simple fragment shader
2. performance with a complex vertex shader with vertex shader texture access + simple fragment shader
3. performance with a simple vertex shader + complex fragment shader
The browser could cache benchmark results and invalidate them when something important changes (e.g. a driver update).
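To make the idea concrete, here is a sketch of how content might consume three such scores. Nothing below is part of the WebGL spec: the metric names, the 0-10 scale, and the thresholds are all invented for illustration; only the decision structure matters.

```javascript
// Hypothetical: assume the browser exposed three benchmark scores on a
// 0-10 scale, one per metric in the proposal above. The names and
// thresholds here are illustrative guesses, not spec values.
function chooseRenderPath(metrics) {
  const { simpleShading, vertexTextureFetch, complexFragment } = metrics;
  if (simpleShading < 2) {
    // Even trivial shaders are slow: likely a software renderer,
    // so a non-WebGL fallback will probably give a better experience.
    return "canvas2d-fallback";
  }
  if (vertexTextureFetch < 4 || complexFragment < 4) {
    // The GPU is usable but weak at heavier work: stay on WebGL,
    // but dial down the effects.
    return "webgl-low-quality";
  }
  return "webgl-full-quality";
}
```

The point of keeping this a pure function of a few scalars is that an app can make the WebGL-versus-fallback decision up front, without running its own benchmarks; for example, `chooseRenderPath({simpleShading: 8, vertexTextureFetch: 7, complexFragment: 9})` returns `"webgl-full-quality"`.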
Regarding your other question about encouraging the user to update drivers: I'd let the application do that if it really wants to. The typical Firefox user either doesn't know what a driver is, doesn't want to be bothered about it, wouldn't know how to update it, or can't anyway because their OEM has locked down driver updates. The application might know more about its users than we do; e.g. a hardcore 3D shooter might be able to assume that its users care about graphics drivers. What we should do to help is implement the webglcontextcreationerror event; I feel bad that we still haven't done it.
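The webglcontextcreationerror event is defined in the WebGL spec (it fires on the canvas when context creation fails, carrying a statusMessage); the handler below is a sketch of how content could use it. The DOM access is guarded so the pure helper can also run outside a browser.

```javascript
// Turn the event's statusMessage into something presentable to the user.
// statusMessage may be the empty string if the browser has no detail.
function describeCreationFailure(statusMessage) {
  return statusMessage
    ? "WebGL is unavailable: " + statusMessage
    : "WebGL is unavailable (no further detail from the browser).";
}

if (typeof document !== "undefined") {
  const canvas = document.createElement("canvas");
  canvas.addEventListener("webglcontextcreationerror", function (e) {
    console.warn(describeCreationFailure(e.statusMessage));
  }, false);
  const gl = canvas.getContext("webgl");
  if (!gl) {
    // Context creation failed; switch to the non-WebGL fallback here.
  }
}
```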
Firefox and Chrome implement driver blacklists for WebGL to prevent buggy drivers from crashing the user's computer. I've read this is almost exclusively due to drivers being out of date, so updating the drivers ought to fix the problem. More recently, I've read about Google and Mozilla moving to add software-rendered WebGL implementations so that users with blacklisted drivers can still see content.
For real-time games, software renderers really don't make for a good playing experience on common mid- to low-end machines. Poor performance is one of the most common criticisms we've heard about our HTML5 game engine, and it comes only from users who get software rendering. Some combinations of game and machine result in unplayable performance. Often, if we get the user to update their driver, the GPU gets used and the game runs excellently. It's frustrating that players may have a bad experience when they actually have the hardware to run the game really well, but simply lack a decent, up-to-date driver.
While software rendering is one way to solve the problem, it seems worth trying to solve it by getting users to update their drivers instead. That is more complicated for the user, but has a better end result. Browsers could prompt users to upgrade their driver when it's blacklisted, but depending on the content that may be unnecessary. There's no good way for the content itself to detect software rendering (and the term is vague anyway, since some implementations do part of their processing in software). However, if the fact that the user's graphics driver is on the browser's blacklist were exposed to content as a simple flag, the content itself could issue some kind of notification when appropriate. This exposes no hardware details, doesn't require pinning down a definition of software rendering, doesn't require running performance tests to detect it, and could result in a much better end-user experience, since users end up using the GPU they paid for.
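As a sketch of what content could do with such a flag: the attribute name below is entirely hypothetical (no such property exists in the WebGL spec today), and the helper is deliberately a pure function so the "should we bother the user?" policy is easy to reason about.

```javascript
// Hypothetical: suppose context creation succeeded via a software
// fallback and the browser exposed a boolean like gl.driverBlacklisted
// (NOT a real attribute). Content could then decide whether a driver-
// update notification is worth showing.
function shouldSuggestDriverUpdate(driverBlacklisted, contentNeedsGpu) {
  // Only nag the user when the driver is blacklisted AND the content
  // actually needs real GPU performance; a simple demo should stay quiet.
  return Boolean(driverBlacklisted && contentNeedsGpu);
}
```

A real-time game would call this with `contentNeedsGpu = true` and show its own "your graphics driver is out of date" message, while lightweight content could pass `false` and silently accept the software renderer.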
Is this something that might be suitable for the WebGL spec?