- When enabled, platform-specific WebGL capabilities (like the max number of vertex attributes, shader uniforms, ...) are forced to one combination of values that works "everywhere". If the WebGL implementation doesn't meet those requirements, WebGL context initialization currently fails (that's the first major problem IMHO).
- When navigating to github.com(!) with the feature enabled, a new popup appears warning the user about a fingerprinting attempt, which the user must allow or disallow. The first question would be: does this popup appear for all WebGL code, or only when the WebGL code does specific things (like reading back framebuffer content)? If it happens for every "proper" WebGL program, an additional user action is required, which frankly is horrible; there are already too many of those (the fullscreen and pointer-lock APIs, for instance). The browser platform should be all about getting the user quickly "into the game". If starting a WebGL demo requires more actions than installing and starting a native application, the browser platform loses its biggest appeal IMHO.
- If the WebGL capability set is forced to one normalized set across the board (mobile and desktop), how would we ever be able to take advantage of more powerful desktop GPUs? I think some variability is required, at least for feature detection to properly scale between mobile and desktop.
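To illustrate the feature-detection point: engines typically query a few capability limits at startup and pick a quality tier from them. A minimal sketch (the tier names and thresholds are invented for illustration; in a browser the values would come from `gl.getParameter(...)`):

```javascript
// Pick a quality tier from a few WebGL capability limits.
// Tier names and thresholds are made up for illustration only.
function pickQualityTier(caps) {
    if (caps.maxVertexAttribs >= 16 && caps.maxFragmentUniformVectors >= 1024) {
        return "desktop";
    }
    if (caps.maxVertexAttribs >= 16 && caps.maxFragmentUniformVectors >= 256) {
        return "mobile-high";
    }
    return "mobile-low";
}

// In a browser, `caps` would be filled from the real context, e.g.:
//   const gl = canvas.getContext("webgl");
//   const caps = {
//       maxVertexAttribs: gl.getParameter(gl.MAX_VERTEX_ATTRIBS),
//       maxFragmentUniformVectors: gl.getParameter(gl.MAX_FRAGMENT_UNIFORM_VECTORS),
//   };
```

If every implementation is forced to report one normalized capability set, this function always returns the same tier, and the extra headroom of desktop GPUs goes unused.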
What's your take on all this? Is this another example of where the 'security guys' have gone too far? ;)