Re: [Public WebGL] WebGL API request: asynchronous texture uploads
Ah, sorry, bad wording - not exactly what I meant.
It's 'disappointing' --- in my opinion --- that the 'right way' (the way that leads to the best user experience: no jank, etc.) of doing something requires so much work. That if a developer working in WebGL asks 'how do I make this perform better?', the answer is 'spend days or weeks or months rearchitecting your code' instead of 'change this one line; it's good enough for 90% of all cases'.
Making the easy way the least performant and most likely to cause issues, and then requiring a non-trivial amount of work to do it right, just encourages a proliferation of bad experiences that reflect negatively on the developers, the browser, and WebGL. That is what I mean by 'disappointing'.
Believe me, I want off-thread WebGL and resource sharing and am excited to see them land -- but I (and the software I work on) represent a small minority of web developers. Most people use three.js, Impact, Flash-like JS libraries, Emscripten, etc., and will not be able to make use of these new systems as easily as they would a much more accessible API. Nor will they spend the time to learn about the intricacies of Web Workers or transferable arrays (which still aren't supported in FF) or shared contexts or cross-thread framebuffer flipping or triple buffering of resources or whatever. I know about this stuff, you know about this stuff, but I believe most people don't and shouldn't have to unless absolutely required.
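To illustrate the kind of machinery I mean: here is a minimal sketch of the transfer semantics behind workers and transferable arrays. This is illustrative only (the 'uploadWorker' name is hypothetical); the zero-copy "move" behavior is shown with structuredClone, which has the same transfer-list semantics as worker.postMessage and runs outside a browser too.

```javascript
// Sketch: moving a texture's pixel data to a worker without copying.
// A transferable ArrayBuffer is *moved*, not copied: after the transfer,
// the sending side's buffer is detached (its byteLength becomes 0).

const pixels = new Uint8Array(256 * 256 * 4); // RGBA texture data
pixels.fill(255);

// structuredClone with a transfer list has the same move semantics as
// worker.postMessage(data, [data.buffer]):
const moved = structuredClone(pixels.buffer, { transfer: [pixels.buffer] });

console.log(moved.byteLength);         // 262144 -- the data arrived intact
console.log(pixels.buffer.byteLength); // 0 -- the original is now detached

// In a real app this would instead be (hypothetical worker name):
//   uploadWorker.postMessage({ pixels: pixels.buffer }, [pixels.buffer]);
// and the worker would decode/upload without a copy on the main thread.
```

This is exactly the subtlety I mean: forget the transfer list and you silently pay a structured-clone copy; include it and the sender's buffer is unusable afterwards. Neither failure mode is obvious to someone just using three.js.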
And of course there's great potential, with the current APIs, to implement better handling under the covers. For example, texture/buffer uploads that are not used in the same frame could be rescheduled by the browser for decode/upload/copying/etc. Copy-on-write memory could also be used to eliminate the need for most copies in well-written code. Throw in better caching of programs, better pixel pipelines to reduce the software overhead of texture processing, etc. All of these would help and require no API changes.
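The rescheduling idea above would live inside the browser, but its shape can be sketched in user space. The following is a hedged illustration under stated assumptions: 'DeferredUploader' is a made-up wrapper, and the 'gl' object here is a mock so the sketch runs anywhere; a real implementation would sit behind texImage2D itself.

```javascript
// Sketch: defer uploads that aren't needed this frame, then perform them
// in a batch right before they are actually used (e.g. next frame's draws).
// 'DeferredUploader' is illustrative, not a real API.

class DeferredUploader {
  constructor(gl) {
    this.gl = gl;
    this.pending = [];
  }
  // Record the upload instead of performing it immediately.
  queueTexImage2D(...args) {
    this.pending.push(args);
  }
  // Perform all deferred uploads just before the textures are needed.
  flush() {
    for (const args of this.pending) this.gl.texImage2D(...args);
    const count = this.pending.length;
    this.pending = [];
    return count;
  }
}

// Mock gl so the sketch runs outside a browser:
const uploadCalls = [];
const mockGl = { texImage2D: (...args) => uploadCalls.push(args) };

const uploader = new DeferredUploader(mockGl);
uploader.queueTexImage2D('TEXTURE_2D', 0 /* remaining args elided */);
uploader.queueTexImage2D('TEXTURE_2D', 0);
console.log(uploadCalls.length);  // 0 -- nothing uploaded yet
console.log(uploader.flush());    // 2 -- both uploads performed now
console.log(uploadCalls.length);  // 2
```

Done inside the browser, this would let decode and copy happen off the critical path with no change to application code at all.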
BUT as a software developer trying to ship applications, this is scary: every browser will behave differently, and it would be *impossible* to build a robust piece of software that performed well everywhere. Just look at the disparity in numbers for canvas-to-WebGL uploads via texImage2D today, and how much slower it still is than it needs to be. I'd be disappointed if the answer forevermore is to require every WebGL developer to measure every operation they use, on every browser, on every platform, every time they change something, to know whether their application will run at 5fps or 50.
So, again - I didn't mean to disparage the off-thread work - I just don't see how it helps with the issue that is the topic of this thread (and one I've been struggling with for years on all platforms, not just WebGL).
And because of that, I'd like to see discussion about adding APIs that ensure browser implementers can expose, and developers can consume, WebGL functionality in the best way possible with the least chance of failure. I'd like to at least be reasonably convinced that this way is the 'right' way before spending months or years waiting for it to be fully baked in all browsers, rewriting all of my code, and finding out that it just doesn't work.