Regarding WebNXT/WebGPU/etc.: I'm afraid it will turn out to be a long-winded, laborious process (years) to arrive at a single point of focus. Every UA vendor constantly cites a shortage of (human) resources, which suggests to me that this could seriously delay any more advanced 3D API on the web for a considerable time, as opposed to "just" using one that already exists (like OpenGL ES 3.1).

Regarding shader compile times, this is a long-discussed issue, and it boils down to a few simple things:
- There is no way to construct modular shaders in any way
- GL is locked into plain-text shaders on which it does lengthy compile cycles
- D3D is locked into an HLSL compiler which performs pretty badly

Vulkan has attempted to rectify that situation with SPIR-V, but this isn't an option in ES (because SPIR-V would just compile to ESSL and incur the same compile times), in GL (because SPIR-V isn't universally available) or in D3D (it would still go through the HLSL compiler).

What is really needed isn't so much a change on the frontend, but a fundamental change on the backends, in that:
- There needs to be a way to compose programs from many bits and pieces that are readily available and already loaded on the GPU, and that don't need further massaging (this doesn't presently exist at all)
- There needs to be a way to deliver those bits and pieces to the GPU without the driver going through lengthy compile cycles.

Unless those two points above are solved, all Web* APIs (WebGPU, WebNXT, WebGL, etc.) will suffer from lengthy compile times.

On Mon, Feb 20, 2017 at 12:48 PM, Maksims Mihejevs <email@example.com> wrote:

I personally believe it is better to work on a single API and make it the best it can be, instead of different parties jumping out and starting work on their own APIs.

For engine developers, this just complicates everything.

For app developers, if they use vanilla GPU APIs, then they target only a single platform (Apple comes up with one, Microsoft with another, and Google with a third; the rest, who knows). This scenario would be a disaster and a return to the 90s and 00s. We want to avoid repeating previous mistakes on such things.

The great thing about WebGL is that it is influenced by many parties, and they can all start a conversation, and everybody can actually make a difference.

Another major point is that currently GPU performance and the lack of very-low-level access to the GPU are not the problem with WebGL application performance.

Right now all of us in WebGL are limited by one big thing: shader compilation times. With how long it takes to compile any decent shader, rich content will never be possible on the Web. This has been raised before, and it is consistently the biggest bottleneck.

We recently made a demo to present WebGL 2.0 features in close collaboration with Mozilla (a WebGL 2.0 capable browser is required: https://playcanv.as/p/44MRmJRU/).

It downloads ~19 MB of data, but on the majority of platforms 70%+ of the load time goes to shader compilation. We did very careful profiling of each shader and made a lot of notes about how slow it is for no good reason. We simplified most of the shaders we could and pre-compile them during the loading phase. But this is very limiting, as many applications need to compile shaders on the go, and that creates huge stalls. Before we split compilation into batches across different animation frames, it stalled many browsers, rendering them unresponsive; to the user it looked like a crash.

This is insanely bad, and before we even think about WebGPU or WebNXT, we need to solve the issues that affect the Web 3D platform as a whole: make shader compilation faster.

Cheers,
Max

On 20 February 2017 at 01:29, Joshua Groves <firstname.lastname@example.org> wrote:
I've been following the WebGPU proposals and commentary closely
n/issues/1), and it seems WebGPU aims to
target several concerns which are primarily performance-based:
1. Low level access to the GPU
2. Functionality that doesn't currently exist in WebGL 2, possibly
including command queues, SPIR-V support, multi-threading by sharing
contexts across web workers, improved async support, etc.
3. A more object-oriented API or builder-like functionality to improve
efficiency in some way (e.g. fewer precondition checks)
Numerous security concerns have been raised about point 1 when
considering whether to map an API such as Vulkan directly (similar
to the mapBuffer[Range] security issue with WebGL). If the API
cannot be that low level, it would likely operate at a level closer to
the existing WebGL API.
Based on the above, I have been wondering about the future plans for
WebGL. Instead of creating a new API for WebGPU, is it feasible to
incorporate some of the proposed WebGPU concepts into WebGL to improve
performance? A complete list of proposed WebGPU functionality has not
been defined yet, but I expect this to be decided soon after a number
of WebGPU proposals have been submitted.
For example, where feasible, it may be useful to incorporate AZDO
ideas from OpenGL into WebGL through extensions. As well, SPIR-V
support has been mentioned on this mailing list before and could be a
useful addition. It looks as though async support is starting to be
added through extensions (i.e. WEBGL_get_buffer_sub_data_async). Each
of these concepts could contribute to WebGL performance improvements,
however the extent of these improvements is unclear (i.e. relative to
a new API).
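For what it's worth, WebGL 2 already allows a fence-based, non-blocking readback pattern in the spirit of that async extension: issue a fence after the GPU work, poll it each frame, and only call getBufferSubData once the GPU has signalled, so the call cannot stall the pipeline. A rough sketch (readBufferAsync is a hypothetical helper name, not part of any spec):

```javascript
// Hedged sketch: non-blocking buffer readback in WebGL 2 using fences.
// Assumes a WebGL2RenderingContext `gl` and a buffer already filled by
// prior GPU work (e.g. transform feedback or a PBO readback).
function readBufferAsync(gl, buffer, dest) {
  return new Promise((resolve, reject) => {
    const sync = gl.fenceSync(gl.SYNC_GPU_COMMANDS_COMPLETE, 0);
    gl.flush(); // ensure the fence is actually submitted to the GPU

    function poll() {
      // timeout of 0 makes this a non-blocking status check
      const status = gl.clientWaitSync(sync, 0, 0);
      if (status === gl.TIMEOUT_EXPIRED) {
        requestAnimationFrame(poll); // GPU not done yet; try next frame
      } else if (status === gl.WAIT_FAILED) {
        gl.deleteSync(sync);
        reject(new Error('clientWaitSync failed'));
      } else {
        // ALREADY_SIGNALED or CONDITION_SATISFIED: safe to read without a stall
        gl.deleteSync(sync);
        gl.bindBuffer(gl.PIXEL_PACK_BUFFER, buffer);
        gl.getBufferSubData(gl.PIXEL_PACK_BUFFER, 0, dest);
        gl.bindBuffer(gl.PIXEL_PACK_BUFFER, null);
        resolve(dest);
      }
    }
    requestAnimationFrame(poll);
  });
}
```

The same poll-a-fence structure generalizes to other "async-ify a blocking GL call" cases, which is presumably what a dedicated extension would formalize.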
I am interested in the community's thoughts on this approach and
viewpoints on the future of WebGL in general (especially with respect
to the proposed WebGPU). I have also been wondering whether
OpenGL/OpenGL ES have considered future changes in this direction. If
anyone has any insight into this, it may be useful to consider.
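To make the compile-stall point from earlier in the thread concrete, the batching approach Maksims describes (splitting shader compilation across animation frames so the page stays responsive) can be sketched roughly as follows. The function name, batch size and callback shape are my own illustrative assumptions, not PlayCanvas code:

```javascript
// Hedged sketch: compile WebGL programs a few per frame instead of all at
// once, trading total load time for responsiveness. Each driver compile/link
// call can still stall individually; batching only bounds the per-frame cost.
function compileInBatches(gl, shaderSources, batchSize, onDone) {
  const programs = [];
  let i = 0;
  function step() {
    const end = Math.min(i + batchSize, shaderSources.length);
    for (; i < end; i++) {
      const { vertex, fragment } = shaderSources[i];
      const program = gl.createProgram();
      for (const [type, src] of [[gl.VERTEX_SHADER, vertex],
                                 [gl.FRAGMENT_SHADER, fragment]]) {
        const shader = gl.createShader(type);
        gl.shaderSource(shader, src);
        gl.compileShader(shader); // the driver may still stall here...
        gl.attachShader(program, shader);
      }
      gl.linkProgram(program);    // ...and here, at link time
      programs.push(program);
    }
    if (i < shaderSources.length) {
      requestAnimationFrame(step); // yield to the browser between batches
    } else {
      onDone(programs);
    }
  }
  requestAnimationFrame(step);
}
```

This is exactly the kind of workaround that a backend-level fix (pre-compiled, composable program pieces) would make unnecessary.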