On Tue, Apr 12, 2011 at 5:05 AM, Alvaro Segura <email@example.com> wrote:
Some WebGL applications suffer
from delays in initialization due to several reasons: texture
downloads, shader compiling, and mesh data downloads.
Applications with many and/or very large textures have to wait
for them to be fully downloaded before they can be loaded into
WebGL. Usually their onload event is used. Before that, textures
are "black". It would be beneficial to support progressive
texture loading so that the user does not see a frozen scene but
instead sees some progress going on.
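For reference, the approach being described -- waiting for the full image before uploading -- looks something like this minimal sketch (function and parameter names are illustrative, not from any spec):

```javascript
// Minimal sketch of today's usual pattern: wait for the entire
// image to load, then upload it to WebGL in one shot.  Until
// onload fires, the texture has no data (it renders black).
function loadTexture(gl, url, onReady) {
  const texture = gl.createTexture();
  const image = new Image();
  image.onload = function () {
    gl.bindTexture(gl.TEXTURE_2D, texture);
    // WebGL 1 signature taking a DOM element as the source.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA,
                  gl.UNSIGNED_BYTE, image);
    if (onReady) onReady(texture);
  };
  image.src = url;
  return texture;
}
```

The whole delay the original mail complains about lives between the `image.src` assignment and the `onload` callback.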
I've hit this problem using WebGL to manipulate image resources. I'm replacing a simple <img> on a page with a <canvas> containing the same image manipulated; for example, to apply gamma adjustments. It works fairly well, but as you described, progressive loading doesn't work.
Now, there's a basic issue: you need to update the WebGL canvas as the image becomes available. If you render the entire context, that means it's drawing the entire canvas constantly as new data comes in. When a few new rows of image data arrive, you have to redraw the entire scene, since you don't know what's changed. If you just have one image this isn't a big deal--it's no different than animating a scene--but if it's just one (or several) images in a larger page it's problematic, since it'd make page loading much more expensive.
For most images, where data arrives row by row, top-to-bottom, this could be solved by exposing to scripts the region of the image which is available. That wouldn't work for interlaced PNGs and progressive JPEGs, which incrementally improve the entire image (though it would still be useful if it simply didn't work on these formats, which really aren't used all that often anymore). It would be tricky for other formats, like bottom-up BMPs, which would probably also not be worth supporting.
This would also require Progress Events support on HTMLImageElement, to report loading progress to scripts.
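To make the row-by-row idea concrete, here's a hypothetical sketch. It assumes an invented `image.decodedRows` property exposing how many scanlines have been decoded -- no such property exists; it stands in for whatever API the spec would define. The bookkeeping half is plain JavaScript; the upload half shows where texSubImage2D would fit:

```javascript
// newRowSpan() computes which band of rows still needs uploading,
// given how many rows we've already pushed to the texture and how
// many the browser has decoded so far.
function newRowSpan(uploadedRows, decodedRows) {
  // Returns {y, height} of the new band, or null if nothing new.
  if (decodedRows <= uploadedRows) return null;
  return { y: uploadedRows, height: decodedRows - uploadedRows };
}

// Hypothetical incremental upload.  image.decodedRows is invented
// for illustration.  Note that WebGL 1's element-source
// texSubImage2D uploads the whole element at an offset, so a real
// implementation would need a per-band source (e.g. an
// intermediate canvas) -- glossed over here.
function uploadNewRows(gl, image, state) {
  const span = newRowSpan(state.uploadedRows, image.decodedRows);
  if (!span) return;
  gl.bindTexture(gl.TEXTURE_2D, state.texture);
  gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, span.y, gl.RGBA,
                   gl.UNSIGNED_BYTE, image);
  state.uploadedRows = span.y + span.height;
}
```

A progress event handler would call uploadNewRows and then schedule a redraw only when the span is non-null, which addresses the "redraw everything constantly" concern above.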
Note that the same issue exists with 2d canvas. In that case, the part of the spec that defines this is the image's "fully decodable" state. WebGL doesn't actually reference the term "fully decodable"; I think it should, in the same way 2d canvas does.
It's fantastic that "interlaced" PNG and "progressive" JPEG mean the same thing, yet are opposites in video contexts.
If one tries to call texImage2D before the onload event has
fired, there is no data to load (a null image is passed),
even if part of the image has already been received. Why not
provide the partial image here, leaving black pixels where no
data has arrived? Images in the HTML page (regular <img>)
do display progressively.
This would be especially useful with progressive JPEG or
interlaced PNG, which provide a full low-quality version of the
image from just a small fraction of the complete image data.
For shader compile times we have been discussing issues and
solutions in other threads. It's true that compiling currently
freezes the browser. I don't know if it could be done in a
background worker.
Not directly, since you can't share a WebGL context--or anything at all--between a worker and the main thread. For that matter, you can't even create a WebGL context at all in worker threads--that would be nice to fix, for lots of reasons, but it's very difficult to expose DOM objects to workers, so I doubt we'll see this any time soon...
FWIW: If WebGL were accessible from worker threads, and if an extension were added to serialize and deserialize compiled shaders to allow shader caching, then that would also allow doing shader compiles in a thread.
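Purely hypothetical sketch of that idea. Neither worker-side WebGL contexts nor a program-serialization extension exist; everything marked HYPOTHETICAL below is invented for illustration. Only the caching half is concrete, as it's plain JavaScript:

```javascript
// --- worker side (HYPOTHETICAL, shown as comments) ---
// const gl = new WebGLWorkerContext();                  // HYPOTHETICAL
// const ext = gl.getExtension("WEBGL_program_binary");  // HYPOTHETICAL
// onmessage = (e) => {
//   const program = compileAndLink(gl, e.data.vs, e.data.fs);
//   postMessage(ext.serializeProgram(program));         // HYPOTHETICAL
// };

// --- main-thread side: a cache keyed by shader source, so a
// deserialized binary can be reused instead of recompiling.
function makeShaderCache() {
  const cache = new Map();
  const key = (vs, fs) => vs + "\u0000" + fs;
  return {
    get(vsSource, fsSource) {
      return cache.get(key(vsSource, fsSource)) || null;
    },
    put(vsSource, fsSource, binary) {
      cache.set(key(vsSource, fsSource), binary);
    },
  };
}
```

The main thread would check the cache first and only post uncached source pairs to the worker, so compile stalls happen off the main thread at most once per shader pair.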