
Re: [Public WebGL] Webp



On Wed, Oct 20, 2010 at 4:56 AM, Patrick Baggett <baggett.patrick@gmail.com> wrote:

Hate to ask the obvious question, but is there something wrong with PNGs or JPEGs? They both perform compression, and in the case of JPEG,  offer space/quality trade-offs. Since you've moved outside the realm of real time compression/uncompression, why not settle on those -- the browser support is great as is the tool support to create them. You can even use nifty utilities like pngcrush.

Apologies up front - this does not turn into anything actionable on the WebGL front. 

As a single data point, let me share a particular use case and its requirements, which I use to evaluate any new image format proposed for the Web (WebP, KTX, etc.).

The online virtual world "Second Life" - whose environments are densely packed mash-ups of unbounded user-generated content, which in rendering terms means high-poly models and huge textures, a worst case for a rendering engine - settled on JPEG2000 for textures for a handful of reasons:

* Both lossy and lossless compression options
* Multiple channels (RGB + A + ... - extras used for bump/normal maps)
* Multi-resolution/progressive transmission - given the first portion of the file you can decode a lower resolution version of the bitmap
* Better compression than JPEG

JPEG2000 also supports random access to areas of the image, although SL does not use this feature. 

The progressive nature is critical, since arbitrary user-generated content often uses huge textures (e.g. 512x512) for items that may nominally span only a few screen pixels (e.g. a single gem on a necklace). Being able to fetch only the first ~1 KB of the texture, then download the rest only as needed, is essential.
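As a minimal sketch of that idea (JavaScript, with made-up per-resolution byte budgets - a real JPEG2000 codestream would expose the actual offsets via its markers), a client could map on-screen size to a resolution level and issue a prefix-only HTTP Range request:

```javascript
// Sketch: pick a decode resolution for a progressively encoded texture
// based on its on-screen footprint, then fetch only that prefix.

// Smallest power-of-two texture edge that covers `screenPx` pixels.
function neededEdge(screenPx, maxEdge) {
  let edge = 1;
  while (edge < screenPx && edge < maxEdge) edge *= 2;
  return edge;
}

// Hypothetical table: bytes needed to decode each resolution level.
const LEVEL_BYTES = { 32: 1024, 64: 3072, 128: 9216, 256: 28672, 512: 90112 };

// Build an HTTP Range header requesting just the prefix for that level.
function rangeFor(screenPx) {
  const edge = neededEdge(screenPx, 512);
  const bytes = LEVEL_BYTES[Math.max(32, edge)];
  return "bytes=0-" + (bytes - 1);
}
```

So a gem covering ~20 screen pixels would fetch only ~1 KB, while zooming in later fetches the next prefix rather than a whole new file.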

For rendering this sort of user-generated content in the browser via WebGL, it would be lovely if browsers supported an image format with similar characteristics. The mechanics of asking the browser for only a particular resolution of an image would need to be designed, of course. (Unless the bits are fetched using XHR and handed to the browser using data URIs... but let's not go there for now.)

FWIW, per the KTX file format spec (http://www.khronos.org/opengles/sdk/tools/KTX/file_format_spec/), "Mipmaps are stored in order from largest size to smallest size" - unfortunately the exact opposite of what this scenario calls for. *sigh*
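To illustrate why largest-first ordering defeats prefix fetching, here is a rough sketch (assuming uncompressed RGBA8 payloads and ignoring the KTX header and per-level size fields) of where each mip level lands in such a file - the tiny mips a client would want first sit at the very end:

```javascript
// Sketch: byte offset of each mip level in a KTX-style layout where
// mipmaps are stored largest-first. Sizes are illustrative RGBA8
// payloads only; a real KTX file adds headers and length fields.
function mipOffsets(edge) {
  const offsets = [];
  let offset = 0;
  for (let e = edge; e >= 1; e = Math.floor(e / 2)) {
    offsets.push({ edge: e, offset });
    offset += e * e * 4; // RGBA8 payload for this level
  }
  return offsets;
}
```

For a 4x4 texture the 1x1 mip starts at byte 80 of an 84-byte payload, i.e. a prefix fetch must pull essentially the whole file before reaching the lowest resolution.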

Specifically regarding PNG and JPEG as options:

* PNG supports alpha channels but not lossy compression.
* JPEG supports lossy compression but not alpha channels.

While a from-scratch system could push the PNG-vs-JPEG choice to the content author, that limits flexibility and is problematic for existing content (e.g. content already stored as DXT, ETC, JPEG2000, ...).

Specific resolutions could be requested by the client as distinct resources (e.g. GET /assets/texture1234.png?w=128&h=128), which would work in existing browsers and could take advantage of HTTP caching. Relative to a progressive image format, this is inferior in two scenarios: 

(1) Gradually zooming in on the object so that it spans progressively more screen pixels, requesting the 1x1, 2x2, 4x4, 8x8, ... NxN resolutions of the texture in turn. With a non-progressive format, each earlier download is discarded, wasting network resources.
(2) Early rendering of an object that spans many screen pixels - similar to the use case that interlaced GIFs served in the 1990s, when bandwidth was scarce. In a virtual world it is often preferable to render e.g. a mottled brown texture for a wooden surface while waiting for the highly detailed wood texture to download, refining the presentation as more bits arrive. With a non-progressive image format, this is not possible without extra fetches, which delay the arrival of the final full-resolution texture.
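A back-of-the-envelope sketch of scenario (1), with purely illustrative per-level byte counts, shows the waste: with distinct resources every level is downloaded in full and then discarded, while a progressive stream's total cost is just its largest prefix:

```javascript
// Illustrative sizes for the 32..512 resolution levels (not measured).
const levelBytes = [100, 350, 1200, 4200, 15000];

// Distinct resources: every level is a full, separate download.
function distinctTotal(levels) {
  return levels.reduce((sum, b) => sum + b, 0);
}

// Progressive stream: each level reuses earlier bytes, so the total
// is only the largest prefix fetched.
function progressiveTotal(levels) {
  return levels[levels.length - 1];
}
```

With these made-up numbers the distinct-resource approach transfers 20850 bytes against the progressive stream's 15000 - and the gap grows with every intermediate zoom level requested.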

-- Josh