
Re: [Public WebGL] Texture Compression in WebGL

1) Prefer an existing compressed texture format. The problem with compressed texture formats is that they all make different size/quality tradeoffs. It isn't so much that you have to manage many versions, but that you also have to qualify each of them and their different kinds of artifacts.

I understand this is mostly not a technical problem, but maybe the IP issues are resolvable. That being said, I would rather have to query for supported formats than rely on a standard, novel format defined just for WebGL.
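To make the "query for supported formats" idea concrete, here is a minimal sketch of format selection at runtime. The extension names are the real WebGL compressed-texture extensions, but the supported-extension list is a hypothetical hard-coded result; in a real page it would come from gl.getSupportedExtensions().

```javascript
// Preference-ordered list of WebGL compressed texture extensions.
const PREFERRED_FORMATS = [
  "WEBGL_compressed_texture_s3tc",  // DXT1/DXT3/DXT5, common on desktop
  "WEBGL_compressed_texture_pvrtc", // PVRTC, common on iOS-class GPUs
  "WEBGL_compressed_texture_etc1",  // ETC1, common on Android-class GPUs
];

// Pick the first preferred format the implementation reports, or null
// to signal falling back to uncompressed RGB(A) uploads.
function pickCompressedFormat(supportedExtensions) {
  for (const name of PREFERRED_FORMATS) {
    if (supportedExtensions.includes(name)) return name;
  }
  return null;
}

// Hypothetical desktop-like extension list for illustration:
const supported = ["OES_texture_float", "WEBGL_compressed_texture_s3tc"];
console.log(pickCompressedFormat(supported)); // "WEBGL_compressed_texture_s3tc"
```

The point is that the asset pipeline, not the standard, decides which encodings to ship, and the page negotiates at load time.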

2) +1 to Thatcher's transcoding. One possible advantage here is that if it is part of WebGL, then the transcoding could actually be hardware accelerated (this could take many forms). I believe Rage has the option to use CUDA to transcode compressed (JPEG XR?) textures on-the-fly.

As Thatcher points out, there are lots of details to work out, and I'm not really sure how these get exposed to the developer. For example:

* How much time should be spent transcoding? For certain formats, like DXT1, you can spend a lot of time trying to find the highest-quality encoding. I suppose a good way to do this is with hints.

* Is the expectation that the transcoding goes from RGB to RGB? You can save on color-space conversion work if you support keeping things in the encoded color space. There are more complex cases. It is possible to use DXT5's alpha as a luminance channel, and RGB as chrominance. This is generally a higher-quality format than DXT1.
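The DXT5 trick above hinges on a color-space step before encoding. A minimal sketch of that step, using the standard integer-friendly YCoCg transform (my choice for illustration; the post doesn't name a specific transform): luma goes to DXT5's high-quality alpha channel, and the two chroma channels, biased by 128 to fit an unsigned byte, go to the RGB endpoints. The packing into actual DXT5 blocks is omitted.

```javascript
// Clamp to the unsigned-byte range.
const clamp = (v) => Math.min(255, Math.max(0, v));

// RGB -> YCoCg: y is luma (store in DXT5 alpha), co/cg are chroma
// (store in the DXT5 color endpoints), biased by 128.
function rgbToYCoCg(r, g, b) {
  const y  = clamp(Math.round(r / 4 + g / 2 + b / 4));
  const co = clamp(Math.round((r - b) / 2) + 128);
  const cg = clamp(Math.round(-r / 4 + g / 2 - b / 4) + 128);
  return [y, co, cg];
}

// Inverse transform, e.g. for the shader that samples the texture.
function yCoCgToRgb(y, co, cg) {
  const coC = co - 128, cgC = cg - 128;
  const tmp = y - cgC;
  return [clamp(tmp + coC), clamp(y + cgC), clamp(tmp - coC)];
}
```

The round trip is lossy at the extremes because of the byte bias, but that loss is small next to the DXT quantization itself.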

* What about chroma subsampling? Or more generally, encoding different channels at different resolutions? JPEG (optionally) and WebP start off this way, and some platforms even support chroma subsampled textures. You could give the option to encode luminance and chrominance at different resolutions by using multiple textures (e.g. luminance as full-resolution 8-bit, chrominance at quarter-resolution compressed). You could even set their texture filtering differently through texture LOD bias, or anisotropic filtering if it were supported.
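As a back-of-the-envelope sketch of the multi-texture layout just described: full-resolution 8-bit luminance plus quarter-resolution two-channel chrominance, compared with a plain RGBA8 texture. Sizes are for the base level only (no mipmaps), and the layout is illustrative, not prescriptive.

```javascript
// Bytes for a single mip level at the given pixel size.
function textureBytes(width, height, bytesPerPixel) {
  return width * height * bytesPerPixel;
}

// Full-res 8-bit luminance plus quarter-res (half in each dimension)
// two-channel chrominance, e.g. LUMINANCE + LUMINANCE_ALPHA textures.
function lumaChromaBytes(width, height) {
  const luma = textureBytes(width, height, 1);
  const chroma = textureBytes(width / 2, height / 2, 2);
  return luma + chroma;
}

const rgba = textureBytes(1024, 1024, 4);   // 4 MiB
const split = lumaChromaBytes(1024, 1024);  // 1.5 MiB
console.log(rgba / split);                  // ~2.67x smaller
```

Compressing the chroma texture with DXT1 on top of the subsampling would shrink it further still.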

This last bit may be a bit aggressive, but it is what I would think of if I were writing a native app.

3) Normal map compression.


On Sun, Oct 23, 2011 at 6:37 AM, Cedric Vivier <cvivier@mozilla.com> wrote:


On Sun, Oct 23, 2011 at 15:30, Kornmann, Ralf <rkornmann@ea.com> wrote:
> 1.       Reducing the download time for the players. We still have many
> customers with slower (~1 Mbit/s) connections.
> 2.       Reducing the bandwidth cost. This may not be a problem for many
> sites today, but if we deliver game content to millions of people it
> would require quite an amount of bandwidth, which is not free.

Please, let's not get confused here.

Texture compression does not have much to do with improving network
download times; it can even be the opposite (without counter-measures
such as transferring with gzip content encoding).

A texture delivered uncompressed to the GPU (i.e. loaded from a PNG or
JPEG) usually achieves much better compression ratios over the wire
than a regular ETC/DXT compressed texture (which is fixed at about 6:1).
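The arithmetic behind that 6:1 figure, as a quick sketch: DXT1 stores each 4x4 block of pixels in 8 bytes (0.5 bytes per pixel), versus 3 bytes per pixel for uncompressed 24-bit RGB, so the ratio is fixed regardless of image content. JPEG and PNG, by contrast, adapt to the content and routinely do much better over the wire.

```javascript
// DXT1 size: 8 bytes per 4x4 block, rounding partial blocks up.
function dxt1Bytes(width, height) {
  const blocks = Math.ceil(width / 4) * Math.ceil(height / 4);
  return blocks * 8;
}

const w = 1024, h = 1024;
const rgbBytes = w * h * 3;              // uncompressed 24-bit RGB
console.log(rgbBytes / dxt1Bytes(w, h)); // 6 — the fixed DXT1 ratio
```

A JPEG of the same 1024x1024 image is often a tenth of the RGB size or less, which is why shipping DXT files directly can *increase* download size.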

Of course, for _GPU memory_ bandwidth and upload time, on the other
hand, compressed textures help a lot.

