[Public WebGL] Alternative compression scheme.
- To: public webgl <email@example.com>
- Subject: [Public WebGL] Alternative compression scheme.
- From: Steve Baker <firstname.lastname@example.org>
- Date: Fri, 03 Dec 2010 00:41:18 -0600
- List-id: Public WebGL Mailing List <public_webgl.khronos.org>
- Sender: email@example.com
I don't know whether there is any interest in this - but I guess I could
suggest another lossy compression scheme for WebGL textures based on
simple dictionary compression.
The reason I propose it in the teeth of so many other lossy image
compression schemes such as JPEG, WebP, DXT and ETC is that this scheme
can be decoded in the shader - either as a one-time step after download
or on-the-fly as the image is rendered.
The idea is to chop your image into (say) 4x4 pixel chunks - and make a
list of them. The file itself consists of the list of 4x4 chunks (the
dictionary) - plus a 2D array of indices that point into that dictionary.
The size of the resulting file is (16*DictionarySize*Sizeof(Pixel)) +
((ImageSize/16)*log2(DictionarySize)/8) bytes - the dictionary itself,
plus one log2(DictionarySize)-bit index for each 4x4 chunk of the
image - which is (in practice) dominated by the first term - the space
consumed by the dictionary.
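To put rough numbers on that - a sketch (my arithmetic, not from the
post) estimating the file size for a hypothetical 512x512 RGBA8 image
with a 1024-entry dictionary, counting 16 pixels per dictionary entry
and one log2(DictionarySize)-bit index per 4x4 chunk:

```python
from math import log2

def compressed_size_bytes(image_pixels, dict_size, bytes_per_pixel):
    """Estimated file size for the 4x4-chunk dictionary scheme.

    The dictionary holds dict_size chunks of 16 pixels each; the index
    array holds one log2(dict_size)-bit index per 4x4 chunk of the image.
    """
    dictionary = 16 * dict_size * bytes_per_pixel
    indices = (image_pixels / 16) * log2(dict_size) / 8  # bits -> bytes
    return dictionary + indices

# 512x512 RGBA8 image, 1024-entry dictionary:
size = compressed_size_bytes(512 * 512, 1024, 4)
raw = 512 * 512 * 4
print(size, raw / size)  # ~12x smaller than the raw image
```

With these (made-up) numbers the dictionary costs 64KB against 20KB of
indices, so the dictionary does indeed dominate.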
To avoid having to invent new file formats - you can store this as two
separate maps - one containing the dictionary and the other containing
the index array. Both of those could be stored as .PNG or whatever.
The format is able (depending on the encoder) to:
a) Losslessly compress images with areas of solid color, repeating
pattern or zero alpha by simply recognizing identical chunks in the
dictionary and merging them...or...
b) Lossily compress arbitrary images by eliminating groups of chunks that
are sufficiently similar that they can be replaced by the average of
the group.
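A toy encoder along those lines (a greedy sketch of mine, not the
author's code - exact matching gives the lossless case (a), and a
mean-squared-error threshold gives the lossy case (b); a real encoder
would also replace each entry with the average of the chunks merged
into it):

```python
def encode(chunks, threshold=0.0):
    """Greedy dictionary build: each chunk (a flat tuple of pixel
    values) either reuses the closest existing dictionary entry within
    `threshold` mean-squared error, or starts a new entry.
    threshold=0 is the lossless, exact-match case."""
    dictionary, indices = [], []
    for chunk in chunks:
        best, best_err = None, None
        for i, entry in enumerate(dictionary):
            err = sum((a - b) ** 2 for a, b in zip(chunk, entry)) / len(chunk)
            if err <= threshold and (best_err is None or err < best_err):
                best, best_err = i, err
        if best is None:
            dictionary.append(list(chunk))
            best = len(dictionary) - 1
        indices.append(best)
    return dictionary, indices
```

This is also why encoding is so costly: every chunk is compared against
every dictionary entry, and a good encoder searches far harder than this.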
This scheme is very costly to encode (finding the smallest set of
sufficiently similar chunks is painful) - but it's super-cheap to
decode. It has an arbitrary quality versus size trade-off that you can
set either to produce constant compression ratios (by fixing the size of
the dictionary so it contains the N least similar chunks) or constant
quality metrics (by allowing the dictionary to be of any size and
limiting the degree of dissimilarity you allow when merging different
chunks). It can generally achieve lossy compression rates of around
8:1 to maybe 20:1 with reasonable image quality...which is much better
than either ETC or DXT.
But the most important thing is that you can decode it inside the shader
- on-the-fly if necessary. That means that you can save texture map
memory as well as download bandwidth and cache space.
To do that, you write your dictionary into one texture and the 2D index
array into another - and do a texture fetch to the appropriate chunk
index and a dependent texture read to fetch the actual texel(s) out of
the appropriate chunk in the dictionary texture.
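In the shader that is one fetch into the index texture followed by a
dependent fetch into the dictionary texture. A CPU reference of the same
address arithmetic (my sketch, with a made-up layout: the dictionary as
a list of row-major 4x4 tiles, the index map row-major in chunks):

```python
def decode_texel(x, y, indices, index_width, dictionary):
    """Fetch texel (x, y) of the original image.

    `indices` is the per-chunk index map, row-major and index_width
    chunks wide; `dictionary` is a list of 16-pixel chunks, each stored
    as a row-major 4x4 tile.  The two lookups mirror what a shader
    would do: an index-texture fetch, then a dependent dictionary fetch.
    """
    chunk = indices[(y // 4) * index_width + (x // 4)]   # first fetch
    return dictionary[chunk][(y % 4) * 4 + (x % 4)]      # dependent fetch

# An 8x4 image made of two chunks: all zeros, then a ramp 0..15.
dictionary = [[0] * 16, list(range(16))]
indices = [0, 1]                 # 2 chunks wide, 1 chunk tall
print(decode_texel(5, 2, indices, 2, dictionary))  # prints 9
```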
The decoding is a lot slower than hardware texture compression
(especially if you want filtering) - but the savings on texture memory
make it useful - and it doesn't depend on having hardware support.
With 4x4 chunks, you can fairly easily adapt the shader-based decoder to
do a couple of levels of MIPmapping by MIPmapping the dictionary image
to make 2x2 and 1x1 chunks. If you need lower levels of MIPmap than
that - you can either make a new dictionary/index image for every 3
levels of MIP - or just store the lower MIP levels uncompressed
because they contribute so little to the overall file size.
I've used it in a couple of projects in the past - for compressing
multi-spectral satellite photography, for example - and it works
surprisingly well...provided that you aren't too fill-rate limited and
can stand the extra shader complexity.
The big advantage over ANY of the other schemes is that it does an
excellent job of compressing images with alpha planes, HDR images,
normal maps, floating point maps and other weird kinds of
texture...none of DXT, ETC, JPEG or WebP can do even a half-assed job of
any of those things.
When you have a lot of more or less similar images - you can even do the
trick of compressing all of the images into a common dictionary and just
having separate index maps for each of the original images.
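The shared-dictionary trick is just the greedy build above with one
dictionary threaded through several encodes (again my sketch, not the
author's code):

```python
def encode_shared(images_chunks, threshold=0.0):
    """Compress several chunk lists into one common dictionary,
    returning the dictionary plus one index map per image."""
    dictionary, index_maps = [], []
    for chunks in images_chunks:
        indices = []
        for chunk in chunks:
            # First dictionary entry within `threshold` MSE, if any.
            match = next((i for i, e in enumerate(dictionary)
                          if sum((a - b) ** 2 for a, b in zip(chunk, e))
                          / len(chunk) <= threshold), None)
            if match is None:
                dictionary.append(list(chunk))
                match = len(dictionary) - 1
            indices.append(match)
        index_maps.append(indices)
    return dictionary, index_maps
```

The more the images resemble each other, the more the dictionary cost -
the dominant term - is amortized across them.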