Re: [Public WebGL] ETC texture compression.
Oliver Hunt wrote:
> I wasn't suggesting that the UA make a decision for all textures, I was meaning that when loading a texture you could tell the UA that the texture could be compressed, and whether you're willing to accept lossy compression
Yeah - but I really do need to know what KIND of lossy compression. For
example, if I have a normal map and the system is planning to do ETC1
compression - then I'm definitely going to say "No compression!" - but
if it's going to do S3TC (aka DXT1) then "Maybe" - and if it's able to
do DXT3/5 then I'll shuffle the blue component into the alpha and
happily use that.
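The blue-into-alpha shuffle can be sketched as a pre-pass over the pixel data before handing it to a DXT5 compressor. This is only an illustration: the function name, and the assumption that the normal map arrives as RGBA8 data (e.g. from canvas getImageData), are mine.

```javascript
// Sketch: move a normal map's blue channel into alpha so that DXT5's
// separately-compressed (and higher-quality) alpha block preserves it.
// The shader then reads that component back from alpha. Assumes RGBA8
// pixel data in a Uint8Array.
function shuffleBlueIntoAlpha(rgba) {
  const out = new Uint8Array(rgba.length);
  for (let i = 0; i < rgba.length; i += 4) {
    out[i]     = rgba[i];     // R stays
    out[i + 1] = rgba[i + 1]; // G stays
    out[i + 2] = 0;           // B dropped (the DXT colour block mangles it)
    out[i + 3] = rgba[i + 2]; // old B stored in the alpha block
  }
  return out;
}
```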
Also, if the underlying hardware is going to unconditionally crunch my
8/8/8 bit PNG to 5/6/5 or something, then ETC1 is only compressing the
texture 4x better than that uncompressed map - so perhaps then I'd
prefer to keep my quality higher and not bother with compression.
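Checking the arithmetic (ETC1 stores 64 bits per 4x4 block, i.e. 4 bits per pixel - a figure the format defines, the rest is just division):

```javascript
// Bits per pixel for the formats under discussion.
const BPP = {
  rgb888: 24,     // 8/8/8 PNG source
  rgb565: 16,     // driver-crunched 5/6/5
  etc1: 64 / 16,  // 64 bits per 4x4 block = 4 bpp
};

// ETC1 vs the original 24-bit map: 6x smaller.
const vsOriginal = BPP.rgb888 / BPP.etc1;
// ETC1 vs a map the driver already crunched to 5/6/5: only 4x smaller.
const vsCrunched = BPP.rgb565 / BPP.etc1;
```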
This whole business is very touchy...and if you have to spend any amount
of time working with passionate 3D artists, you'll understand that the
nature of the compression is very important to them. They'll want to
play around with image filters to pre-enhance edges and stuff like that
if they know that their image is going to be compressed with a certain
algorithm. They understand that you can get 4x image "compression"
just by halving the resolution of the map and storing it uncompressed -
and for some textures that produces much better results than (for
example) DXT3/5 compression...but for others, much worse.
Sadly, the application really needs full access to the facts and full
control here. Each compression algorithm (since none of them are
lossless) has different problems and benefits. It's not nice that
texture loaders have to be so careful - but incurring the wrath of your
artists is no fun!
The idea of hiding the underlying compression scheme from me in the
interests of uniformity and portability will just drive me to trying to
find ever more devious and hackish ways to extract that data from the
system...so why not just come out and tell me what you're doing under
the hood so I can make an informed decision? I guess it's OK to have
a simplified interface for people who don't know or care - but there
really needs to be a way to query exactly what algorithm the GPU is
sticking us with this time - or my artists may well form a lynch mob!
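A loader could make that informed decision by probing which compressed formats the implementation actually exposes. The extension names below are real WebGL extension strings; the helper itself and its return values are purely illustrative.

```javascript
// Sketch (hypothetical helper): given a WebGL context, decide how to
// treat a normal map based on the compression the system can offer.
function pickNormalMapStrategy(gl) {
  if (gl.getExtension('WEBGL_compressed_texture_s3tc')) {
    // DXT5 available: shuffle blue into alpha and compress.
    return 'dxt5-blue-in-alpha';
  }
  if (gl.getExtension('WEBGL_compressed_texture_etc1')) {
    // ETC1 has no alpha block and mangles normal maps - don't compress.
    return 'uncompressed';
  }
  return 'uncompressed';
}
```

The same probe generalises: an application can branch on each extension (or on `gl.getParameter(gl.COMPRESSED_TEXTURE_FORMATS)`) and ship per-format pre-processed assets.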
> (I honestly have no idea if there are any lossless compressed texture formats)
I very much doubt it...and it's somewhat unlikely that anyone will ever
implement one in hardware because the mapping of addresses in the
original texture to addresses in a lossless compressed map would not
likely be a simple calculation. Lossless compression is (more or less
by definition) something that compresses some parts of the image better
than others. (If it could guarantee to compress every image by the
same amount - then it could also compress arbitrary random data by the
same amount - and a simple counting argument shows that no lossless
scheme can possibly do that!)
With things like ETC1 and S3TC, there is a precise mapping from original
pixel coordinates to texture data addresses. In ETC1, for example,
every 4x4 pixel block compresses to precisely 64 bits of binary data -
so the graphics hardware can easily look up any texel in the map with
just one 64 bit memory access. If you used something like run-length
encoding or zlib compression (such as PNG uses), then the hardware would
have a very hard time figuring out where in memory to fetch the texels
for coordinate (0.1234, 0.5678).
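That fixed-rate property is what makes the address calculation trivial. A sketch of the closed-form lookup (the function name is mine; the 8-bytes-per-4x4-block figure is the ETC1/DXT1 rate described above):

```javascript
// Byte offset of the compressed block containing texel (x, y), for a
// format storing 64 bits (8 bytes) per 4x4 block. One multiply-add -
// so the GPU can fetch any texel with a single memory access.
function blockByteOffset(x, y, widthInTexels) {
  const blocksPerRow = Math.ceil(widthInTexels / 4);
  const blockX = Math.floor(x / 4);
  const blockY = Math.floor(y / 4);
  return (blockY * blocksPerRow + blockX) * 8;
}
```

For texture coordinate (0.1234, 0.5678) on a 256x256 map, the texel is roughly (31, 145), and its block's offset falls straight out of the formula - no scanning through earlier data, as a run-length or zlib scheme would require.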
Texturing a triangle is a somewhat random-access type of processing -
and traditional image compression is typically linear...start
uncompressing at the top of the file - work down to the bottom.