Re: [Public WebGL] Gamma correction and texImage2D/texSubImage2D
On Sun, Sep 5, 2010 at 10:46 AM, Steve Baker <firstname.lastname@example.org> wrote:
> Thatcher Ulrich wrote:
>> On Sat, Sep 4, 2010 at 9:45 PM, Steve Baker <email@example.com> wrote:
>>> I agree that there are undoubtedly going to be issues with roundoff
>>> error and precision. In the short term, I'll do what I've always done -
>>> instructed my artists to paint textures in 'linear' color space and to
>>> have their monitors gamma color-calibrated every six months to ensure
>>> that they're seeing those textures optimally.
>> What format & depth are your artists saving their work in? And then
>> what texture format(s) do you convert to? How are you planning to get
>> this data into WebGL?
> Well - I have to answer this two ways - because I'm doing two jobs.
> * In my (paying) day job - I am the graphics lead for Total Immersion
> (www.totimm.com) - but I saw the same practices when I worked for Midway
> Games and in many games & simulation companies before that. I'm writing
> D3D (yuk!) graphics engines for 'serious games' for things like training
> firefighters and the pilots of unmanned drone aircraft and such. The
> artists use 8/8/8 or 8/8/8/8 PNG or TIFF (some GIS terrain tools use it)
> with a gamma of 1.0 (ie, linear color space) and we convert to DDS for
> loading into the game.
You really store 8/8/8/8 with gamma=1.0? For diffuse textures, right?
I'm surprised by that. Don't you find that you get objectionable
banding in darker areas of images?
I sanity checked with some industry friends, and the consensus so far
is sRGB for diffuse maps. The equivalents of EXT_texture_sRGB and
EXT_framebuffer_sRGB are used to get proper sampling. There are
varying opinions on how to encode normal maps and specular maps, but
some advocate linear. Environment and light maps may benefit from
higher dynamic range so there are a variety of things used: RGBE
(exponential), RGBM (linear with a separate coefficient in A), two
maps combined, other stuff I've never heard of, etc.
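For anyone who hasn't met RGBM: the alpha channel carries a per-texel
scale so that 8-bit RGB can hold HDR values. A decode sketch (the range
constant 6.0 and all the names are my own choices, not anyone's
production format):

const rgbmFragmentShader = `
  precision mediump float;
  uniform sampler2D uLightMap;  // RGBM-encoded HDR light map
  varying vec2 vUv;

  vec3 decodeRGBM(vec4 rgbm) {
    // RGB holds color / (6.0 * A); A holds the scale. Reconstructing
    // gives linear HDR values up to 6.0 from an 8-bit texture.
    return rgbm.rgb * rgbm.a * 6.0;
  }

  void main() {
    vec3 hdr = decodeRGBM(texture2D(uLightMap, vUv));
    gl_FragColor = vec4(hdr, 1.0);  // tone mapping omitted
  }
`;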
That consensus is corroborated by the GDC slides from John Hable
(Naughty Dog, EA) concerning the Uncharted 2 lighting.
There are a lot of slides in that deck, but note especially:
* slide 26, showing egregious banding artifacts of using 8-bit linear color
* slide 30, showing the benefit of linear lighting
* slide 36, showing that you can explicitly linearize sRGB in your
shader (sketched in code just after this list)
* slide 37, saying that it's better if you get the hardware to do it
with D3DSAMP_SRGBTEXTURE and D3DRS_SRGBWRITEENABLE (the equivalents of
EXT_texture_sRGB and EXT_framebuffer_sRGB)
* slides 46-49 where he talks about which gamma to use for which kinds
of texture maps. (sRGB for Diffuse, linear for Normal Maps, linear or
sRGB for Specular and Ambient Occlusion)
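For the slide-36 approach, the explicit version looks roughly like
this. It is only a sketch: pow(c, 2.2) stands in for the exact
piecewise sRGB curve, the lighting is a placeholder, and the names are
mine.

const manualGammaShader = `
  precision mediump float;
  uniform sampler2D uDiffuse;  // 8-bit sRGB-encoded diffuse map
  varying vec2 vUv;

  void main() {
    vec3 srgb = texture2D(uDiffuse, vUv).rgb;
    vec3 lin = pow(srgb, vec3(2.2));   // slide 36: decode to linear
    vec3 lit = lin * 0.8;              // light in linear space (placeholder)
    gl_FragColor = vec4(pow(lit, vec3(1.0 / 2.2)), 1.0);  // re-encode
  }
`;

The slide-37 caveat applies, though: texture2D has already filtered the
encoded values before the pow runs, so sampler-level sRGB support still
wins.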
Note that none of this implies that any manipulation of the color
components should be done by the graphics API in the PixelStore
pipeline. It is assumed that devs will feed the game engine the raw
data that the renderer is expecting to consume.
So I stand by my position that WebGL should, by default, not mess with
anybody's color components. If you want to add some explicit
parameters to do some manipulation, then feel free, but please don't
call it "correction" and please don't make it a default. I think it's
an unnecessary distraction for this group, and likely to lead some
people astray, when what would be much more helpful is getting support
for EXT_texture_sRGB and EXT_framebuffer_sRGB sometime soon after
WebGL 1.0 ships.
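To make the ask concrete, here is a hypothetical sketch of what such an
extension might look like from the WebGL side. Nothing like this is
specced today; the extension name and the SRGB_ALPHA_EXT token simply
echo the desktop extension:

function createSRGBTexture(gl: WebGLRenderingContext,
                           image: HTMLImageElement): WebGLTexture | null {
  // Hypothetical: no WebGL binding of EXT_texture_sRGB exists yet.
  const ext = gl.getExtension("EXT_texture_sRGB");
  if (!ext) return null;  // fall back to shader-side linearization

  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  // Sampling returns linearized values, and the hardware filters the
  // decoded (linear) data rather than the encoded bytes.
  gl.texImage2D(gl.TEXTURE_2D, 0, ext.SRGB_ALPHA_EXT, ext.SRGB_ALPHA_EXT,
                gl.UNSIGNED_BYTE, image);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
  return tex;
}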
Also, I think it would be a disaster for WebGL to spec any framebuffer
format other than sRGB. Linear 8/8/8/8 in particular is IMO harmful
as a final output buffer, because it will cause banding in dark areas.
Anybody who tries to implement a photo gallery app in WebGL will pull
their hair out. In the future when browsers are fancier it would be
nice to have more options, but sRGB is the right way to go for now.
This course has the benefit of least surprise for existing OpenGL
programmers, and least deviation for WebGL standardizers and
implementers.
If I'm not mistaken, this would allow Steve to keep his existing
pipeline, though he would need a linear-to-sRGB operation at the very
end of his pipeline. (Which he needs now anyway on non-WebGL
platforms, so it's not an extra hardship.)
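For concreteness, that final linear-to-sRGB step can be a single
full-screen pass over the rendered scene. A sketch, with all the names
mine, using the exact piecewise sRGB encode rather than the 1/2.2
approximation:

const presentShader = `
  precision mediump float;
  uniform sampler2D uScene;   // linear-space render target
  varying vec2 vUv;

  // Exact sRGB encode (the piecewise curve).
  float toSRGB(float c) {
    return c <= 0.0031308 ? 12.92 * c
                          : 1.055 * pow(c, 1.0 / 2.4) - 0.055;
  }

  void main() {
    vec3 lin = texture2D(uScene, vUv).rgb;
    gl_FragColor = vec4(toSRGB(lin.r), toSRGB(lin.g), toSRGB(lin.b), 1.0);
  }
`;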
As less important issues, but perhaps useful for standardization, it
would be nice to get the browsers on the same page w/r/t what to do
with PNGs that specify gamma. Also, it would be nice to provide DOM
accessors to things like the gamma (if any) that a PNG image
specifies, the bit depth and channels that are provided by a PNG, etc.
(Similar issues came up during the discussion that led to adding an
explicit internalformat parameter to texImage2D.) And someday it
would be nice to have a higher-precision texture pipeline -- ingesting
16-bit PNGs and putting them in floating point or 16/16/16/16 textures
without losing information.
> DDS supports DXT1/3/5 compression and
> no-compression which gives us the choice to use lossy compression where
> space is critical. We also have the advantage of stating 'minimum
> hardware specs' which means we never have to deal with cellphones or
> hardware that doesn't support full floating point textures and shaders.
> One day we'd LOVE to be able to use WebGL to support this stuff - but
> we're far from there yet. Part of the reason I'm following this
> mailing list is that I want to be sure that WebGL could ultimately
> support serious simulation engines and run AAA quality video games.
> * In my spare time (hahahah!) I'm starting with a small, dedicated (and
> as yet unpaid) team to put together an experimental set of WebGL-based
> games to see whether we can make money using adverts and T-shirt sales
> and in-game revenue (pay us a dollar and get that neat weapon you always
> wanted!). We currently use 8/8/8 and 8/8/8/8 PNG with a gamma of 1.0
> (ie linear color space) and do no further compression. Having looked
> carefully at ETC1, and kicked around some ideas with the guy at Ericsson
> who invented it, it's clear that it may prove useful for some kinds of
> data - but I'm still a little skeptical about it in general. There may
> be some super-devious shaderly tricks to make it do things it was never
> designed to do...but that's still a matter of investigation for me.
> In the latter case, my biggest concern is with platforms that may not
> support 8/8/8 or 8/8/8/8 formats internally and which may crunch them
> down to 5/6/5 or 5/5/5/1 or 4/4/4/4 formats internally. Since I use
> texture in 'non-traditional' ways - I'm having to get creative about
> only using the high order 4 bits in some situations. It's painful. So
> my best guess right now is that I'll be using the built-in file loaders
> with linear color space PNG.
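One way to read the "high order 4 bits" trick - my sketch, not
necessarily Steve's exact scheme - is to quantize the encoded values up
front so that a driver crunching RGBA8 down to RGBA4 can't corrupt
them:

function quantizeTo4Bit(data: Uint8Array): Uint8Array {
  // Keep only the high nibble of each byte, re-expanded as n * 17
  // (the standard 4-to-8-bit expansion), so the values survive an
  // RGBA8 -> RGBA4 round trip unchanged.
  const out = new Uint8Array(data.length);
  for (let i = 0; i < data.length; i++) {
    out[i] = (data[i] >> 4) * 17;
  }
  return out;
}

The shader then treats each channel as floor(c * 15.0 + 0.5) / 15.0 and
never relies on more than 4 bits of precision.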
>> There are some relevant constraints on WebGL:
>> * browsers (currently) work with 8-bit sRGB_Alpha color buffers, so
>> that's the format that WebGL output ends up in. I don't think WebGL
>> 1.0 can realistically spec anything else for output. (In the future,
>> perhaps browsers will be able to handle linear color spaces at higher
>> bit depths, but I don't know if anybody is seriously considering that
>> yet, so I doubt there's any point in spec'ing anything in WebGL 1.0.)
> That's great.
>> * WebGL has texImage2D calls that take raw buffers of data. Currently
>> (default) behavior passes this data straight to the texture formats,
>> so any automatic gamma treatment would be a change.
> Yes - that is true. I think we need the specification to say that you
> CAN automatically convert color spaces when you do this - but not that
> you MUST. That's necessary, not least because with low end cellphone
> hardware, repeatedly converting back and forth between color spaces will
> introduce so much rounding error that it'll be unusable - so
> applications will want to disable that (even if it's "mathematically
> required") in order to get the lesser of two evils.
> However, IMHO, the specification needs to support (at least in theory)
> mathematical correctness and must never impose incorrectness, because
> by the time this specification becomes obsolete, we'll probably have 12
> or 16 bit integer and 16 or 32 bit floating point per component (I
> already use 32/32/32/32 for HDR lighting in some places in my "day job"
> graphics engine) and doing this conversion will ultimately be entirely
> practical.
>> * browsers can load image formats in PNG and JPEG, which are most
>> typically 8-bit sRGB. WebGL behavior when using these formats is
>> definitely worth spec'ing.
>> * PNG has the ability to specify a per image gamma value (the gAMA
>> thing referenced earlier). Browsers appear to handle this
>> differently. See this reference page, at "Images with gamma chunks":
>> http://www.libpng.org/pub/png/pngsuite.html On the Mac I'm using
>> right now, Chrome and Safari do not do gamma correction, while Firefox
>> does. You can also clearly see the quantization errors in the Firefox
>> images with the lower gamma values. The Chrome and Safari behavior is
>> (arguably) a bug.
> Yes, I agree.
>> * PNG has the ability to store 16-bit color depth. However, my
>> understanding is that current browsers take all input images
>> (including PNG images with 16-bit color depth) and convert them to
>> sRGB_Alpha internally, before WebGL has a chance to see the data.
>> Also, the WebGL spec does not appear to have any texture formats that
>> have more than 8 bits per color component. This would be a great
>> thing to improve, post WebGL 1.0, since hi-fi WebGL apps could make
>> good use of it.
> Absolutely. Supporting (in particular) floating point textures would be
> a big win...but there are many desktop/laptop chipsets that can't do
> that - and I fear it'll be a good few years before cellphones can do
> that. But this is doable as an extension. The issues of color space
> correctness are the underpinnings of the specification and should be
> handled rigorously from the get-go because inserting them later would be
> tortuous and disruptive.
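A sketch of the extension route, assuming the existing OpenGL ES
OES_texture_float extension gets exposed to WebGL (the helper and its
names are mine):

function createFloatTexture(gl: WebGLRenderingContext, width: number,
                            height: number,
                            data: Float32Array): WebGLTexture | null {
  if (!gl.getExtension("OES_texture_float")) return null;
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                gl.RGBA, gl.FLOAT, data);
  // Linear filtering of float textures is its own capability question;
  // NEAREST keeps the sketch safe on minimal hardware.
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
  return tex;
}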
>> It seems to me there are two unresolved questions for WebGL 1.0:
>> 1) Should WebGL attempt to nail down how browsers are supposed to
>> handle PNGs with a non-default gAMA value? Viable options here are:
>> a) leave it up to the browser (status quo, but behavior may differ
>> among browsers; in practice apps will have to supply sRGB data and do
>> any additional conversions themselves).
> Not tolerable. Because textures are very often used for storing things
> other than images - where the very idea of a color space is
> ridiculous - doing any kind of automatic color space conversion without
> a way to turn it off would make WebGL useless for all but the simplest
> applications. What's more, in WebGL we often have
> to push more work into the GPU than on a traditional OpenGL or D3D
> platform - that INCREASES the number of situations where we use texture
> "non-traditionally" in order to get good performance. If this were the
> choice - I'd stop work on my WebGL games.
>> b) demand conversion to sRGB_Alpha based on PNG metadata (i.e.
>> converge on current Firefox behavior, however non-sRGB behavior will
>> be a corner case for browsers and smart WebGL developers may opt to
>> always supply sRGB data and do any conversions themselves)
>> c) demand passing raw data straight through. 16-bit components
>> would be rounded or truncated to 8-bit. (i.e. converge on current
>> WebKit behavior, similar caveats as option b)
> d) Have the WebGL texture file loader convert whatever color space the
> file is in - to a uniform linear color space. With the option to turn
> that conversion off for files that contain non-traditional (non-image)
> data - and in cases where the roundoff error inherent in the conversion
> is not acceptable - or when the application knows that the source data
> is in linear color space regardless of what the file header happens to say.
> i.e. Support both and let the application decide.
>> 2) Should WebGL add a PixelStore option that does some kind of gamma
>> conversion? (Where the thread started.) IMO the status quo (do
>> nothing) is pretty much fine. Apps that want a specific
>> interpretation of their image data can either pre-process it so the
>> raw data matches what they want in texture RAM, or else do custom
>> processing via GPU with the existing facilities.
> Since we've now established (I hope!) that the canvas spec requires that
> the linear color space WebGL canvas must be converted into 'device color
> space' at some point before it hits the screen (which for us probably
> means "it's gamma corrected in the compositor") - there is absolutely no
> reason to ever want to do this. Automatically converting linear-space
> textures into gamma space and then converting the rendering results
> into gamma space again would guarantee an ugly mess on the output.
> That's a waste
> of CPU time, implementation effort and it's mathematically indefensible.
>>>> I believe EXT_texture_sRGB is designed to make it easier to be "linear
>>>> correct" under these circumstances. It basically allows you to tell
>>>> OpenGL that your 8-bit texture data is in sRGB format, and that you
>>>> want OpenGL to do an 8-bit-sRGB to higher-precision-linear-RGB conversion
>>>> when sampling the data from a shader. Your shader will operate on
>>>> linear data (in floating point format), and the frame buffer is still
>>>> your shader's problem.
>>> Yes - but it's an extension that we can't rely on always being
>>> implemented (I doubt ANGLE could emulate it either).
>> It can be trivially implemented on any hardware that has an internal
>> texture format with at least 12 bits per component; just convert the
>> data to linear and store in the higher-depth format. The practical
>> problem is that it wastes texture RAM, hence the preference to have
>> lookup tables in the GPU.
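Concretely, the emulation is just this (my sketch): decode once on the
CPU, widen to 16 bits, and the GPU's ordinary linear filtering becomes
correct - at twice the texture RAM:

function srgbToLinear16(srgb: Uint8Array): Uint16Array {
  // Exact sRGB-to-linear decode, widened to 16 bits per component.
  // A real implementation would skip alpha, which is already linear.
  const out = new Uint16Array(srgb.length);
  for (let i = 0; i < srgb.length; i++) {
    const c = srgb[i] / 255.0;
    const lin = c <= 0.04045 ? c / 12.92
                             : Math.pow((c + 0.055) / 1.055, 2.4);
    out[i] = Math.round(lin * 65535.0);
  }
  return out;
}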
> But we're potentially operating with hardware that may not even support
> 8 bits per component let alone 12! So it certainly cannot be "trivially
> implemented" on all WebGL clients. Hence we certainly can't rely on it
> - and most certainly we can't write our specification based upon it!
> Worse still, implementing it simply with "lookup tables in the GPU"
> (which, I'll grant, the extension spec allows) means that the sRGB
> texels are linearly interpolated for GL_LINEAR and
> GL_LINEAR_MIPMAP_LINEAR textures - and that's wrong! When you minify a
> texture, the hardware is (in effect) calculating the contributions of
> four texels from each of two MIP levels. In linear color space, that's
> a simple lerp operation that's super-cheap to do in hardware - but in
> sRGB, doing that gives too much weight to the dark texels and not
> enough to the bright ones. The consequence is that the texture will
> alias along bright-to-dark transitions.
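To put numbers on it (texel values chosen for clarity): averaging a
black texel and a white texel should give linear 0.5, but lerping the
encoded sRGB values and decoding afterwards gives about 0.214 - much
too dark:

// Why lerping sRGB-encoded texels is wrong, in two lines.
const decode = (c: number): number =>
  c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);

const correct = (decode(0.0) + decode(1.0)) / 2;  // decode, then lerp: 0.5
const naive = decode((0.0 + 1.0) / 2);            // lerp, then decode: ~0.214
console.log(correct, naive);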
> I can't imagine many GPU manufacturers building proper sRGB
> interpolators into the highest bandwidth part of their engines - so I'll
> be surprised if many of them support this extension "the right
> way"...which makes it all but useless to people who want nice-looking
> -- Steve