Re: [Public WebGL] Gamma correction and texImage2D/texSubImage2D

Gamma correction is a tricky business.

For typical 3D applications you want your images with NO gamma
correction.  That's because you're going to go on to perform lighting,
fogging and a number of other operations on the texture before it's
displayed.  The equation for gamma is

     Vout = pow ( Vin, gamma ) ;

...where gamma is around 1/2.2 for CRTs and for CRT emulators such as LCD
and plasma displays.  It is clearly the case that, in general:

    pow ( Vin * light, gamma ) != pow ( Vin, gamma ) * light.

So gamma-correcting the input is in no way a substitute for gamma
correcting on the output.

Put non-mathematically - the main thing that gamma correction does is to
increase the contrast in dim areas and reduce it in bright areas, as a
better match for the non-linearities inherent in CRTs.
Gamma-correcting the input to the renderer can do nothing to increase
the brightness in areas where it is dark because there is only a little
light being cast.  So you still get overly dark areas in the resulting
rendering - even though you pre-gamma'd the texture.

You could (in principle) build a really complicated lighting and fogging
algorithm that applied light in a non-linear way to preserve the gamma
correction...but the math is ugly and it has to be done in the fragment
shader.

However, there are other things going on in the graphics pipeline, such
as magnification and minification, antialiasing, alpha blending and
compositing - all of which are inherently linear operations over which
we have no software control whatsoever.

The RIGHT thing is therefore to provide linear textures, to do your
rendering in linear domain - and then to apply gamma correction to the
FINAL image.  Any other way of doing it is mathematically wrong - and
looks noticeably nasty.

Hence, the default should be (as it is with OpenGL) to NOT mess
with the texel data...at least not by default.

There are actually three distinct cases to consider:

1) Your source texture came from a camera or something else that applies
gamma correction before the image is saved.   In this case, you need to
apply reverse gamma-correction to that image in an effort to get a
linear texture - then do your lighting - then gamma-correct the final
image.

2) Your source texture is already in linear space - you'll do lighting
in linear space - and then you'll need to do gamma corrections on the
final rendering.

3) You are doing no lighting/blending/mipmapping/fog/etc and (for some
reason) you have also chosen not to do gamma correction at the end.  In
that case and ONLY in that case, you should gamma-correct your textures
on input.

I maintain that very few WebGL applications will do (3).

IMHO, the option should be to direct the browser's compositor to apply a
gamma-correcting shader as it does final
image composition with the output of the application's canvas.   That
way everything prior to that is in linear color space where life is easy.

Doing inverse gamma-correction of images that have somehow been
gamma-corrected already (JPEGs, mostly), or of things grabbed off-screen
that have already been gamma-corrected once, is perhaps defensible as
"the best you can do under trying circumstances" in case #1, above - but
we shouldn't design the system to do that by default because going into
and out of gamma produces lots of roundoff error.

  -- Steve

Cedric Vivier wrote:
> On Fri, Sep 3, 2010 at 10:01, Kenneth Russell <kbr@google.com> wrote:
>     Questions:
>     1. What should the name of the new (boolean) pixelStorei parameter be?
>     The name which would most closely match the other parameters would
>     probably be UNPACK_CORRECT_GAMMA_WEBGL, where "correct" is a verb.
>     However, this name is probably confusing (why would you ever want
> The latter certainly sounds less confusing.
>     2. What should the default value of this flag be? If it were false,
>     then for images uploaded from the browser to WebGL, the default
>     behavior would be for the pixels to be completely untouched. However,
>     this might be surprising behavior to applications displaying images on
>     screen with very simple shader code (no lighting) and expecting them
>     to look the same as surrounding browser content.
> IMHO this use case would only be likely with WebGL-based image
> editing, in most other applications (games, object viewers, etc) the
> final pixels might be too transformed through perspective, filtering,
> mipmapping, lighting, normal maps, light maps and so on, for slight
> gamma correction to really matter, so default should be false for
> least surprise when using non-image data.
> However how does the browser typically handle gamma correction? Does
> it perform it depending on image metadata? Display color profile? A
> mixture or both?
> Regards,
