Re: [Public WebGL] Gamma correction and texImage2D/texSubImage2D

On Sep 3, 2010, at 9:54 PM, Cedric Vivier wrote:

On Fri, Sep 3, 2010 at 23:51, Chris Marrin <cmarrin@apple.com> wrote:
AFAIK, gamma correction is done to make images look right on the selected display. It has nothing to do with data in the source image. I believe some images might have color correction information in them, but that's different from gamma correction.

I think this contradicts the related paragraph in the canvas 2D context spec:

Canvas 2D is clearly supposed to perform gamma correction only on images that carry their own color correction information; I assume WebGL should only do color/gamma correction when unpacking textures under the same rule.
This would actually render the UNPACK_* parameter almost useless, as it could (and probably should) be the default. If developers do not want gamma correction, they just have to use images without color correction information in them (which would already be the case for any non-diffuse texture anyway).

I'd really like to avoid the term "gamma correction" because I don't think it's correct. It's a term for the color space conversion that adapts to the nonlinearities of displays. That correction will happen whether we want it to or not, after we place pixels into the WebGL canvas. I think Ollie's picture is correct, and it is the concept used by the 2D canvas.

You get a chance to do color space conversion of incoming images, and again as the canvas is composited. I hope we are only talking about the former. I don't think we should be giving the option of changing how color space conversion is done in the compositor. We should simply define what the color space of the WebGL canvas is. I believe we have two reasonable choices for the format in the canvas: sRGB, which is what the 2D Canvas uses, and linear. With sRGB, we match what the 2D canvas does. But it seems like using sRGB would cause issues when combining pixels with alpha blending, etc. So maybe a linear color space is better.
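To make the blending concern above concrete, here is a small sketch (not anything from the spec) showing that averaging two pixel values in a gamma-encoded space gives a different result than averaging them in linear space and re-encoding. It assumes a simple pure-power gamma of 2.2 for illustration:

```javascript
// Assumed pure-power gamma of 2.2 for illustration (real sRGB is piecewise).
const GAMMA = 2.2;
const toLinear = (v) => Math.pow(v, GAMMA);     // gamma-encoded -> linear
const toGamma = (v) => Math.pow(v, 1 / GAMMA);  // linear -> gamma-encoded

// 50/50 blend of black (0.0) and white (1.0):
const blendedInGammaSpace = 0.5 * 0.0 + 0.5 * 1.0; // 0.5
const blendedInLinearSpace =
    toGamma(0.5 * toLinear(0.0) + 0.5 * toLinear(1.0)); // ~0.73

console.log(blendedInGammaSpace, blendedInLinearSpace);
```

Blending directly in the gamma-encoded values yields 0.5, while blending in linear space and re-encoding yields roughly 0.73, so a naive alpha blend in sRGB darkens midtones relative to physically correct blending.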

Converting between linear and sRGB is easy. If the compositor expects sRGB and our canvas is linear, we just need to apply a gamma curve to convert it (roughly a gamma of 2.2; the exact sRGB transfer function also includes a small linear segment near black).
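As a sketch of the conversion described above, these are the standard sRGB transfer functions (a linear segment near black plus a 2.4-exponent power curve, which together approximate a gamma of 2.2); the exact constants come from the sRGB specification, not from anything in this thread:

```javascript
// Exact sRGB -> linear transfer function (per the sRGB standard).
function srgbToLinear(c) {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Exact linear -> sRGB transfer function (inverse of the above).
function linearToSrgb(c) {
  return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}

// A mid-gray of 0.5 in sRGB is only ~0.214 in linear light,
// and the two functions round-trip cleanly.
console.log(srgbToLinear(0.5), linearToSrgb(srgbToLinear(0.5)));
```

Applying `srgbToLinear` per channel when unpacking an sRGB image would give the linear canvas format discussed below, and `linearToSrgb` is what the compositing step would apply going the other way.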

I believe the default image format should match the canvas format. If we choose a linear canvas then images should be linear. If the incoming image is sRGB, we need to convert it. Again, going from sRGB to linear is a simple conversion. 

One final issue is what color space pixels are in when they are read back, either with toDataURL() or readPixels(). This issue also appears indirectly when using HTMLCanvasElement with WebGL content as the source for a 2D Canvas drawImage() call. 

It would be really nice to match what 2D does just to make all these issues simpler. If the WebGL canvas is sRGB, then it composites the same as 2D Canvas, toDataURL() works the same, and readPixels() returns sRGB, which is what the 2D Canvas getImageData() call returns. Does doing that complicate the rendering?