
Re: [Public WebGL] Gamma correction and texImage2D/texSubImage2D



First of all let me say a couple of things:

1) Steve, how do you REALLY feel about gamma?

2) Ken (regarding the question of whether anyone on the list actually cares about gamma): told you so.

:-)

Now on to the topic at hand. First, let's narrow the scope of this discussion: we're not talking about printers or any other output device. We're talking about rendering imagery into a WebGL canvas for later compositing with the rest of the page.

I think we should take our lead from what the 2D Canvas spec says. Section 4.8.11.2 of http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html talks about this. I find it hard to read, but I believe it says that the pixels stored in the canvas are in the color space of the canvas, which is sRGB. So if you read back those pixels using getImageData(), they will be in the sRGB color space. And when you call toDataURL(), the encoded pixel values will be the same as those returned by getImageData().
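
To make that reading concrete, here's a minimal sketch (mine, not from the spec) of the round trip I'm describing:

    // getImageData() hands back pixels in the canvas's color space (sRGB),
    // and toDataURL() encodes those same sRGB values.
    const canvas2d = document.createElement("canvas");
    canvas2d.width = canvas2d.height = 1;
    const ctx = canvas2d.getContext("2d")!;
    ctx.fillStyle = "rgb(200, 100, 50)";
    ctx.fillRect(0, 0, 1, 1);

    // Read back: these values are already in the sRGB color space.
    const pixel = ctx.getImageData(0, 0, 1, 1).data; // [200, 100, 50, 255]

    // Encode: the PNG in this data URL carries the same sRGB values
    // that getImageData() returned.
    const dataUrl = canvas2d.toDataURL("image/png");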

In fact, the 2D Canvas spec doesn't really speak in terms of gamma correction at all. It speaks in terms of color spaces. It says that the color space can be transformed in exactly two places: 1) going from whatever the incoming image's color space is to sRGB for insertion into the canvas, and 2) going from sRGB in the canvas to whatever the color space of the display happens to be. I think gamma correction is just a detail of the display's color space, so we probably shouldn't even be using that term. It would be better if we simply said whether we want an image to be in the sRGB color space in texture memory, or left unchanged from the original image. We should speak in terms of the original image's color space, because there are image formats that specify it.
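
For reference (and to show why I'd rather talk about color spaces than "gamma"), the conversion is just a pair of transfer functions. The function names below are mine; the constants are the standard sRGB ones:

    // Standard sRGB <-> linear conversions for a single channel in [0, 1].
    function srgbToLinear(c: number): number {
      return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
    }

    function linearToSrgb(c: number): number {
      return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
    }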

All that is a pretty clear indication that the pixels in the canvas are expected to be in the sRGB color space, and that when they are composited they are transformed into the display's color space. An author who really cares can render textures into the WebGL canvas knowing the source image is in the sRGB space and that the final image in the canvas should be in the sRGB space, and apply the appropriate factors to make that so.
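
As a sketch of what "applying the appropriate factors" could look like (my illustration, not anything from the spec), a fragment shader can decode the sRGB texture to linear, do its math there, and re-encode for the drawing buffer. I'm approximating the sRGB curve with a plain 2.2 exponent for brevity:

    // Hypothetical fragment shader: sRGB texture in, lighting in linear,
    // sRGB back out for the canvas.
    const fragmentShaderSource = `
      precision mediump float;
      uniform sampler2D u_texture;
      uniform float u_lightIntensity;
      varying vec2 v_texCoord;

      void main() {
        vec3 srgb = texture2D(u_texture, v_texCoord).rgb;
        vec3 linearColor = pow(srgb, vec3(2.2));       // sRGB -> linear (approx.)
        vec3 lit = linearColor * u_lightIntensity;     // do the math in linear space
        gl_FragColor = vec4(pow(lit, vec3(1.0 / 2.2)), 1.0); // linear -> sRGB
      }
    `;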

So my proposal is to call the flag something like IMAGE_COLORSPACE_WEBGL, with the values IMAGE_COLORSPACE_SRGB_WEBGL and IMAGE_COLORSPACE_RAW_WEBGL. I think using enumerations makes it the most clear. And given the argument above, I think the default should clearly be IMAGE_COLORSPACE_SRGB_WEBGL. If the author is dealing with textures as images (as opposed to some other type of data, like normal maps or floats), then all they have to know is the source and destination color spaces, and they can make the proper calculations.
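
To illustrate, and only to illustrate: the enum names are exactly the proposal above, hanging them off pixelStorei() is my assumption about where such a flag would live, and none of this exists in WebGL today. Usage might look like:

    const glCanvas = document.createElement("canvas");
    const gl = glCanvas.getContext("webgl") as any; // `any` because the enums are hypothetical

    // Hypothetical image sources for the example.
    const imageElement = document.querySelector("img#photo") as HTMLImageElement;
    const normalMapElement = document.querySelector("img#normals") as HTMLImageElement;

    // Default: convert the incoming image to the sRGB color space on upload.
    gl.pixelStorei(gl.IMAGE_COLORSPACE_WEBGL, gl.IMAGE_COLORSPACE_SRGB_WEBGL);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, imageElement);

    // For non-image data (normal maps, floats): leave the pixels untouched.
    gl.pixelStorei(gl.IMAGE_COLORSPACE_WEBGL, gl.IMAGE_COLORSPACE_RAW_WEBGL);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, normalMapElement);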

As far as giving authors the ability to control the compositing of the output goes (like we do for premultiplied alpha), I don't think we need to. We just need to say that the pixels in the drawing buffer have to be sRGB.

We can see if I have been convincing by the number of expletives in Steve's response :-)

-----
~Chris
cmarrin@apple.com



