Thanks for all the great comments Mark. More below...
On Sep 6, 2010, at 1:42 AM, Mark Callow wrote:
...- no need to convert PNGs because they are already linear.
This is incorrect. PNG provides gAMA, cHRM, sRGB, and iCCP metadata chunks that let the encoder include information about the color space of the image samples. When none of these chunks is present in the file, the spec says:
When the incoming image has unknown gamma (gAMA, sRGB, and iCCP all absent), choose a likely default gamma value, but allow the user to select a new one if the result proves too dark or too light. The default gamma can depend on other knowledge about the image, like whether it came from the Internet or from the local system.
Nowhere does it suggest that a likely default value is 1.0 (linear). If any of the above chunks do exist, the decoder is supposed to use them to display the image correctly.
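For concreteness, here is a minimal sketch (my own illustration, not from the thread) of how a decoder might scan for the gAMA chunk. The chunk layout and the gamma-times-100000 encoding come straight from the PNG spec; the function and helper names are made up for the example, and a real decoder would of course validate CRCs and require IHDR first.

```python
import struct
import zlib

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def find_gama(png_bytes):
    """Scan PNG chunks for a gAMA chunk; return the encoding gamma
    as a float, or None if the chunk is absent."""
    assert png_bytes.startswith(PNG_SIGNATURE)
    pos = len(PNG_SIGNATURE)
    while pos + 8 <= len(png_bytes):
        # Each chunk: 4-byte big-endian length, 4-byte type, data, 4-byte CRC
        length, ctype = struct.unpack(">I4s", png_bytes[pos:pos + 8])
        data = png_bytes[pos + 8:pos + 8 + length]
        if ctype == b"gAMA":
            # gAMA stores gamma * 100000 as a 4-byte big-endian integer
            return struct.unpack(">I", data)[0] / 100000.0
        pos += 8 + length + 4  # header + data + CRC
    return None

def make_chunk(ctype, data):
    # Helper to build a well-formed chunk for the demo below
    return struct.pack(">I", len(data)) + ctype + data + \
        struct.pack(">I", zlib.crc32(ctype + data))

# A file encoded with the sRGB-like gamma 1/2.2 would carry gAMA = 45455
demo = PNG_SIGNATURE + make_chunk(b"gAMA", struct.pack(">I", 45455))
print(find_gama(demo))  # → 0.45455
```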
But we need to make an assumption when the incoming PNG image omits all color space information. I think it is out of scope to let the author choose a color space for these images as they are read in with texImage2D. So do we choose sRGB or linear?
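If we do pick sRGB as the default, the decode step is just the standard IEC 61966-2-1 transfer function. A quick sketch of the forward and inverse transforms (operating on a single normalized channel value):

```python
def srgb_to_linear(c):
    """sRGB EOTF (IEC 61966-2-1): decode a [0,1] sRGB value to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Inverse: encode linear light back to sRGB."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# sRGB mid-gray corresponds to only about 21% linear light,
# which is why naively treating sRGB data as linear darkens images.
print(srgb_to_linear(0.5))  # ≈ 0.214
```

Note the piecewise form: the short linear segment near black means sRGB is close to, but not exactly, a pure gamma-2.2 curve.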
Steve Baker wrote:
I think if you reverse-gamma JPEG files and leave everything else alone,
you'll be OK.
No. See above.
And some final notes...
The OpenGL sRGB extensions are rather misnamed. They only really pay attention to the transfer function (a.k.a. gamma) and ignore the other parts of sRGB such as the chromaticities and white & black points. Since OpenGL does not specify a color space, they don't have much choice.
When using sRGB textures, GL converts the incoming texture data to a physically linear space. When using sRGB renderbuffers, GL converts the blended & multisampled output to the perceptually-linear space of sRGB.
From my reading, incoming texture data is not necessarily converted up front. The conversion may instead be deferred until the texture is sampled, which preserves more of the color data. In fact, I think the spec recommends deferring it to keep one of the advantages of sRGB images: increased resolution in the dark parts of an image. But I may be misreading.
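That dark-resolution advantage is easy to demonstrate. With 8 bits per channel, an sRGB encoding devotes far more codes to the darkest tones than a linear encoding does, which is exactly what is lost if you convert to linear 8-bit storage up front. A rough illustration (my own, not from the GL spec): count how many of the 256 codes land in the darkest 1% of linear light under each encoding.

```python
def srgb_to_linear(c):
    """sRGB EOTF (IEC 61966-2-1), per channel on [0,1]."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

# Codes whose represented linear-light value falls below 0.01:
dark_linear = sum(1 for i in range(256) if i / 255 < 0.01)
dark_srgb = sum(1 for i in range(256) if srgb_to_linear(i / 255) < 0.01)

print(dark_linear, dark_srgb)  # → 3 26
```

So the sRGB encoding spends 26 of its 256 codes on that darkest region versus 3 for linear, which is why converting at sampling time rather than at upload time matters for 8-bit data.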
I believe the correct thing to do in WebGL is to specify that the canvas color space is the ICC profile connection space (PCS). The transfer function of this space is physically linear, and all other aspects of the color space are also specified. For the purposes of the computations specified by OpenGL, those other aspects don't matter, but for correct conversion from the input space of the images to the output space of the display they are very important. Using the PCS enables the browser to use the relevant ICC profiles for conversion.
Yes, I believe the consensus is to use a linear color space for the drawing buffer representation. I know in the WebKit implementation we need to add the appropriate color space conversions in the HTML compositor. Now the only question is what to do with the incoming texture data.