
Re: [Public WebGL] Gamma correction and texImage2D/texSubImage2D



So can we agree on this?

1) The WebGL color space shall be clearly defined to be a linear color
space.
This is essential for things like cross-platform shader code
compatibility - and it's what all GPUs do internally anyway, so it's no
extra imposition.

2) Textures that are loaded into WebGL have an *optional* conversion
from the color space of the image file into linear color space; where
the color space of the file is ill-defined, it shall be assumed to be
sRGB, with a gamma of approximately 2.2.
This implies a need to reverse-gamma-correct formats like JPEG, and some
careful reading of the PNG and GIF specifications to see how the color
spaces of those files are described.  But no matter what, we allow the
application to disable this conversion on a per-file basis.
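For what it's worth, the decode step in (2) is just the standard sRGB
transfer function.  A minimal sketch in JavaScript (the function names
are mine; the piecewise constants come from the sRGB spec, which is why
"gamma 2.2" is only approximate):

```javascript
// Decode one sRGB channel value (0..1) to linear light using the
// piecewise sRGB transfer function: a linear toe below 0.04045 and
// a 2.4-exponent power curve above it (~gamma 2.2 overall).
function srgbToLinear(c) {
  return c <= 0.04045 ? c / 12.92
                      : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Applied per channel when uploading an sRGB image as a linear
// texture; alpha is not a color channel and is left untouched.
function decodeSrgbPixels(rgba) {
  return rgba.map((v, i) => (i % 4 === 3) ? v : srgbToLinear(v));
}
```

An implementation honoring the per-file opt-out in (2) would simply skip
this step and upload the bytes unchanged.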

3) It is essential for WebGL applications to be able to render an image
in linear color space and to subsequently use that image as a linear
color space texture with no additional processing steps.
There has to be a high-efficiency, zero-messing-around-with-my-data path
for render-to-texture.  Since we're going from linear to linear color
spaces, that's not a tough proposition.

4) There is an *optional* color space conversion step when reading back
canvas data into WebGL as a texture if the canvas is not already in a
linear color space.
Since WebGL canvases and textures are always linear, this cannot (by
definition) interfere with (3).  But it may result in sRGB to linear
conversions when reading back other kinds of canvas images...unless the
application disables that.

5) Final color space conversion of a WebGL canvas to the *device* color
space is a clearly specified *non-optional* requirement.  This
processing happens in a manner that never interferes with (3) or (4).
Gamma correction happens in the compositor - or when we're printing the
page.  The gamma will probably be nailed at 2.2, but it could be
something that the end user might want to adjust.  For printing, this
color space conversion might even be into CMYK - but the point is that
the application is oblivious to all of this.
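To make (5) concrete, here's a sketch of the compositor-side encode,
assuming the simple power-law gamma of 2.2 mentioned above (the function
name is mine; a real compositor would use whatever exponent or profile
the device specifies):

```javascript
// Encode a linear-light channel value (0..1) for a display whose
// response is modeled as a simple power law.  The application never
// calls this - it happens at composite (or print) time.
function linearToDevice(c, gamma = 2.2) {
  return Math.pow(c, 1 / gamma);
}
```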

6) Steve shall endeavor not to get so outraged about such things in the
future.
...and especially, to avoid upsetting Chris...sorry!

I think that covers all the bases.

  -- Steve


Chris Marrin wrote:
>
> On Sep 3, 2010, at 9:54 PM, Cedric Vivier wrote:
>
>> On Fri, Sep 3, 2010 at 23:51, Chris Marrin <cmarrin@apple.com
>> <mailto:cmarrin@apple.com>> wrote:
>>
>>     AFAIK, gamma correction is done to make images look right on the
>>     selected display. It has nothing to do with data in the source
>>     image. I believe some images might have color correction
>>     information in them, but that's different from gamma correction.
>>
>>
>> I think this contradicts the related paragraph in the canvas 2D
>> context spec :
>> http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#color-spaces-and-color-correction
>>
>> Canvas 2D is clearly supposed to perform gamma correction only on
>> images that have their own color correction information, I assume
>> WebGL should only do color/gamma correction when unpacking textures
>> under the same rule.
>> This would actually render the UNPACK_* parameter almost useless as
>> it could (and probably should) be the default. If developers do not
>> want gamma correction they just have to use images without color
>> correction information in them (which would already be the case for
>> any non-diffuse texture anyways).
>
> I'd really like to avoid the term "gamma correction" because I don't
> think it's correct. It's a term used to describe a color space
> conversion used to adapt to the nonlinearities of displays. That
> correction will happen whether we want it to or not, after we place
> pixels into the WebGL canvas. I think Ollie's picture is correct, and
> is the concept used by the 2D canvas. 
>
> You get a chance to do color space conversion of incoming images, and
> again as the canvas is composited. I hope we are only talking about
> the former. I don't think we should be giving the option of changing
> how color space conversion is done in the compositor. We should simply
> define what the color space of the WebGL canvas is. I believe we have
> two reasonable choices for the format in the canvas: sRGB, which is
> what the 2D Canvas uses, and linear. With sRGB, we match what the 2D
> canvas does. But it seems like using that would cause issues when
> combining pixels with alpha blending etc. So maybe a linear color
> space is better.
>
> Converting between linear and sRGB is easy. If the compositor expects
> sRGB and our canvas is linear, we just need to do a gamma function to
> convert it (apply a gamma of 2.2 according to one website).
>
> I believe the default image format should match the canvas format. If
> we choose a linear canvas then images should be linear. If the
> incoming image is sRGB, we need to convert it. Again, going from sRGB
> to linear is a simple conversion. 
>
> One final issue is what color space pixels are in when they are read
> back, either with toDataURL() or readPixels(). This issue also appears
> indirectly when using HTMLCanvasElement with WebGL content as the
> source for a 2D Canvas drawImage() call. 
>
> It would be really nice to match what 2D does just to make all these
> issues simpler. If the WebGL canvas is sRGB, then it composites the
> same as 2D Canvas, toDataURL() works the same, and readPixels()
> returns sRGB, which is what the 2D Canvas getImageData() call returns.
> Does doing that complicate the rendering? 
>
> -----
> ~Chris
> cmarrin@apple.com <mailto:cmarrin@apple.com>
>
>
>
>
