Re: [Public WebGL] Gamma correction and texImage2D/texSubImage2D
On Sat, Sep 4, 2010 at 2:09 PM, Steve Baker <email@example.com> wrote:
> So can we agree on this?
> 1) The WebGL color space shall be clearly defined to be a linear color
> space.
> This is essential for things like cross-platform shader code
> compatibility - and it's what all GPUs do anyway, so it's no extra
> work.
Hm. I'm uneasy about this. AFAIK OpenGL doesn't mandate a color
space. By default it defines its filtering operations *as if*
everything is linear, but in practice, for framebuffer formats with
8-bit color components, the output is generally treated as sRGB. That
means the typical colorspace-ignorant app is taking sRGB input data,
filtering it as if it is linear, and putting the results in an sRGB
framebuffer to be displayed on an sRGB monitor. Status quo.
If you care about doing your lighting and filtering in true linear
space, then you need to take some special measures (i.e. figure out
how to get linear data out of textures with acceptable fidelity, and
do a final linear-to-sRGB step when outputting to a framebuffer with
8-bit components). WebGL should not diverge from OpenGL here -- the
same special measures should apply to both WebGL and OpenGL.
WebGL definitely should not do any automatic or default color space
conversion that OpenGL doesn't do. In particular, converting from
8-bit-sRGB to 8-bit-linearRGB loses fidelity, so it's not something
that should ever be done without somebody asking for it specifically.
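To make that fidelity loss concrete, here is a quick sketch (my own, not
from the spec; helper names are illustrative) using the standard sRGB
transfer function:

```javascript
// Standard sRGB decode: maps an sRGB value in [0, 1] to linear light.
function srgbToLinear(s) {
  return s <= 0.04045 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

// Push every 8-bit sRGB code through the decode, re-quantize to 8-bit
// linear, and count how many distinct codes survive the round trip.
const survivors = new Set();
for (let byte = 0; byte < 256; byte++) {
  survivors.add(Math.round(255 * srgbToLinear(byte / 255)));
}
console.log(survivors.size); // fewer than 256 distinct values remain
```

The collapse is concentrated at the dark end, where many neighboring sRGB
codes land on the same 8-bit linear value -- which is why the conversion
should only happen when someone asks for it.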
I believe EXT_texture_sRGB is designed to make it easier to be "linear
correct" under these circumstances. It basically allows you to tell
OpenGL that your 8-bit texture data is in sRGB format, and that you
want OpenGL to do an 8-bit-sRGB to higher-precision-linear-RGB conversion
when sampling the data from a shader. Your shader will operate on
linear data (in floating point format), and the frame buffer is still
your shader's problem.
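A sketch of what the extension buys you (this emulates the idea in plain
JavaScript; function names are mine, not the real API): texels are decoded
to linear light *before* filtering, and the result is re-encoded for an
8-bit sRGB framebuffer at the end.

```javascript
// sRGB decode: what the extension applies per texel before filtering.
function srgbToLinear(s) {
  return s <= 0.04045 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}
// sRGB encode: the final linear-to-sRGB step for an 8-bit framebuffer.
function linearToSrgb(l) {
  return l <= 0.0031308 ? l * 12.92 : 1.055 * Math.pow(l, 1 / 2.4) - 0.055;
}

// Bilinear-filter the midpoint between a black and a white texel.
const a = 0, b = 255; // 8-bit sRGB texel values

// Colorspace-ignorant filtering: average the raw sRGB bytes.
const naive = (a + b) / 2; // 127.5

// "Linear correct": decode, average in linear light, re-encode.
const linearAvg = (srgbToLinear(a / 255) + srgbToLinear(b / 255)) / 2;
const correct = 255 * linearToSrgb(linearAvg); // ~188, noticeably brighter
```

The two answers differ by about 60 out of 255, which is why filtering
sRGB data as if it were linear visibly darkens blends and mipmaps.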
There are two important things for WebGL to do:
A) Clearly specify how the output is being treated, and 2d-canvas
leads the way here. The output is sRGB. Everything after that is the
compositor's problem.
B) Implement EXT_texture_sRGB sometime soon. (IMO doesn't need to be in 1.0)
I disagree with some of the other things Steve has said or implied,
but I'll wait to elaborate until I hear what I got wrong above :)
> 2) Textures that are loaded into WebGL have an *optional* conversion
> from the color space of the image file into linear color space and where
> the color space of the file is ill-defined, it shall be assumed to be
> sRGB with a gamma of 2.2.
> This implies a need to reverse-gamma correct formats like JPEG and some
> careful reading of the PNG and GIF specifications to see how the color
> spaces of those files are described. But no matter what, we allow the
> application to disable this conversion on a per-file basis.
> 3) It is essential for WebGL applications to be able to render an image
> in linear color space and to subsequently use that image as a linear
> color space texture with no additional processing steps.
> There has to be a high-efficiency, zero-messing-around-with-my-data path
> for render-to-texture. Since we're going from linear to linear color
> spaces, that's not a tough proposition.
> 4) There is an *optional* color space conversion step when reading back
> canvas data into WebGL as a texture if the canvas is not already in a
> linear color space.
> Since WebGL canvasses and textures are always linear, this cannot (by
> definition) interfere with (3). But it may result in sRGB to linear
> conversions when reading back other kinds of canvas images...unless the
> application disables that.
> 5) Final color space conversion of a WebGL canvas to the *device* color
> space is a clearly specified *non-optional* requirement. This
> processing happens in a manner that never interferes with (3) or (4).
> Gamma correction happens in the compositor - or if we're printing the
> page. The gamma will probably be nailed at 2.2, but it could be
> something that the end user might want to adjust. For printing, this
> color space conversion might even be into CMYK - but the point is that
> the application is oblivious of this.
> 6) Steve shall endeavor not to get so outraged about such things in the
> future...
> ...and especially, to avoid upsetting Chris...sorry!
> I think that covers all the bases.
> -- Steve
> Chris Marrin wrote:
>> On Sep 3, 2010, at 9:54 PM, Cedric Vivier wrote:
>>> On Fri, Sep 3, 2010 at 23:51, Chris Marrin <firstname.lastname@example.org
>>> <mailto:email@example.com>> wrote:
>>> AFAIK, gamma correction is done to make images look right on the
>>> selected display. It has nothing to do with data in the source
>>> image. I believe some images might have color correction
>>> information in them, but that's different from gamma correction.
>>> I think this contradicts the related paragraph in the canvas 2D
>>> context spec:
>>> Canvas 2D is clearly supposed to perform gamma correction only on
>>> images that have their own color correction information, I assume
>>> WebGL should only do color/gamma correction when unpacking textures
>>> under the same rule.
>>> This would actually render the UNPACK_* parameter almost useless as
>>> it could (and probably should) be the default. If developers do not
>>> want gamma correction they just have to use images without color
>>> correction information in them (which would already be the case for
>>> any non-diffuse texture anyways).
>> I'd really like to avoid the term "gamma correction" because I don't
>> think it's correct. It's a term used to describe a color space
>> conversion used to adapt to the nonlinearities of displays. That
>> correction will happen whether we want it to or not, after we place
>> pixels into the WebGL canvas. I think Ollie's picture is correct, and
>> is the concept used by the 2D canvas.
>> You get a chance to do color space conversion of incoming images, and
>> again as the canvas is composited. I hope we are only talking about
>> the former. I don't think we should be giving the option of changing
>> how color space conversion is done in the compositor. We should simply
>> define what the color space of the WebGL canvas is. I believe we have
>> two reasonable choices for the format in the canvas: sRGB, which is
>> what the 2D Canvas uses, and linear. With sRGB, we match what the 2D
>> canvas does. But it seems like using that would cause issues when
>> combining pixels with alpha blending etc. So maybe a linear color
>> space is better.
>> Converting between linear and sRGB is easy. If the compositor expects
>> sRGB and our canvas is linear, we just need to do a gamma function to
>> convert it (apply a gamma of 2.2 according to one website).
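Worth noting: sRGB is close to, but not exactly, a pure gamma-2.2 curve;
it has a linear segment near black. A quick sketch of the gap (function
names are illustrative):

```javascript
// Exact sRGB encode (linear -> sRGB), with its linear segment near black.
function linearToSrgb(l) {
  return l <= 0.0031308 ? l * 12.92 : 1.055 * Math.pow(l, 1 / 2.4) - 0.055;
}

// The "gamma of 2.2" approximation mentioned above.
const gamma22 = l => Math.pow(l, 1 / 2.2);

// Largest gap between the two curves over [0, 1].
let maxDiff = 0;
for (let i = 0; i <= 10000; i++) {
  const l = i / 10000;
  maxDiff = Math.max(maxDiff, Math.abs(linearToSrgb(l) - gamma22(l)));
}
// maxDiff is a few percent, concentrated near black.
```

For bright values the approximation is fine; near black the curves
diverge, which matters if the compositor and the canvas disagree about
which one is in use.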
>> I believe the default image format should match the canvas format. If
>> we choose a linear canvas then images should be linear. If the
>> incoming image is sRGB, we need to convert it. Again, going from sRGB
>> to linear is a simple conversion.
>> One final issue is what color space pixels are in when they are read
>> back, either with toDataURL() or readPixels(). This issue also appears
>> indirectly when using HTMLCanvasElement with WebGL content as the
>> source for a 2D Canvas drawImage() call.
>> It would be really nice to match what 2D does just to make all these
>> issues simpler. If the WebGL canvas is sRGB, then it composites the
>> same as 2D Canvas, toDataURL() works the same, and readPixels()
>> returns sRGB, which is what the 2D Canvas getImageData() call returns.
>> Does doing that complicate the rendering?
You are currently subscribed to firstname.lastname@example.org.
To unsubscribe, send an email to email@example.com with
the following command in the body of your email: