Re: [Public WebGL] Gamma correction and texImage2D/texSubImage2D
Chris Marrin wrote:
> First of all let me say a couple of things:
> 1) Steve, how do you REALLY feel about gamma?
> 2) Ken, (regarding the question of whether anyone on the list actually cares about gamma), told you so.
> Now on to the topic at hand. First let's try to narrow the scope of this discussion. We're not talking about printers or anything else. We're talking about rendering imagery into a WebGL canvas for later compositing with the rest of the page.
Why aren't you talking about printing? People print web pages all the
time - that will (in future) include WebGL canvases - and since the
gamma of a printer is WAY different from that of a screen, you can't
assume the output device is always a screen.
> I think we should take our lead from what the 2D Canvas says. The "Color spaces and color correction" section of http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html talks about this. I find it hard to read, but I believe it says that the pixels stored in the canvas are in the color space of the canvas, which is sRGB. So if you read back those pixels using getImageData(), they will be in the sRGB color space. And when you call toDataURL(), the encoded pixel values will be the same as those returned in getImageData().
Yeah - that's quite some incomprehensible piece of writing!
Let's break apart the "color spaces and color correction" section:
APIs must perform color correction at only two points: when
rendering images with their own gamma correction and color space
information onto the canvas, to convert the image to the color space
used by the canvas (e.g. using the 2D Context's drawImage()
method with an HTMLImageElement
object), and when rendering the actual canvas bitmap to the output device.
So two kinds of correction:
1) "images with their own gamma correction" (ie JPEG) - convert the
image to the color space of the canvas.
2) "...when rendering the actual canvas bitmap to the output device"
This is PRECISELY what I'm asking for here. If we say that our WebGL
canvas is in linear space, then (1) says to convert gamma-corrected JPEGs to
the color space of the canvas (which is linear - so we can do lighting,
etc.) - no need to convert PNGs because they are already linear - and (2)
says we do gamma correction in the compositor.
Perfect! Exactly what I've been telling everyone we should do! Hooray!
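The two conversions amount to ordinary transfer-curve math. Here's a minimal sketch, assuming the standard sRGB transfer curve; the function names are mine, not part of any proposed API:

```javascript
// Decode a gamma-encoded (sRGB) channel value in [0,1] to linear light.
// This is conversion (1): applied when a gamma-encoded image is uploaded.
function srgbToLinear(c) {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Encode a linear channel value in [0,1] back to sRGB for the display.
// This is conversion (2): applied once, in the compositor.
function linearToSrgb(c) {
  return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}
```

Everything between those two points - lighting, blending, filtering - happens in linear space, where the arithmetic is actually correct.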
Then it says:
In user agents that support CSS, the color space used by a canvas
element must match the color space used for processing any colors
for that element in CSS.
So this says that the CSS system can impose some other color space on
the canvas? It's not really clear what that means...but if somehow CSS
told the canvas to be a gamma-space canvas - then you'd have to
pre-convert PNGs into gamma space and then NOT do gamma correction in the
compositor.
But I don't see how CSS imposes itself on WebGL. Colors from CSS for
our rendering surface surely aren't relevant?
That leaves us free to choose our color space - and because we're not
totally insane - we pick "linear".
Then it concludes:
The gamma correction and color space information of images must be
handled in such a way that an image rendered directly using an img
element would use the same colors as one painted on a canvas
element that is then itself rendered.
That's fine - if you take a PNG, DO NOT gamma-correct it, do a
straightforward linear-space rendering, and then gamma-correct the
output, you get exactly the right answer. If you take a JPEG,
reverse-gamma-correct it, do a linear-space rendering, and then
gamma-correct the output, you get (barring roundoff issues) the right thing.
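The "barring roundoff" claim is easy to check numerically. A sketch using a simple power-law gamma of 2.2 (an approximation of the sRGB curve, used here only to illustrate the argument): a pass-through render - decode, do nothing, re-encode - reproduces the input to within floating-point error.

```javascript
const GAMMA = 2.2;
const decode = (v) => Math.pow(v, GAMMA);      // JPEG sample -> linear
const encode = (v) => Math.pow(v, 1 / GAMMA);  // linear -> display

// A render that does no lighting at all should hand back the input:
const input = 0.37;
const output = encode(decode(input));
console.log(Math.abs(output - input) < 1e-9); // true
```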
Furthermore, the rendering of images that have no color correction
information (such as those returned by the toDataURL()
method) must be rendered with no color correction.
That I don't understand....?
> In fact, the 2D Canvas spec doesn't really speak in terms of gamma correction at all. It speaks in terms of color spaces.
True - but there are (typically) only two color spaces that we care
about...gamma and linear. Sure, you might be dealing with printers and
have a CMYK color space canvas...that would be kinda silly though.
> It says that the color space can be transformed in exactly 2 places: 1) going from whatever the incoming image's color space is to sRGB for insertion into the canvas, and 2) going from sRGB in the canvas to whatever the color space of the display happens to be.
> I think gamma correction is just a detail of the display's color space, so we probably shouldn't even be using that term. I think it would be better if we simply say whether we want an image to be in the sRGB color space in texture memory, or unchanged from the original image. We should speak in terms of the original image's color space, because there are image formats which specify it.
We take the color space of the image, we convert it to whatever our
canvas needs, then we convert that to whatever the display needs at the
output. That's what I've been proposing all along - and that seems to
be what the canvas spec says. The color space of the WebGL canvas
needs to be linear RGB because our hardware can't correctly process
anything else - and incorrect processing would break the canvas
specification's requirement that images come out with the same colors.
The images could be in who-knows-what space - but we should correct them
to linear for OUR canvas. Then we convert our canvas into who-knows-what
space the display may need.
In practical terms - JPEG gets reverse-gamma'd - PNGs are left alone.
We gamma correct for the display in the compositor - if the output is a
screen. If the output is a printer then we apply different gamma - or
we convert to CMYK space or whatever.
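In code, the upload-side policy above could look something like this. This is purely an illustrative sketch of the argument - prepareTexel and the format strings are invented here, and the gamma-2.2 power law stands in for the full sRGB curve:

```javascript
// Convert one channel sample in [0,1] to the linear space of our canvas,
// based on the source image's encoding.
function prepareTexel(sample, sourceFormat) {
  switch (sourceFormat) {
    case "image/jpeg":
      // Gamma-encoded source: reverse the encoding on upload.
      return Math.pow(sample, 2.2);
    case "image/png":
    default:
      // Treated as already linear: leave the sample alone.
      return sample;
  }
}
```

The display-side correction is then the compositor's job, and it can differ per output device without the application ever knowing.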
> All that is a pretty clear indication that the pixels in the canvas are expected to be in the sRGB color space and when they are composited they are transformed into the display's color space. An author who really cares, can render textures into the WebGL canvas knowing the image is in the sRGB space and that the final image in the canvas should be in the sRGB space, and apply the appropriate factors to make that so.
But our hardware can't process sRGB. So that's a complete non-starter -
but fortunately, the canvas spec allows us to choose the color space of
our canvas, providing we convert on input (where necessary, i.e. JPEGs) -
and providing we gamma-correct on the output (which we MUST do in order
to make things we render, like lighting etc., compatible with the color
space we have to use).
> So my proposal is to call the flag something like IMAGE_COLORSPACE_WEBGL with the values IMAGE_COLORSPACE_SRGB_WEBGL and IMAGE_COLORSPACE_RAW_WEBGL. I think using enumerations make it the most clear. And given the argument above, I think the default should clearly be IMAGE_COLORSPACE_SRGB_WEBGL. If the author is dealing with textures as images (as opposed to some other type of data, like normal maps or floats) then all you have to know is the source and destination color spaces and you can make the proper calculations.
> As far as giving the ability to control the compositing of the output (like we do for premultiplied alpha), I don't think we need to. We just need to say that the pixels in the drawing buffer have to be sRGB.
...and thereby make every single GPU on the planet non-compliant. Even
good GPUs can't read sRGB textures without dropping to GL_NEAREST
filtering, because GL_LINEAR (et al.) requires non-linear blending
operations that absolutely nobody implements.
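The reason the blending has to be non-linear is easy to demonstrate: averaging two gamma-encoded samples is not the same as averaging the underlying linear values and re-encoding. A sketch, again using a simple gamma-2.2 power law for illustration:

```javascript
const enc = (v) => Math.pow(v, 1 / 2.2); // linear -> gamma-encoded
const dec = (v) => Math.pow(v, 2.2);     // gamma-encoded -> linear

const a = 0.0, b = 1.0;                  // black and white texels (linear)
const naive = (enc(a) + enc(b)) / 2;     // filter in gamma space
const correct = enc((a + b) / 2);        // filter in linear space, re-encode
console.log(naive.toFixed(3), correct.toFixed(3)); // 0.500 0.730
```

That gap between 0.5 and ~0.73 is exactly the error you see when a GPU does GL_LINEAR filtering directly on gamma-encoded texels.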