Re: [Public WebGL] Gamma correction and texImage2D/texSubImage2D
I agree with virtually everything you said, except for two important
points; see comments below:
On Mon, Sep 6, 2010 at 4:42 AM, Mark Callow <firstname.lastname@example.org> wrote:
> I have several points which I am grouping in this single message rather
> than sending a flood of new messages.
> Chris Marrin wrote:
> AFAIK, gamma correction is done to make images look right on the selected
> display. It has nothing to do with data in the source image. I believe some
> images might have color correction information in them, but that's different
> from gamma correction.
> The necessary correction most definitely has something to do with the data in
> the source image. It is dependent on that, the display and the viewing
> conditions. Judging from your later posts, I think you have already realized
> this.
> Ken Russell wrote:
> I see nothing in the JPEGImageDecoder related to gamma. Is anything
> needed for this file format? I suspect people will not use JPEGs for
> anything they expect to be passed through verbatim to WebGL, such as
> encoding non-color information in the color channels of a texture.
> There is no such thing as the JPEG file format. There are two file formats
> in common use that store JPEG compressed images: JFIF and EXIF. The JFIF
> spec. does not include color space information. However it does provide
> application tags that can be used to store this information. One example of
> using an application tag is EXIF. EXIF, the output format of the majority of
> digital still cameras, uses an application tag and in it writes a lot of
> metadata about the image including the color space information. Virtually all
> cameras include the color space information when writing this tag.
> If JPEGImageDecoder is not doing anything related to gamma, it is incorrect.
> Just like the PNG case it should be reading the EXIF tags if present
> otherwise assuming a gamma of 2.2 is reasonable.
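Concretely, that fallback decode is just a power function. A minimal sketch in Python (the function names are mine, and the 2.2 default is the no-metadata fallback suggested above, not something read from the file):

```python
def decode_gamma(encoded, gamma=2.2):
    """Gamma-encoded sample in [0, 1] -> linear light.

    gamma=2.2 is the no-metadata fallback suggested above; a real
    decoder should prefer an explicit value from EXIF/ICC metadata.
    """
    return encoded ** gamma

def encode_gamma(linear, gamma=2.2):
    """Inverse: linear light -> gamma-encoded, for display."""
    return linear ** (1.0 / gamma)
```

Note that a mid-gray of 0.5 in a gamma-2.2 file decodes to only about 0.22
in linear light, so treating encoded samples as if they were linear visibly
changes any lighting math done on them.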
> Chris Marrin wrote:
> All that is a pretty clear indication that the pixels in the canvas are
> expected to be in the sRGB color space and when they are composited they are
> transformed into the display's color space. An author who really cares, can
> render textures into the WebGL canvas knowing the image is in the sRGB space
> and that the final image in the canvas should be in the sRGB space, and
> apply the appropriate factors to make that so.
> Since we don't have blend shaders the only way to do this correctly is to
> create another renderbuffer and do another pass over the data.
Actually, an app author can do this directly in their fragment shader.
Just do "gl_FragColor = vec4(pow(linear_rgb, vec3(1.0 / 2.2)), 1.0);" at
the end of the shader (GLSL's pow takes matching vector types, so the
exponent must be a vec3 here). When/if WebGL exposes EXT_framebuffer_sRGB,
the hardware can do this more cheaply (and perhaps more exactly) using a
lookup table.
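For the record, the pow(1.0 / 2.2) shortcut is an approximation of the true sRGB encode, which has a linear segment near black. A quick comparison in Python (the piecewise constants are from the sRGB spec; the helper names are mine):

```python
def srgb_encode(c):
    """Exact sRGB transfer function (linear -> sRGB-encoded), per IEC 61966-2-1."""
    if c <= 0.0031308:
        return 12.92 * c          # linear segment near black
    return 1.055 * c ** (1.0 / 2.4) - 0.055

def gamma22_encode(c):
    """The shader shortcut: a plain 1/2.2 power curve."""
    return c ** (1.0 / 2.2)
```

The two curves agree closely in the midtones and highlights but diverge near
black, where sRGB is linear, so for most content the shader shortcut is fine.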
> But since
> WebGL is already using a renderbuffer to composite the canvas with the page,
> the only approach that makes sense performance wise is for the browser to
> do the conversion while compositing the page. So the canvas needs to be in a
> physically linear space like the ICC profile connection space.
I admit I had to look up "ICC profile connection space". I didn't get
much clarity out of the color.org site, but Wikipedia says it's based
on either CIELAB or CIEXYZ. I'm not sure I've got it right -- are you
saying the canvas should be in CIELAB (or CIEXYZ) coordinates? I.e.
our fragment shaders need to write L, a, and b coords (or X, Y, Z
coords) instead of r, g, and b?
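For reference, linear sRGB to CIEXYZ is just a 3x3 matrix multiply, so the cost of writing XYZ would be small even without hardware help. A Python sketch using the standard D65 matrix (the function name is mine):

```python
# sRGB (linear) -> CIEXYZ matrix for the D65 white point, from IEC 61966-2-1.
SRGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def linear_srgb_to_xyz(r, g, b):
    """Map linear-light sRGB components to CIEXYZ tristimulus values."""
    return tuple(m[0] * r + m[1] * g + m[2] * b for m in SRGB_TO_XYZ)
```

Reference white (1, 1, 1) maps to roughly (0.9505, 1.0, 1.089), the D65
white point.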
> Steve Baker wrote:
> * The PNG file format stores things in linear color space. If you plan
> to display them on a NON gamma corrected medium - then you need to apply
> gamma to it...which (I presume) is what that snippet of code that you
> presented actually does.
> - no need to convert PNGs because they are already linear.
> This is incorrect. PNG provides gAMA, cHRM, sRGB and iCCP metadata chunks to
> allow the encoder to include information about the color space of the image
> samples. In the absence of any of these chunks in the file, the spec says
> When the incoming image has unknown gamma (gAMA, sRGB, and iCCP all absent),
> choose a likely default gamma value, but allow the user to select a new one
> if the result proves too dark or too light. The default gamma can depend on
> other knowledge about the image, like whether it came from the Internet or
> from the local system.
> Nowhere does it suggest that a likely default value is 1.0 (linear). If any
> of the above chunks do exist, the decoder is supposed to use them to display
> the image correctly.
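For anyone implementing this: gAMA stores the file gamma times 100000 as a big-endian uint32, and finding it is a simple chunk walk. A minimal sketch in Python (a real decoder must also honour sRGB and iCCP, which take precedence over gAMA, and should verify chunk CRCs, skipped here):

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_gamma(data):
    """Return the file gamma from a PNG gAMA chunk, or None if absent."""
    assert data[:8] == PNG_SIGNATURE, "not a PNG"
    pos = 8
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        if ctype == b"gAMA":
            # gAMA stores (file gamma * 100000) as a big-endian uint32.
            return struct.unpack(">I", data[pos + 8:pos + 12])[0] / 100000.0
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return None
```

The returned value is the encoding gamma (e.g. 0.45455 for a 1/2.2-encoded
image), so a decoder raises samples to 1/gamma to linearize them.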
> Steve Baker wrote:
> I think if you reverse-gamma JPEG files and leave everything else alone,
> you'll be OK.
> No. See above.
> And some final notes...
> The OpenGL sRGB extensions are rather misnamed. They only really pay
> attention to the transfer function (a.k.a. gamma) and ignore the other parts
> of sRGB such as chromaticities and white & black points. Since OpenGL does
> not specify a color space, they don't have much choice.
> When using sRGB textures, GL converts the incoming texture data to a
> physically linear space. When using sRGB renderbuffers, GL converts the
> blended & multisampled output to the perceptually-linear space of sRGB.
> I believe the correct thing to do in WebGL is specify that the canvas color
> space is the ICC profile connection space. The transfer function of this
> space is physically linear. All other aspects of the color space are also
> specified. For the purposes of the computations specified by OpenGL, these
> don't matter. But for correct conversion from the input space of the images
> to the output space of the display they are very important. Using the PCS
> enables the browser to use the relevant ICC profiles for conversion.
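The sRGB-texture decode you mention (encoded to physically linear) is likewise piecewise. A Python sketch with the spec constants, useful for checking a software path against the hardware conversion (the function name is mine):

```python
def srgb_to_linear(c):
    """sRGB-encoded sample in [0, 1] -> linear light, per IEC 61966-2-1.

    This is the conversion an sRGB texture fetch performs in hardware.
    """
    if c <= 0.04045:
        return c / 12.92          # linear toe near black
    return ((c + 0.055) / 1.055) ** 2.4
```

The two branches meet (to within rounding) at the 0.04045 breakpoint, so
the curve is continuous.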