
Re: [Public WebGL] The Newly Expanded Color Space Issue



On Mon, Sep 13, 2010 at 6:40 AM, Mark Callow <callow_mark@hicorp.co.jp> wrote:
On 09/09/2010 05:28, Vladimir Vukicevic wrote:
----- Original Message -----
Actually, I think a better option for WebGL 1.0 would be to do the simplest thing possible -- for choice 1, that would be to have a boolean flag that says "do whatever gamma correction/colorspace conversion/etc. the browser normally does on images before uploading" or "don't touch the pixels at all".  Doing anything else seems extremely painful to define, and will likely require some pretty intrusive work in the engines to get the conversions right. The piece that I think we need for WebGL 1.0 is the "off" switch, that is, ensuring that you can get raw uncorrected data from images.
The mathematics of the conversion are the same. Only the values change. Converting from image space to linear, or from image space to sRGB, instead of from image space to display space won't change the complexity of the code, which, in the general case between two ICC profiles, is a 4x4 matrix transform.

For the rough conversion browsers likely do today, it is just a matter of using a different exponent in the power function.
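
For concreteness, a minimal sketch of that power-function conversion (the 2.2 exponent and the piecewise sRGB curve below are the commonly published values, not anything the WebGL spec itself defines):

    // Rough gamma conversion: same math in either direction, only the
    // exponent changes with the target space.
    function decodeGamma(v: number, exponent = 2.2): number {
      return Math.pow(v, exponent);        // encoded -> linear
    }

    function encodeGamma(v: number, exponent = 2.2): number {
      return Math.pow(v, 1 / exponent);    // linear -> encoded
    }

    // The exact sRGB transfer function is piecewise, not a pure power:
    function srgbToLinear(v: number): number {
      return v <= 0.04045 ? v / 12.92 : Math.pow((v + 0.055) / 1.055, 2.4);
    }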
For choice 2, the simplest option would be to define the output canvas to always be sRGB (as with CSS) and leave it at that; 
The pieces of the Canvas2D and CSS specs quoted in this thread say that
  • CSS colors are specified in sRGB
  • Canvas2D processing shall be done in the same color space used for CSS processing
  • getImageData/putImageData return/accept colors specified in sRGB space
These specs do not say that processing of the Canvas2D drawings or web page blending must be done in an sRGB space.


no way to modify this, and to not do any conversion; just assume that the pixels that are written are sRGB.  This is what the 2D canvas does, with getImageData/putImageData (with an unfortunate alpha premultiplication step).
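
As an aside, a small sketch of the precision loss that premultiplication step causes -- this is an illustration of the effect, not the actual canvas implementation:

    // With 8-bit storage, premultiplying by a small alpha and dividing
    // back out on getImageData cannot recover the original color.
    function roundTrip8bit(color: number, alpha8: number): number {
      const a = alpha8 / 255;
      const premult = Math.round(color * a);            // value actually stored
      return a === 0 ? 0 : Math.round(premult / a);     // value handed back
    }

    roundTrip8bit(200, 3);  // -> 170, not 200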

Future extensions could add more specific conversions/getContext parameters, but I don't think they're needed in 1.0

As Steve Baker said, we need to be honest. This is a necessary step toward correctly specified extensions now and in the future. For example, if we claim the canvas is sRGB, how do we later explain the purpose of FRAMEBUFFER_SRGB?

There's absolutely nothing dishonest about specifying the (linear) operations that OpenGL performs, and also saying the framebuffer is interpreted as sRGB.  OpenGL only specifies the numeric operations that it performs; it makes no claims that the operations conform to some external physical model.  Certainly, doing linear operations in sRGB space is linear in sRGB space!  Some apps might prefer to work in sRGB space -- e.g. if you want to make a perceptually uniform gradient from dark to light, linear light space is the wrong space to interpolate in.
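
To make that concrete with numbers (an illustration only, using the standard published sRGB transfer functions): take the midpoint of a black-to-white gradient.

    const srgbToLin = (v: number) =>
      v <= 0.04045 ? v / 12.92 : Math.pow((v + 0.055) / 1.055, 2.4);
    const linToSrgb = (v: number) =>
      v <= 0.0031308 ? v * 12.92 : 1.055 * Math.pow(v, 1 / 2.4) - 0.055;

    // Interpolating the encoded sRGB values gives 0.5, which most
    // viewers judge as roughly halfway between black and white.
    const midInSrgb = (0.0 + 1.0) / 2;                                  // 0.5

    // Interpolating in linear light and re-encoding gives ~0.735,
    // which looks noticeably lighter than the perceptual midpoint.
    const midInLinear = linToSrgb((srgbToLin(0.0) + srgbToLin(1.0)) / 2);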

So -- maybe some developers might read the spec and somehow assume their fragment shader works in linear light space, but that would be their misunderstanding.

I completely support having some documentation that accurately explains what this all means: that in the typical default case, the components come in as sRGB and are output as sRGB, and that if you apply a linear operation to those numbers, your interpolations are linear in sRGB space but not linear in light space. Likewise, explain how texture sampling and alpha-blending are affected.
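
For example (again just an illustration, with the same transfer functions as above), blending a 50%-opaque white fragment over black with the usual source-over equation gives a different number depending on which space the blend runs in:

    const srgbToLin = (v: number) =>
      v <= 0.04045 ? v / 12.92 : Math.pow((v + 0.055) / 1.055, 2.4);
    const linToSrgb = (v: number) =>
      v <= 0.0031308 ? v * 12.92 : 1.055 * Math.pow(v, 1 / 2.4) - 0.055;

    const over = (src: number, dst: number, a: number) => src * a + dst * (1 - a);

    over(1.0, 0.0, 0.5);                                   // 0.5, blending the encoded values
    linToSrgb(over(srgbToLin(1.0), srgbToLin(0.0), 0.5));  // ~0.735, blending in linear light

Bilinear texture filtering is the same story: averaging two texels in encoded space gives a darker result than averaging them in linear light and re-encoding.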

In that context, EXT_*_sRGB are perfectly explainable.

-T