Re: [Public WebGL] The Newly Expanded Color Space Issue
- To: Mark Callow <email@example.com>
- Subject: Re: [Public WebGL] The Newly Expanded Color Space Issue
- From: Thatcher Ulrich <firstname.lastname@example.org>
- Date: Mon, 13 Sep 2010 13:26:35 +0200
- Cc: Vladimir Vukicevic <email@example.com>, Chris Marrin <firstname.lastname@example.org>, public webgl <email@example.com>
- In-reply-to: <4C8DAB3B.firstname.lastname@example.org>
- References: <57565891.514173.1283977691012.JavaMail.email@example.com> <4C8DAB3B.firstname.lastname@example.org>
- Sender: email@example.com
On Mon, Sep 13, 2010 at 6:40 AM, Mark Callow <firstname.lastname@example.org> wrote:
On 09/09/2010 05:28, Vladimir Vukicevic wrote:
----- Original Message -----
Actually, I think a better option for WebGL 1.0 would be to do the simplest thing possible -- for choice 1, that would be to have a boolean flag that says "do whatever gamma correction/colorspace conversion/etc. the browser normally does on images before uploading" or "don't touch the pixels at all". Doing anything else seems extremely painful to define, and will likely require some pretty intrusive work in the engines to get the conversions right. The piece that I think we need for WebGL 1.0 is the "off" switch, that is, ensuring that you can get raw uncorrected data from images.
The mathematics of the conversion are the same; only the values
change. Converting from image space to linear, or from image space
to sRGB, instead of from image space to display space won't change
the complexity of the code, which, in the general case of converting
between two ICC profiles, is a 4x4 matrix transform.
In the likely rough conversion browsers currently do, it is just a
case of using a different exponent in the power function.
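To make the "different exponent" point concrete, here is a small sketch (my own illustration, not code from any browser) contrasting a rough power-law decode with the exact piecewise sRGB transfer function; only the constants change, not the shape of the code:

```javascript
// Rough power-law approximation (what a "rough conversion" might use):
function roughDecode(c, gamma = 2.2) {
  return Math.pow(c, gamma);
}

// The piecewise sRGB electro-optical transfer function (decode):
function srgbToLinear(c) {
  return c <= 0.04045 ? c / 12.92
                      : Math.pow((c + 0.055) / 1.055, 2.4);
}

// And its inverse (encode):
function linearToSrgb(c) {
  return c <= 0.0031308 ? c * 12.92
                        : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}

// srgbToLinear(0.5) ≈ 0.214 -- a mid-gray pixel value is far from
// mid-gray light intensity.
```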
For choice 2, the simplest option would be to define the output canvas to always be sRGB (as with CSS) and leave it at that;
The pieces of the Canvas2D and CSS specs quoted in this thread say
- CSS colors are specified in sRGB
- Canvas2D processing shall be done in the same color space used
for CSS processing
- getImageData/putImageData return/accept colors specified in sRGB
These specs do not say that processing of the Canvas2D drawings or
web page blending must be done in an sRGB space.
no way to modify this, and to not do any conversion; just assume that the pixels that are written are sRGB. This is what the 2D canvas does, with getImageData/putImageData (with an unfortunate alpha premultiplication step).
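A small sketch of why that premultiplication step is unfortunate, assuming 8-bit component storage as Canvas2D uses (the helper names are mine, for illustration):

```javascript
// Storing premultiplied alpha in 8 bits is lossy: distinct source
// colors can collapse to the same stored byte at low alpha.
function premultiply(r, a) {           // store round(r * a/255)
  return Math.round(r * (a / 255));
}
function unpremultiply(rp, a) {        // recover r on read-back
  return a === 0 ? 0 : Math.round(rp * 255 / a);
}

const a = 16;                          // a fairly transparent pixel
premultiply(200, a);                   // -> 13
premultiply(210, a);                   // -> 13 (information lost)
unpremultiply(premultiply(200, a), a); // -> 207, not the original 200
```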
Future extensions could add more specific conversions/getContext parameters, but I don't think they're needed in 1.0.
As Steve Baker said, we need to be honest. This is a necessary step
toward correctly specified extensions, now and in the future. For
example, if we claim the canvas is sRGB, how do we later explain the
purpose of FRAMEBUFFER_SRGB?
There's absolutely nothing dishonest about specifying the (linear) operations that OpenGL performs, and also saying the framebuffer is interpreted as sRGB. OpenGL only specifies the numeric operations that it performs; it makes no claims that the operations conform to some external physical model. Certainly, doing linear operations in sRGB space is linear in sRGB space! Some apps might prefer to work in sRGB space -- e.g., if you want to make a perceptually uniform gradient from dark to light, linear light space is the wrong space to interpolate in.
So -- maybe some developers might read the spec and somehow assume their fragment shader works in linear light space, but that would be their misunderstanding.
I completely support having some documentation that accurately explains what this all means. I.e. that in the typical default case, the components come in as sRGB and are output as sRGB, and that if you apply a linear operation to those numbers, your interpolations are linear in sRGB space but not linear in light space. Likewise, explain how texture sampling and alpha-blending are affected.
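To illustrate that last point, here is a sketch (my own, using the standard sRGB transfer function) showing that the same linear operation -- averaging -- gives different answers depending on which space the numbers live in:

```javascript
// Standard sRGB decode/encode:
function srgbToLinear(c) {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}
function linearToSrgb(c) {
  return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}

// Midpoint of a black-to-white gradient, computed two ways:
const midInSrgb  = (0 + 1) / 2;                              // lerp sRGB values
const midInLight = linearToSrgb((0 + srgbToLinear(1)) / 2);  // lerp linear light

// midInSrgb = 0.5, midInLight ≈ 0.735 -- interpolating in sRGB space
// is NOT the same operation, in light terms, as interpolating in
// linear light space.
```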
In that context, EXT_*_sRGB are perfectly explainable.