Re: [Public WebGL] The Newly Expanded Color Space Issue
On September 7, 2010, at 11:36, Chris Marrin <email@example.com> wrote:
> On Sep 7, 2010, at 11:14 AM, Steve Baker wrote:
>> That's is PRECISELY why I want someone to answer those three questions
>> rather than just shooting more long emails at each other.
> Sorry to disappoint you. I have neither the expertise, nor the time to become a sufficient expert to give correct answers to your questions, if any such answers even exist. And those answers are not necessary to our conversation. Please read on.
This is a belated reply at this point, but I want to get back to
Steve's questions. (He's right, but there's not much we can do about
it.)
The point is that (without extensions in OpenGL) the answers don't
exist. Regardless of what you output from a shader, fixed raster
operations like interpolation and blending are designed for a linear
color space. This implies that the output from a shader (and thus the
stored pixel values) needs to be in a linear color space if you want
correct blending and correct interpolation of fragment shader
attributes. Texture data also needs to be linear if you're filtering,
including across mip levels.
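To make the problem concrete, here is a minimal sketch of the standard sRGB transfer functions (per IEC 61966-2-1; the function names are mine) showing that averaging two pixels, which is what blending and interpolation do, gives a different answer on raw sRGB values than it does in linear light:

```python
def srgb_to_linear(c):
    """Decode one sRGB channel value in [0, 1] to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode one linear-light channel value in [0, 1] back to sRGB."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Average black and white directly on sRGB values vs. correctly in linear:
a, b = 0.0, 1.0
naive = (a + b) / 2                                              # -> 0.5
correct = linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)
print(naive, round(correct, 3))                                  # -> 0.5 0.735
```

The hardware's fixed-function blend produces the naive result when the framebuffer holds sRGB values, which is why a 50/50 blend of black and white comes out noticeably darker than the perceptual midpoint.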
Most users seem OK with using input images in a non-linear space,
incorrectly operating on them with linear operations, and then using
those pixel values directly as output. It's not mathematically
correct, and I balk a little at implicitly endorsing it as good
behavior, but there's no arguing that it's widely used. This approach
requires no change to WebGL and can be followed today.
The other approach is to attempt to enable mathematically correct
behavior. Sadly, our options are limited:
1) Higher-precision linear input textures. (Not possible at this point.)
2) Store the framebuffer in sRGB. (No EXT_framebuffer_sRGB in GLES2.)
3) Convert textures to linear space at sample time. (No sRGB texture
formats in GLES2.)
4) Convert textures to linear space at upload time. (Loss of precision.)
5) Convert the linear framebuffer to sRGB before compositing.
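The precision loss in option #4 is easy to quantify: re-encoding an 8-bit sRGB image as 8-bit linear at upload time collapses many distinct dark codes into the same linear code, which is exactly where the banding comes from. A small sketch (the function name is mine):

```python
def srgb_to_linear(c):
    """Decode one sRGB channel value in [0, 1] to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

# Map every 8-bit sRGB code to the nearest 8-bit linear code and count
# how many distinct linear codes survive the round trip.
linear_codes = {round(srgb_to_linear(i / 255) * 255) for i in range(256)}
print(f"{len(linear_codes)} of 256 codes survive; "
      f"{256 - len(linear_codes)} distinct shades are merged")
```

The merged codes are concentrated in the shadows, where the sRGB curve packs many codes into a small range of linear values, so dark gradients band first.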
Given these limitations, #4 and #5 are the only options we can
control. Unfortunately, that combination has all the banding artifacts
that Thatcher points out and creates a correctness vs. precision
trade-off.
If we want to enable this trade-off, we'd need _both_ a packing flag
to turn non-linear input textures into linear ones and a context
creation flag to convert the linear framebuffer back into sRGB before
the compositor uses it. Without the latter option to change compositor
behavior (i.e., the option to treat the canvas as being in the sRGB
color space), there'd be no way to pass an sRGB texture through WebGL
unchanged without a loss of precision.
Both of these ideas have been proposed before separately upthread, but
I think we need both.