
Re: [Public WebGL] Adding internalformat param to all texImage2d variants



On Thu, May 20, 2010 at 10:13 AM, Cedric Vivier <cedricv@neonux.com> wrote:
> On Fri, May 21, 2010 at 00:03, Chris Marrin <cmarrin@apple.com> wrote:
>> If the original image is grayscale and the implementation is storing it as RGB, then you know you can simply choose one of the channels.
>
> Yes, and just "arbitrarily" choosing R works as well in this case.
>
>>> This is not arbitrary if you consider that the conversion table fully
>>> mirrors the behavior of samplers in ES / GL.
>>> Imho what is arbitrary is to define how grayscaling should be done, as
>>> Gregg put it earlier:
>>> "What does it mean to convert to gray scale?  Is this a gamma biased
>>> conversion? One that multiplies R G and B as appropriate for their
>>> overall contribution to luminance? Take max of R, G, B? What if my app
>>> needs it one way and yours another?"
>>
>> There has been a standard technique for converting RGB to grayscale for at least 50 years. If it's not what you want then you're free to load RGB and use the shader to do the conversion.
>
> I assume the "standard" formula you are thinking about is L = 0.3*R +
> 0.59*G + 0.11*B.
> Your argument that it can be done in the shader is exactly the
> reverse of mine: if you want to use the formula above, you can do it
> in the shaders...
> On the other hand, you didn't consider the important implication that
> this technique (as opposed to just using R) makes WebGL's conversion
> incompatible with GL's conversion and sampling; strict equivalence
> with GL is the only way to guarantee equivalent results in both
> software and shaders.
>
> Also, your formula would make WebGL unable to just use glTexImage2D
> directly on desktop GL (it uses the R component).
>
>
>>> I would think that WebGL doing such conversion from an RGB that is not
>>> already greyscale would be a little bit too implicit imo.
>>
>> Not at all. It's there to fill in the box in the table. As it turns out, it's extremely convenient to be able to use an RGB as a grayscale image. It allows you to use a JPG image to store a mask, for instance.
>
> You can do that already, provided that the JPG image is grayscale (in
> which case R = G = B).
> What I meant by "too implicit" is that using an image as a mask
> typically requires some preprocessing in the asset production
> pipeline for meaningful results. Can you point to an example where a
> non-grayscale image is used as a mask in a meaningful way?
>
>
>>> I argue that if the developer asks for the image to be stored in
>>> LUMINANCE format it is his responsibility to ensure the image is
>>> prepared for such usage (e.g normal|bump maps), if he wants instead to
>>> present a greyscaled image on the screen this can be done easily and
>>> dynamically in shaders with the RGB to greyscale formula of his
>>> choosing.
>>
>> Yes, it makes sense that the author needs to understand what they're doing. But if you have a feature that converts RGB to grayscale, why not do it the right way? Are you concerned about the expense?
>
> No, I'm concerned about introducing a conversion that is not done by
> GL in general or ES samplers in particular, for all reasons above.
>
> That said, I don't necessarily think that doing RGB-to-grayscale
> conversion as in the formula above is a bad idea; I just think it
> must be explicit so as not to silently introduce all of the
> incompatibilities mentioned, e.g. with a DOM_GRAYSCALE boolean
> texture parameter, if we do have an extensible way to define new
> DOM-to-texture conversion options as discussed in another thread.

It seems to me that the proposed semantics are becoming very
complicated. The RGB -> grayscale conversion operation seems like it
would do more computation than a typical OpenGL entry point.
Supporting all of the conversions discussed in
http://neonux.com/webgl/conversions.html sounds like a lot of code to
add to implementations. As Gregg points out, some important use cases
like the ability to request a lower-precision internal format are not
met by simply adding the OpenGL ES internalformat parameter for these
entry points.
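For concreteness, the two per-pixel conversions under discussion can be sketched in plain JavaScript (the function names are illustrative, not part of any proposed API):

```javascript
// GL-style behavior for a LUMINANCE source: take the R channel only.
function luminanceFromR(r, g, b) {
  return r;
}

// The weighted-luma formula Cedric quotes: L = 0.3*R + 0.59*G + 0.11*B.
function luminanceWeighted(r, g, b) {
  return 0.3 * r + 0.59 * g + 0.11 * b;
}

// The two agree when the source is already grayscale (R = G = B)...
console.log(luminanceFromR(128, 128, 128));    // 128
console.log(luminanceWeighted(128, 128, 128)); // ~128
// ...and diverge on colored pixels, which is the incompatibility with
// desktop GL sampling that Cedric describes.
console.log(luminanceFromR(255, 0, 0));        // 255
console.log(luminanceWeighted(255, 0, 0));     // ~76.5
```

Running the weighted version over every pixel of every uploaded image is the kind of extra per-upload computation referred to above.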

It may not be apparent, but there is a lot of work left to do to
implement the functionality and semantics in the current WebGL spec.
It is known that there will be issues with version 1.0 of the spec;
for example, lack of compressed texture support. However, I think that
it is better to ship a working WebGL 1.0 and begin to extend it into
WebGL 1.1 (and to think about exactly how developers will cope with
that upgrade process) than to hold up version 1.0 of the spec
indefinitely.

I suggest that the best path forward might be to leave the signatures
of the texImage2D and texSubImage2D helpers taking DOM elements as
they are currently. Not adding an internalformat argument would make
it easier to add overloads later which have either this argument or
both internalformat and type arguments. We can tighten up the
specification of the current entry points to guarantee their behavior
across browsers.
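Either way, as both sides of the thread note, an author who needs a specific RGB-to-grayscale mapping can already apply it explicitly at sample time in a fragment shader. A minimal sketch (the uniform and varying names here are illustrative, not from any spec):

```javascript
// Fragment shader source that grayscales an ordinary RGB(A) texture at
// sample time, so no implicit conversion during texImage2D is needed.
const grayscaleFragmentShader = `
  precision mediump float;
  uniform sampler2D u_texture;
  varying vec2 v_texCoord;
  void main() {
    vec4 color = texture2D(u_texture, v_texCoord);
    // Weighted luma; swap in whatever formula the app actually needs.
    float luma = dot(color.rgb, vec3(0.3, 0.59, 0.11));
    gl_FragColor = vec4(vec3(luma), color.a);
  }
`;
```

This keeps the choice of formula in the application, where it can differ per use case, rather than baking one conversion into the upload path.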

What are your thoughts?

-Ken
