Re: [Public WebGL] Move flipY and asPremultipliedAlpha parameters out of DOM helpers
On Thu, May 20, 2010 at 8:53 AM, Chris Marrin <firstname.lastname@example.org> wrote:
On May 19, 2010, at 7:40 PM, Gregg Tavares wrote:
> I don't see how loading a grayscale as RGB is the same as LUMINANCE. The OpenGL texturing engine deals with LUMINANCE textures differently from RGB. The problem is that the information that an original image was one channel is lost in the browser implementations. Even if it weren't (if the browsers were forced to pass along the original image format information), implementations would still have to do an internal format conversion to get the image into the right format. So why not expose that to the author?
> What I don't get is why we need these conversions at all. If you load a grayscale image as RGB you'll get the same visual result as LUMINANCE. So it seems like the only valid reason to do these conversions is memory savings, since you can always create RGB or RGBA textures that will give you the same visual result.
Can you point out the difference between an RGB image where R == G == B and a LUMINANCE texture? I'm unaware of this difference.
As far as I can tell there is no difference. There's also no difference between an RGBA image where R == G == B and a LUMINANCE_ALPHA texture.
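To make the equivalence concrete, here's a small sketch (mine, not from the thread) of how a LUMINANCE texel samples versus a grayscale RGB texel. Per the GL spec, a LUMINANCE texture samples as (L, L, L, 1.0), so an RGB texel with R == G == B round-trips to and from a single luminance value with no information lost:

```javascript
// A LUMINANCE texel samples as (L, L, L, 1.0) -- here in 8-bit terms.
function luminanceToRGBA(l) {
  return [l, l, l, 255];
}

// The inverse is only well-defined when R == G == B,
// i.e. when the image really was grayscale to begin with.
function rgbaToLuminance(rgba) {
  const [r, g, b] = rgba;
  if (r !== g || g !== b) throw new Error("not a grayscale texel");
  return r;
}
```

A shader sampling either representation sees identical values, which is why the two are indistinguishable from the WebGL program's point of view.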
So arguably, the only point in providing the conversions is memory savings. Otherwise there's no functional difference.
Technically that means it doesn't matter what the browser does. It can upload a grayscale image as LUMINANCE or it can upload it as expanded RGB. The WebGL program will not notice the difference; it won't even be able to query the difference.
So the only reason to expose explicit conversions is memory savings. If that's the only reason, then we should go further. Otherwise we shouldn't expose the conversions at all.
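For a sense of what those savings look like, a quick back-of-the-envelope sketch (illustrative numbers, assuming a 1024x1024 image and 8 bits per channel):

```javascript
// Texture memory cost of a grayscale image at different formats,
// assuming a 1024x1024 image and one byte per channel.
const width = 1024, height = 1024;
const luminanceBytes = width * height * 1; // LUMINANCE: 1 byte/texel
const rgbBytes       = width * height * 3; // RGB:       3 bytes/texel
const rgbaBytes      = width * height * 4; // RGBA:      4 bytes/texel
console.log(rgbaBytes / luminanceBytes);   // -> 4
```

Expanding to RGBA quadruples the texture memory for an identical visual result, which is the whole case for keeping the one-channel formats reachable.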
The problem is that HTMLImageElement doesn't give you access to the source pixels. In a native OpenGL ES program, you'd swizzle the incoming pixel data as needed before passing it to texImage2D. You don't get that opportunity with HTMLImageElement. The choices are:
1) Copy HTMLImageElement and HTMLVideoElement into a 2D canvas and then extract the ImageData from that. Swizzle the data as needed in JS. Then accept ImageData in texImage2D (which is already done).
2) Add auxiliary methods to swizzle HTMLImageElement (and friends). Put the result in an ArrayBuffer and pass that (which is already done)
3) Add texParameter() enums to do the work
4) Force the browsers to supply original source information and always convert to the original source format.
5) Do nothing. There's no way to input a 1- or 2-channel image except through an ArrayBuffer
6) Have internalformat in texImage2D for HTMLImageElement and friends.
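As a sketch of what the swizzle step in option (1) would look like: after drawing the image into a 2D canvas and calling getImageData() (both browser-only and elided here), you'd pack the RGBA data down to one channel in JS before handing it to texImage2D as an ArrayBuffer. The function name is mine, not from the thread:

```javascript
// Pack RGBA pixel data (as returned by getImageData().data) down to a
// 1-channel LUMINANCE buffer suitable for the ArrayBuffer overload of
// texImage2D. Assumes the source image is grayscale (R == G == B),
// so taking the R channel loses nothing.
function rgbaToLuminanceBuffer(rgba, width, height) {
  const out = new Uint8Array(width * height);
  for (let i = 0; i < width * height; ++i) {
    out[i] = rgba[i * 4]; // take R; G and B are assumed equal
  }
  return out;
  // In the browser the upload would then be roughly:
  // gl.texImage2D(gl.TEXTURE_2D, 0, gl.LUMINANCE, width, height, 0,
  //               gl.LUMINANCE, gl.UNSIGNED_BYTE, out);
}
```

This is the per-pixel JS loop (plus the canvas round-trip) that makes option (1) so slow compared to letting the browser do the conversion natively.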
(1) would be incredibly slow. (2) would not allow the optimizations I mentioned before, and (3) makes it a general-purpose addition to WebGL, which moves it further away from OpenGL ES. I don't think we want to do that; we just want a bridge between the HTML elements and WebGL. (4) would require extra work in the browsers and could be problematic, as mentioned by Tim. I don't think (5) is a reasonable option, since there would be no practical way to get 1- or 2-channel images into the system.
(6) seems like the best compromise between features, simplicity, and implementability.