Re: [Public WebGL] Adding internalformat param to all texImage2d variants
On 2010-05-19 19:24, Chris Marrin wrote:
Sounds reasonable. I think we need to add a way to check the format for
that to work, rather than just depending on the script to figure it out.
But we will need to keep track of the format for automatic conversions
anyway, so it should not be much more work to add a way to ask the image
what its original format was.
On May 18, 2010, at 3:42 PM, Johannes Behr wrote:
On 18 May 2010, at 23:28, Chris Marrin wrote:
On May 18, 2010, at 1:31 PM, Johannes Behr wrote:
...In thinking about this some more, I agree with Cedric and Gregg that returning the actual format is not really necessary. If we're able to support gl.NONE (or some custom enum as Mark suggests) as an internalformat and it loads the image in the original source format, I think that satisfies the requirements of X3DOM. So I think we should eliminate getTexLevelParameter() and just add internalformat to texImage2D(), plus an enum which lets us load the image in its original format.
The difference is that we can no longer query the format but must supply the "internalFormat" with every texture. Correct?
I don't think it's any different. You would always pass NO_CONVERSION (or whatever we end up calling it), which would use whatever the input format was.
But how could we know what the format was?
I think that would do the same thing that X3D does today. There's no additional X3D requirement to know what the format is, right?
Again: the lighting model works differently if you have e.g. L/LA or RGB/RGBA as the input format.
We can easily live with everything converted to RGB(A), as long as we know
what the original format was so we can select the right shader.
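As an illustration of that point (a sketch under my own naming assumptions; none of these identifiers come from the thread): even if the texture data itself is expanded to RGBA, keeping the reported source format alongside it lets the application pick the matching lighting variant.

```javascript
// Sketch: map a remembered source format to a shader variant name.
// All names here are hypothetical, for illustration only.
function shaderVariantFor(sourceFormat) {
  switch (sourceFormat) {
    case "LUMINANCE":       return "lighting_gray";
    case "LUMINANCE_ALPHA": return "lighting_gray_alpha";
    case "RGB":             return "lighting_rgb";
    case "RGBA":            return "lighting_rgba";
    default:                return "lighting_rgba"; // safe fallback
  }
}
```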
This is where things get problematic. If we provide a mechanism to convert to the desired internalformat, without the NO_CONVERT option, most authors have all the information they need. They can force the format they want, regardless of the input format, and write their shaders accordingly. It might be better to just provide the explicit conversion options (without NO_CONVERT) and then come up with a different method of discovering the image formats. You should be able to determine the basic image type from the mime-type, right? In the case of JPG, this defines the format. Is there any way to use XHR to read the header of a PNG image and read the format tag from that?
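On the XHR idea: a PNG's color type does sit at a fixed offset in the file, so a script could fetch the raw bytes (e.g. XHR with responseType "arraybuffer") and inspect the header directly. A minimal sketch (the helper name is mine, not from the thread): the IHDR chunk always follows the 8-byte PNG signature, so the color-type byte is at offset 25.

```javascript
// Sketch: read the PNG color type from raw file bytes.
// Layout: 8-byte signature, then IHDR chunk = 4-byte length + "IHDR"
// + width(4) + height(4) + bit depth(1) + color type(1) + ...
// so the color type is always the byte at offset 25.
function pngColorType(bytes) {
  const SIG = [0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a];
  for (let i = 0; i < 8; i++) {
    if (bytes[i] !== SIG[i]) throw new Error("not a PNG");
  }
  // 0 = grayscale, 2 = RGB, 3 = palette, 4 = grayscale+alpha, 6 = RGBA
  return bytes[25];
}
```

Only the first 26 bytes are needed, so a ranged request would be enough.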
JPEGs are not always RGB (they rarely are; they are usually YCbCr, which
has no alpha). In fact, JPEG does not specify the color space at all;
there are separate headers for that (JFIF/APP0 and Adobe/APP14). Grayscale
images are common, and it is in theory even possible to create a JPEG with
an alpha channel, but I have never seen one :)
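For what it's worth, the same header-sniffing approach works for JPEG too: the component count (1 = grayscale, 3 = usually YCbCr, 4 = CMYK/YCCK) is stored in the SOF marker segment. A rough sketch (the helper name is mine), scanning marker segments until a start-of-frame marker is found:

```javascript
// Sketch: find the SOF marker in a JPEG and return its component count.
// Each non-standalone segment is 0xFF, marker byte, then a big-endian
// 2-byte length that includes the length field itself.
function jpegComponentCount(bytes) {
  if (bytes[0] !== 0xff || bytes[1] !== 0xd8) throw new Error("not a JPEG");
  let i = 2;
  while (i + 3 < bytes.length) {
    if (bytes[i] !== 0xff) throw new Error("bad marker");
    const marker = bytes[i + 1];
    // Standalone markers (TEM, RSTn) carry no length field.
    if (marker === 0x01 || (marker >= 0xd0 && marker <= 0xd7)) {
      i += 2;
      continue;
    }
    // SOF0..SOF15, excluding DHT (0xc4), JPG (0xc8), DAC (0xcc).
    if (marker >= 0xc0 && marker <= 0xcf &&
        marker !== 0xc4 && marker !== 0xc8 && marker !== 0xcc) {
      // Payload: length(2) + precision(1) + height(2) + width(2) + components(1)
      return bytes[i + 9];
    }
    i += 2 + ((bytes[i + 2] << 8) | bytes[i + 3]);
  }
  throw new Error("no SOF marker found");
}
```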