
Re: [Public WebGL] Moving float color buffer proposals to draft (was Re: Fixing the OES_texture_float mess)

On Thu, Nov 15, 2012 at 6:08 PM, Mark Callow <callow.mark@artspark.co.jp> wrote:

On 2012/11/15 17:13, Gregg Tavares (社用) wrote:

I'm not following your definition of abuse here. The way it works is format/type defines the format you want the browser to convert the image data to. internal_format (and type because it's ES 2.0) defines the format you want the texture stored internally.
No it does not. format and type tell the GL how many components of what type of data you are providing.

Exactly. You tell it a format and type you are providing. WebGL provides that type and format.

Internal format is unrelated to the type of data you're supplying, which is why internal_format is not used for the image conversion.

From that, ES decides how to store the data. It may truncate (8 bits to 4 or 5 bits) or enlarge (4 or 5 bits to 8 bits) the component sizes but I think most modern implementations will store them unchanged. In unextended ES 2.0 internalformat is not used. In extended ES 2.0, ES 3.0 and GL, internalformat tells the GL either the number of components (unsized formats) or the number of components and the component size at which to store the texture.

In ES if you specified

texImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0, GL_RGBA, GL_FLOAT, data)

where data points to unsigned byte data (which is in effect what WebGL's OES_texture_float is telling the app to do)

No it's not. format and type tell it the type of data you are supplying. WebGL will supply that data. If you tell it format = GL_RGBA, type = GL_FLOAT, WebGL will supply GL_RGBA, GL_FLOAT (16 bytes per pixel).

then your application would most likely crash with a segmentation violation; if it did not the texture would be a complete mess. This is because the GL would be expecting to read 256*256*16 bytes of float data but would only be able to read 256*256*4 bytes of byte data.

For 2 reasons

1) because ES uses type to decide on the internal format.

internal_format = RGBA + type = GL_UNSIGNED_SHORT_4_4_4_4  = use GL_RGBA4 as the real internal format
GL_UNSIGNED_SHORT_4_4_4_4 and GL_RGBA4 are the same format.

GL_UNSIGNED_SHORT_4_4_4_4 is a *type*
GL_RGBA4 is an internal_format from desktop OpenGL.

ES infers an internal format from the type argument. 

2) because apps can break if they are expecting a certain precision

So, letting the user tell the browser how to massage the data before texImage2D is called lets the user know the exact values of the data. There are tests for this in the conformance tests.
It should have been done through internalformat and will likely have to be done that way in WebGL 2.0.

You can't leave it up to the implementation to choose a format based on the content of the image tag. That would leave the developer with no way to update the texture with texSubImage2D, nor any way to manually specify mip levels with texImage2D, as they'd have no idea what the format of the texture was. If the format/type doesn't match, you'd get INVALID_OPERATION when calling texSubImage2D, and you'd get an unrenderable texture from texImage2D with incompatible mip levels.
Again it should have been done through the internalformat parameter.

I am not arguing against the functionality only the confusing misuse of the type parameter.




NOTE: This electronic mail message may contain confidential and privileged information from HI Corporation. If you are not the intended recipient, any disclosure, photocopying, distribution or use of the contents of the received information is prohibited. If you have received this e-mail in error, please notify the sender immediately and permanently delete this message and all related copies.