
Re: [Public WebGL] Moving float color buffer proposals to draft (was Re: Fixing the OES_texture_float mess)



On Mon, Nov 19, 2012 at 4:24 AM, Mark Callow <callow.mark@artspark.co.jp> wrote:
The <type> parameter is not superfluous. It is the only way the OpenGL implementation knows what type of data the application is providing. For the integer case you cited, there are several integer formats that could be used to supply that data. They are not disambiguated by <format>.
Right, but those are fringe use cases (99% of the time it's just vanilla bytes).
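To illustrate, here is a rough sketch (names are placeholders) of the same <format>, GL_RGBA, paired with three different <type> values; only <type> tells the GL how the client data is laid out:

#include <GLES2/gl2.h>

/* Same <format> (GL_RGBA), three different client-side layouts;
   only <type> disambiguates them. */
void upload_rgba_variants(GLsizei w, GLsizei h,
                          const GLubyte  *bytes,   /* 4 bytes per texel  */
                          const GLushort *packed)  /* 1 ushort per texel */
{
    /* The vanilla-bytes case: 8 bits per component. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, bytes);

    /* Packed 16-bit texels: same <format>, different <type>. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, packed);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_SHORT_5_5_5_1, packed);
}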
 
For ES 2.0, unfortunately, <format> and <type> do not specify the internal format completely. The implementation has freedom to choose the number of bits at which it stores each component.

The data absolutely has to match the type. If it does not, the application is likely to crash. If it does not crash, the texture will be nonsense.

Never said otherwise.
 
As I said when I started this thread (and again above), if you try that in native ES, the app will most likely crash. The app first has to convert the data from an unsigned byte array to FLOAT.
Exactly.
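To make that concrete, here is a rough sketch of the conversion a native ES app has to do before a type=FLOAT upload; the helper name is made up, and it assumes RGBA data plus OES_texture_float:

#include <GLES2/gl2.h>
#include <stdlib.h>

/* Hypothetical helper: normalize 8-bit components to floats in [0, 1].
   Passing the byte array straight to a type=GL_FLOAT upload would make
   the driver read 4x more memory than the app allocated. */
GLfloat *bytes_to_floats(const GLubyte *src, size_t count)
{
    GLfloat *dst = malloc(count * sizeof(GLfloat));
    if (!dst) return NULL;
    for (size_t i = 0; i < count; ++i)
        dst[i] = src[i] / 255.0f;
    return dst;
}

void upload_float_texture(GLsizei w, GLsizei h, const GLubyte *bytes)
{
    GLfloat *floats = bytes_to_floats(bytes, (size_t)w * h * 4);
    if (!floats) return;
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_FLOAT, floats);
    free(floats);
}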
 
OES_required_internalformat.
This does not deal with floating-point textures, just with variations of integer formats and some special formats for depth.

As far as I know, nothing in the entire body of the OpenGL ES 2.0 specification AND its extensions offers any other way: to get a floating-point texture, which you cannot create from a byte array, you have to pass internalformat=RGBA, format=RGBA, type=FLOAT. So shouldn't that be what WebGL does, and isn't that what WebGL already does?
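In code, that recipe would look something like this sketch (names are placeholders); WebGL's texImage2D takes the same arguments, with a Float32Array as the final parameter:

#include <GLES2/gl2.h>

/* The one recipe ES 2.0 + OES_texture_float gives for a float texture.
   'pixels' must really point at w*h*4 GLfloats (see the conversion
   sketch above). */
void create_float_texture(GLsizei w, GLsizei h, const GLfloat *pixels)
{
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_FLOAT, pixels);
}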