Normalized Integer
A Normalized Integer is an integer that is used to store a floating-point value within a fixed range. When formats use such integers, OpenGL will automatically convert them to/from floating-point values as needed. This allows normalized integers to be treated as equivalent to floating-point values, acting as a form of compression.
For example, if a 2D Texture's Image Format uses normalized integers, it will still be treated as a floating-point texture. The sampler type the shader uses will be sampler2D, just like for a floating-point texture. If you use this image in the framebuffer and write to it from the Fragment Shader, the output variables will be floating-point vectors, not integer ones.
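For instance, the following minimal C sketch (the function name and parameters are illustrative; it assumes a current OpenGL 3.3+ context) allocates a texture with the unsigned normalized GL_RGBA8 format. A GLSL shader samples it through an ordinary sampler2D and receives floats in [0, 1]:

```c
/* Sketch: allocating an 8-bit unsigned normalized texture.
 * Assumes a current OpenGL context; error checking omitted. */
#include <GL/gl.h>

GLuint makeNormalizedTexture(int width, int height, const unsigned char *pixels)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* GL_RGBA8 stores 8-bit unsigned normalized integers per channel,
     * but shaders see it through a regular sampler2D and get floats
     * in [0, 1]. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    return tex;
}
```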
The downside to integer normalization is that normalized integers can only represent floating-point values in the range [0.0, 1.0] or [-1.0, 1.0], depending on whether they are unsigned or signed. This is sufficient for colors in many cases, and it also works for some vertex inputs, such as texture coordinates and normals. A sketch of the vertex-input case follows.
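Here is a hedged C sketch of feeding normals as signed 16-bit normalized integers; the attribute index and layout are illustrative, and it assumes a bound VAO, a bound GL_ARRAY_BUFFER holding three GLshorts per vertex, and an extension loader providing the GL 2.0+ entry points:

```c
/* Sketch: declaring a vertex attribute stored as 16-bit signed
 * normalized integers. The shader declares it as `in vec3` and
 * receives floats in [-1, 1]. */
#include <GL/gl.h>

void setupNormalAttribute(void)
{
    glVertexAttribPointer(
        2,                   /* attribute index (illustrative) */
        3,                   /* three components: x, y, z */
        GL_SHORT,            /* stored as 16-bit signed integers */
        GL_TRUE,             /* normalized: shader sees floats in [-1, 1] */
        3 * sizeof(GLshort), /* stride between vertices */
        (void *)0);          /* offset into the buffer */
    glEnableVertexAttribArray(2);
}
```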
Storage and bitdepths
Every normalized integer has some bitdepth. These are usually 8 or 16, but some normalized integers use unusual bitdepths like 2, 10, or even 32. Regardless of the bitdepth, the way they are converted is identical; only the specific numbers change.
In all of the following equations, the bitdepth is represented by $B$.
Unsigned
For unsigned, normalized integers, the conversion is fairly simple. For a given bitdepth $B$, the maximum representable unsigned integer is $\text{MAX} = 2^B - 1$. Unsigned, normalized integers map into the floating-point range [0, 1.0]: an integer value $I$ converts to the float $F = I / \text{MAX}$, and a float converts back to the integer $I = \operatorname{round}(F \cdot \text{MAX})$.
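A minimal C sketch of this conversion for 8-bit values, where $\text{MAX} = 2^8 - 1 = 255$ (the function names are illustrative, not part of OpenGL):

```c
#include <math.h>
#include <stdint.h>

/* Convert an 8-bit unsigned normalized integer to a float in [0, 1]. */
float unorm8_to_float(uint8_t i)
{
    return (float)i / 255.0f;
}

/* Convert a float in [0, 1] back to an 8-bit unsigned normalized integer. */
uint8_t float_to_unorm8(float f)
{
    return (uint8_t)roundf(f * 255.0f);
}
```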
Signed
The conversion for signed, normalized integers is only slightly more complex, since they map into the floating-point range [-1.0, 1.0]. For a given bitdepth $B$, the maximum representable signed integer is $\text{MAX} = 2^{B-1} - 1$. A signed integer value $I$ converts to the float $F = \max(I / \text{MAX}, -1.0)$; the clamp is needed because the most negative two's complement value would otherwise map slightly below -1.0. A float converts back to the integer $I = \operatorname{round}(F \cdot \text{MAX})$.
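A corresponding C sketch for 8-bit signed values, where $\text{MAX} = 2^7 - 1 = 127$ (again, the function names are illustrative):

```c
#include <math.h>
#include <stdint.h>

/* Convert an 8-bit signed normalized integer to a float in [-1, 1].
 * -128 would map to about -1.008, so it is clamped to exactly -1.0. */
float snorm8_to_float(int8_t i)
{
    float f = (float)i / 127.0f;
    return f < -1.0f ? -1.0f : f;
}

/* Convert a float in [-1, 1] back to an 8-bit signed normalized integer. */
int8_t float_to_snorm8(float f)
{
    return (int8_t)roundf(f * 127.0f);
}
```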