I have a lot of dense geometry to render.
Since the geometry is already chunked into a tile-like organization, I don't need the full range of a 32-bit float for my vertex positions. In some cases I could get by with a 16-bit short, or even an 8-bit unsigned byte (!). I would, of course, need to adjust the transformation matrices accordingly.
My questions are:
1) Is this supported at all?
2) Is it a "good idea" in general? Or is this usage atypical enough that it falls off the optimized code path in the driver/GPU?