Surely I am not the only person working with 16-bit integer textures, so I am confused why this wasn't planned, or at least offered as an optional feature. If you only have a single channel, there is no reason to have to pad it out to an RGBA luminance texture in OpenGL just to be able to do interop. NVIDIA's OpenCL driver actually supports this currently, but AMD's doesn't (most likely because it is not mentioned in the spec).

Also, for those who say the vendors are just padding it internally anyway: I have tested this on NVIDIA's implementation and they clearly are not, as performance is considerably better with the single-channel version even when I only do per-pixel calculations on the first channel.
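For context, this is roughly what I'm trying to do. It's only a sketch, not a complete program: it assumes a `cl_context` that was already created with GL sharing enabled (`ctx`), skips most error handling, and the texture format/headers are just what I happen to use.

```c
/* Sketch: sharing a single-channel 16-bit GL texture with OpenCL.
 * Assumes `ctx` is a cl_context created with CL_GL sharing properties. */
#include <CL/cl.h>
#include <CL/cl_gl.h>
#include <GL/gl.h>
#include <GL/glext.h>   /* for GL_R16UI / GL_RED_INTEGER on older headers */

GLuint make_r16ui_texture(int w, int h)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* Single-channel 16-bit unsigned integer texture -- no RGBA padding. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, w, h, 0,
                 GL_RED_INTEGER, GL_UNSIGNED_SHORT, NULL);
    return tex;
}

cl_mem share_with_opencl(cl_context ctx, GLuint tex)
{
    cl_int err;
    /* This succeeds on NVIDIA's implementation but fails on AMD's,
     * presumably because single-channel formats are not in the spec's
     * required interop format list. (clCreateFromGLTexture is the
     * OpenCL 1.2 entry point; older drivers expose
     * clCreateFromGLTexture2D instead.) */
    cl_mem img = clCreateFromGLTexture(ctx, CL_MEM_READ_WRITE,
                                       GL_TEXTURE_2D, 0, tex, &err);
    if (err != CL_SUCCESS)
        return NULL;  /* e.g. CL_INVALID_GL_OBJECT or
                         CL_IMAGE_FORMAT_NOT_SUPPORTED */
    return img;
}
```

The RGBA workaround is the same code with `GL_RGBA16UI`/`GL_RGBA_INTEGER`, which is exactly the 4x padding I'd like to avoid.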