Our first idea was simply to use RGBA/UNSIGNED_SHORT_4_4_4_4, but the initial feedback we got (which made good sense to us) was that a 3-channel format like RGB/UNSIGNED_BYTE would save a number of steps in the conversion/unpacking process and would therefore be more efficient.
We created some tests and ran them across various browsers. During that process we also realised that we could get better performance and channel usage from RGB/UNSIGNED_SHORT_5_6_5: the data is already a uint16 array and uses only 3 channels instead of 4.
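To make the 5_6_5 idea concrete, here is a sketch (not from the original tests; the helper names are our own) of how a 16-bit depth value round-trips through an RGB/UNSIGNED_SHORT_5_6_5 texture. The JS mirrors the arithmetic a fragment shader would perform on the normalized channels:

```javascript
// Sketch: round-tripping a 16-bit depth value through an
// RGB/UNSIGNED_SHORT_5_6_5 texture. Helper names are hypothetical.

// What the GPU hands the shader: each channel normalized to [0, 1].
function sample565(depth16) {
  return {
    r: ((depth16 >> 11) & 0x1f) / 31, // top 5 bits
    g: ((depth16 >> 5) & 0x3f) / 63,  // middle 6 bits
    b: (depth16 & 0x1f) / 31,         // low 5 bits
  };
}

// What the fragment shader would compute, e.g. in GLSL:
//   float depth = floor(c.r * 31.0 + 0.5) * 2048.0
//               + floor(c.g * 63.0 + 0.5) * 32.0
//               + floor(c.b * 31.0 + 0.5);
function reconstruct565(c) {
  return Math.round(c.r * 31) * 2048 +
         Math.round(c.g * 63) * 32 +
         Math.round(c.b * 31);
}
```

The round trip is exact for every 16-bit value, since each channel quantizes back to the integer it was derived from.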
As far as I'm aware, using LUMINANCE_ALPHA/UNSIGNED_BYTE is not as effective, since LUMINANCE_ALPHA is internally expanded to RGBA with the luminance value replicated across the R, G and B channels. But please correct me if that's not right?
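For comparison, here is what the LUMINANCE_ALPHA/UNSIGNED_BYTE alternative would look like (again a sketch with hypothetical helper names): the 16-bit depth value is split into two bytes, and the shader reassembles it from the two normalized channels:

```javascript
// Sketch: 16-bit depth split across a LUMINANCE_ALPHA/UNSIGNED_BYTE
// texture, high byte in luminance, low byte in alpha. Names are ours.
function sampleLA(depth16) {
  return {
    l: ((depth16 >> 8) & 0xff) / 255, // luminance: high byte
    a: (depth16 & 0xff) / 255,        // alpha: low byte
  };
}

// Equivalent GLSL:
//   float depth = floor(c.r * 255.0 + 0.5) * 256.0
//               + floor(c.a * 255.0 + 0.5);
function reconstructLA(c) {
  return Math.round(c.l * 255) * 256 + Math.round(c.a * 255);
}
```

The math is the same kind of repacking as the 5_6_5 case; the concern above is about the internal expansion to RGBA, not the arithmetic.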
Our primary goal here was simply to find a pragmatic approach that could easily be developed against WebGL 1.x right now - so people could start using Depth Camera Streams via WebGL in the "very near future".
Then in WebGL 2.x we could move to RED_INTEGER, returning to standard WebGL with no extension at all. But waiting for that option alone seemed like a long delay when a feasible stop-gap approach was available.
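For reference, the WebGL 2.x path would look roughly like this (a sketch, assuming `gl` is a WebGL2RenderingContext; the function name is our own): raw 16-bit depth uploaded via RED_INTEGER/R16UI, so no normalization or repacking is needed at all:

```javascript
// Sketch: uploading raw 16-bit depth as an integer texture in WebGL 2.x.
// `gl` is assumed to be a WebGL2 context; the function name is hypothetical.
function uploadDepthR16UI(gl, width, height, depthData /* Uint16Array */) {
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  // Integer textures are not filterable: NEAREST is required.
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.R16UI, width, height, 0,
                gl.RED_INTEGER, gl.UNSIGNED_SHORT, depthData);
  return tex;
}
// In the shader, sample with a usampler2D and read the raw value:
//   uint depth = texture(uDepth, vUV).r;
```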
If there are other options that could help us meet our primary goal then we'd definitely like to hear about them.
Yet it does seem to me that the WEBGL_texture_from_depth_video extension does define a very minimal "novel behavior of a piece of software". At the moment there is no way to upload a <video> frame that includes a depth data track. This extension enables exactly that, which is what makes it novel.
Florian, it sounds like you're saying that our only option is that users/devs really have to wait for WebGL 2.x to access Depth Camera Tracks - am I understanding that correctly?