
Re: [Public WebGL] WEBGL_texture_from_depth_video extension proposal



Are there many devices that support 16-bit depth sensing but don't support a 
depth_texture equivalent extension? WEBGL_depth_texture has overwhelming support 
everywhere except older Android, and any GLES3-capable device would have it.
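
For reference, checking for the existing extension is a one-liner (a minimal 
sketch; vendor-prefixed variants such as WEBKIT_WEBGL_depth_texture on older 
browsers are omitted here):

    const gl = document.createElement('canvas').getContext('webgl');
    const depthExt = gl && gl.getExtension('WEBGL_depth_texture');
    if (depthExt) {
      // DEPTH_COMPONENT textures can be created and sampled directly
    }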

I really want to avoid putting something this hacky into the API.

> BTW: Florian, we really aren't modifying "half a dozen other 
> specifications". With WebGL it's just an extension, not a modification. 
> With Canvas it is a modification, and another alternative is under 
> discussion. With MediaStreams this is a modification - but this is the 
> core of our work and fits squarely in our TF's scope. The rest you 
> mention are untouched by us - typed arrays, <video>, etc.

Extensions inherently modify the behavior of other specifications through 
their interactions with them, even when those interactions aren't spelled 
out explicitly.

----- Original Message -----
From: "Rob Manson" <roBman@buildAR.com>
To: "Kenneth Russell" <kbr@google.com>, "Florian Bösch" <pyalot@gmail.com>
Cc: "public webgl" <public_webgl@khronos.org>, "Ningxin Hu" <ningxin.hu@intel.com>
Sent: Monday, November 10, 2014 7:24:21 PM
Subject: Re: [Public WebGL] WEBGL_texture_from_depth_video extension proposal


Hi Ken/Florian,

> It'll only be efficient to upload depth videos to WebGL textures using
> the internal format which avoids converting the depth values during
> the upload process. That's why UNSIGNED_SHORT_5_6_5 was chosen as the
> single supported format for uploading this content to WebGL 1.0. It's
> not desirable for either the browser implementer or the web developer
> to support uploading depth videos to lots of random texture formats if
> they won't be efficient. The Media Capture group should comment on
> what formats depth cameras tend to output, and are likely to output in
> the future.
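
(For concreteness, an upload along those lines would look something like the 
sketch below. This assumes the WEBGL_texture_from_depth_video proposal as 
discussed in this thread - nothing here is shipped API - and depthVideo is a 
<video> element whose source is a depth MediaStream.)

    const ext = gl.getExtension('WEBGL_texture_from_depth_video'); // proposed name
    const tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    // Each 16-bit depth value is carried in one RGB565 texel, so the
    // browser doesn't need to convert depth values during the upload.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, gl.RGB,
                  gl.UNSIGNED_SHORT_5_6_5, depthVideo);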

At the moment we have only been able to find devices that are shipping, 
or are planning to ship, with 16-bit depth support. Older devices such as 
the early Kinect models supported 8-bit or 11-bit depth, but this has now 
been upgraded to 16 bits.

This makes sense if you look at what 16 bits really means and how that 
relates to the technology. If the units these 16 bits represent are 
millimetres (which is the de facto standard), then the maximum distance 
that can be represented is 2^16 mm = 65,536 mm, i.e. roughly 65.5 m.

For any "Time of Flight" or "Structured Light" depth sensor this is an 
extremely long way. In fact many of these sensors don't work well in 
sunlight, so unless you have a room that is over half a kilometer wide 
or tall then needing more than 16 bits really isn't an issue 8) Not to 
mention that the sensor simply isn't that sensitive.

Even with lidar it's very unlikely that consumer depth sensors are going 
to be scanning areas with a radius of over 65 m (i.e. more than 130 m 
across). The amount of noise and error at that scale is likely to be 
significant, and resolving it will likely keep this type of hardware out 
of consumer reach for quite some time.

Of course, any technology prediction is likely to be wrong, and we're 
open to other opinions and suggestions. But I think we've made a 
reasonable, evidence-based assumption to build upon.


>> Since depth texture streaming interacts with various other specifications
>> (canvas, typed arrays and webgl 1, 2, 2.1, 3, depth textures, floating point
>> textures and <video>), it would seem to me the most consistent and easiest
>> to use entry point for discovering support and consistently supporting a
>> feature would be the media capture depth specification, rather than modify
>> half a dozen other specifications and add pieces to it.

> I agree in general. If it's possible to incorporate this sort of
> feature detection into the Media Capture spec that would be fine.
> Perhaps Rob, Ningxin or someone else from that group can comment.

We have been discussing feature detection, and it seems likely that 
developers will be able to detect support for whichever pipeline they're 
interested in.

So if they want to use WebGL/shaders to process the depth stream, they 
would check whether the WEBGL_texture_from_depth_video extension is 
supported.

If they want to use Canvas2D (or wherever we land on that), they would 
test whether .getDepthData() is supported.

However, we're still discussing the core capability detection for depth 
streams and are open to suggestions.
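
(To make that concrete, here is a rough sketch of both detection paths. 
The extension name is just the proposal from this thread, and 
getDepthData() is still under discussion, so treat both as placeholders 
rather than shipped API.)

    // WebGL/shader pipeline: check for the proposed extension
    const gl = canvas.getContext('webgl');
    const hasDepthVideo =
        !!(gl && gl.getExtension('WEBGL_texture_from_depth_video'));

    // Canvas2D pipeline: check for the proposed getDepthData()
    const ctx = canvas.getContext('2d');
    const hasDepthData =
        !!(ctx && typeof ctx.getDepthData === 'function');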

BTW: Florian, we really aren't modifying "half a dozen other 
specifications". With WebGL it's just an extension, not a modification. 
With Canvas it is a modification, and another alternative is under 
discussion. With MediaStreams this is a modification - but this is the 
core of our work and fits squarely in our TF's scope. The rest you 
mention are untouched by us - typed arrays, <video>, etc.


roBman


