
Re: [Public WebGL] Proposed change to WebGL spec section 4.2 (Security Origin Restrictions)



I would just add that there are plenty of other reasons to want to read
back the frame buffer.  Brian's case is a common one - but there are
lots of others.

* WebGL does not (yet) support occlusion queries - and many of the
hardware implementations of OpenGL-ES don't support early-Z rejection
(smart Z skip-over) - so occlusion culling can produce big performance
wins.  Without query support, all of the usual applications of
occlusion testing have to be implemented by reading back the frame
buffer instead (there's a rough sketch of this after the list).  Since
occlusion culling is an exceedingly powerful speed-up technique for
low-end systems, it's likely to be needed in phone applications.  On
desktop systems, reading back pixels is something we strive to avoid
because it's so slow - but on low-end machines the frame buffer is
likely to live in main memory, and that makes read-back fast enough to
be usable again.

* In some of the work I do in my paying job (military simulation - which
is not WebGL-based **yet** but easily could become so) we do things like
analysing the content of the frame buffer using statistical or image
recognition techniques which mimic what real world weapon systems and
sensors do.  Doing this in shader code is tricky - but even when you
can, you generally need to read back the results at some point.

* I have built some proof-of-concept demos showing massively parallel
physics calculations and collision detection being done in a standard
OpenGL shader (not using CUDA) - and that too typically requires some
read-back from the frame buffer at the end of the process (also
sketched after the list).  Doing game physics in JavaScript is going
to be horrifically slow - so we're likely to want to use the GPU
whenever possible.

* Ray-casting for picking really only works well when the objects are
solid, opaque kinds of things.  When you have an alpha texture cutting
out the shape of (say) a tree, ray casting tells you that the user
clicked on the tree polygon - when in fact he was clicking on the
object behind the tree, through one of the large transparent bits.
The ray caster could (in principle) look up the alpha channel of the
texture itself - but as shaders get more sophisticated, working out
the transparency of a particular point on a triangle on the CPU
rapidly becomes impractical.  Doing the pick on the GPU and reading
back the pixel under the cursor (sketched after the list) sidesteps
that entirely.
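
To make the occlusion point concrete, here's roughly what I have in
mind - just an illustrative sketch, where drawBoundingBox() is a
made-up helper that draws the candidate object's bounding box with a
trivial solid-colour shader, and the scene's occluders have already
been drawn so the depth buffer is populated:

  function isProbablyVisible(gl, drawBoundingBox) {
    // Draw the bounding box in a colour that can't occur elsewhere,
    // depth-tested against the existing scene but without writing
    // depth.  (In practice you'd render this probe pass offscreen so
    // the colour never reaches the screen.)
    gl.depthMask(false);
    drawBoundingBox([1.0, 0.0, 1.0, 1.0]);   // magenta probe colour
    gl.depthMask(true);

    // Read back a small region covering the box's screen-space
    // bounds (a real version would project the box to find it).
    var w = 64, h = 64;
    var pixels = new Uint8Array(w * h * 4);
    gl.readPixels(0, 0, w, h, gl.RGBA, gl.UNSIGNED_BYTE, pixels);

    // Any surviving probe-coloured pixel means the object may be
    // visible; none at all means it's occluded and can be skipped.
    for (var i = 0; i < pixels.length; i += 4) {
      if (pixels[i] === 255 && pixels[i + 1] === 0 &&
          pixels[i + 2] === 255) {
        return true;
      }
    }
    return false;
  }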
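
The GPU-physics case boils down to the same read-back pattern: render
the computation into an offscreen framebuffer, then pull the results
into JavaScript.  This too is only a sketch - the physics shader is
assumed to be bound already, drawFullScreenQuad() is a made-up helper
that runs it once per output texel, and with no float-texture
extension the results have to be packed into 8-bit RGBA channels:

  function readBackComputation(gl, width, height, drawFullScreenQuad) {
    // Offscreen RGBA texture to hold the shader's output.
    var tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                  gl.RGBA, gl.UNSIGNED_BYTE, null);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

    // Attach it to a framebuffer and run the computation.
    var fbo = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                            gl.TEXTURE_2D, tex, 0);
    gl.viewport(0, 0, width, height);
    drawFullScreenQuad();

    // Pull the packed results back into JavaScript.
    var results = new Uint8Array(width * height * 4);
    gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE,
                  results);

    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    return results;
  }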
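
And the picking case: draw each pickable object with its id encoded as
a flat colour, let the fragment shader discard transparent texels (so
clicks fall straight through the holes in the tree texture), and read
the single pixel under the mouse.  Another illustrative sketch -
drawWithPickId() stands in for whatever per-object draw routine you
already have, using a shader that does the alpha test and discard:

  function pick(gl, scene, mouseX, mouseY, drawWithPickId) {
    gl.clearColor(0, 0, 0, 0);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

    // Pack each object's id into the red/green/blue channels.
    for (var id = 0; id < scene.length; id++) {
      var r = ((id + 1) >> 16) & 0xff;
      var g = ((id + 1) >> 8) & 0xff;
      var b = (id + 1) & 0xff;
      drawWithPickId(scene[id], [r / 255, g / 255, b / 255, 1.0]);
    }

    // Read the pixel under the mouse.  Note the Y flip: readPixels
    // uses a bottom-left origin, mouse events a top-left one.
    var pixel = new Uint8Array(4);
    gl.readPixels(mouseX, gl.drawingBufferHeight - mouseY - 1, 1, 1,
                  gl.RGBA, gl.UNSIGNED_BYTE, pixel);

    var picked = (pixel[0] << 16) | (pixel[1] << 8) | pixel[2];
    return picked === 0 ? null : scene[picked - 1];
  }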

As for issues of 'tainted' images - I have no clue - but for sure we
need to be able to read back pixels from the frame buffer in a wide
variety of applications.

  -- Steve


Kenneth Russell wrote:
> On Tue, Oct 5, 2010 at 1:42 PM, Vladimir Vukicevic <vladimir@mozilla.com> wrote:
>   
>> ----- Original Message -----
>>
>>     
>>> My argument as to them being orthogonal is that the canvas spec having
>>> something similar would not affect WebGL. Imagine that Vlad's
>>> suggestion were implemented in the canvas spec today: would that
>>> somehow make it such that readPixels and tainted images could be used
>>> in the same WebGL context? I don't see how. So the WebGL spec would
>>> have to be changed in the exact same way whether or not it is adopted
>>> for 2d canvas.
>>>       
>> Note that when I (and, I believe, Chris) say "at the Canvas level",
>> it doesn't mean at the canvas 2D context level -- but actually at the
>> core level of the <canvas> element, regardless of the underlying
>> contexts.  Canvas doesn't really have much language to say there, but
>> I think the spec could be extended to define what it considers
>> origin-clean/origin-dirty in a more fine-grained way.  I wouldn't
>> want to add very detailed descriptions of that into the WebGL spec,
>> especially at this point; doing the tracking of the various pieces is
>> certainly possible, but it's not code that I would want to write
>> right now for 1.0.
>>     
>
> While I have to agree that I wouldn't want to write the code to
> perform the more fine-grained security tracking for WebGL 1.0 at this
> point (nor really the spec text), the question we need to ask is
> whether we are excluding a large class of interactive applications by
> not doing so. As Gregg mentioned, it's a common technique to implement
> picking in OpenGL by drawing each triangle in a separate color and
> reading back the pixel under the mouse pointer.
>
> The application Brian works on is a real-world, non-game use case
> involving a lot of data that it would be really sub-optimal to have to
> replicate across multiple WebGL contexts.
>
> A few questions for Brian:
>
> 1. Is it not feasible to serve your web pages and images from the same
> domain? What about using a proxy?
>
> 2. Do you perform per-pixel tests and discards in your fragment
> shader, meaning that you simply cannot do ray casting at the
> application level and get the same results you are getting with your
> current picking technique?
>
> 3. Would it be possible for you to perform ray casting in your
> application against a subset of your data set to achieve similar
> picking results?
>
>   
>> So, if the canvas (element) spec were to be extended to describe
>> origin-clean resources and take into account CORS, etc., I don't see
>> any reason why a WebGL implementation couldn't start following the
>> tighter definitions in the future.
>>     
>
> I think we should ignore the question of CORS for the moment and focus
> on the more fine-grained, WebGL-specific tainting.
>
> -Ken
>

-----------------------------------------------------------
You are currently subscribed to public_webgl@khronos.org.
To unsubscribe, send an email to majordomo@khronos.org with
the following command in the body of your email: