
Re: [Public WebGL] Ambiguity and Non-deterministicness in the WebGL Spec





On Thu, Dec 16, 2010 at 8:32 AM, Chris Marrin <cmarrin@apple.com> wrote:

On Dec 16, 2010, at 1:25 AM, Gregg Tavares (wrk) wrote:

>
>
> On Wed, Dec 15, 2010 at 5:40 PM, Chris Marrin <cmarrin@apple.com> wrote:
>
> On Dec 14, 2010, at 2:12 AM, Mark Callow wrote:
>
> >
> >
> > On 14/12/2010 18:19, Tim Johansson wrote:
> >>
> >> The problem with saying it is undefined is that it will essentially mean you have to do whatever most desktop versions are doing or the content will not work. My guess is that most desktop versions would in this case do a copy of the buffer to avoid all the issues with toDataURL, readPixels etc. In that case the spec says you can do whatever you want, but in order to be compatible with the content you have to reverse engineer what other implementations are doing and do the exact same thing. For that reason I think leaving it undefined would be a mistake.
> >>
> >> //Tim
> > I'm with Gregg. I do not understand why toDataURL is an issue as the browser must already have the pixels for the reasons Gregg has stated. Just define toDataURL to return the pixels the browser is using for compositing. If the application calls it during a render period, it will get the content from the previous frame or the initial canvas color.
>
> In the iOS implementation compositing is a system operation. When you give up control of a buffer you are giving it to a separate system process and so you lose control of that buffer. Because compositing is asynchronous, attempting to read its pixels would lead to inconsistent results. Sometimes you might see the correct values, other times that buffer might have been given to another process and its contents changed.
>
> I don't understand this. Here's some code to explain my point
>
> <html>
> <head>
> <style>
> body {
>   font-size: xx-large;
> }
> #below {
>   background-color: red;
> }
> #above {
>   position: absolute;
>   z-index: 3;
>   background-color: blue;
> }
> #mycanvas {
>   z-index: 2;
>   position: absolute;
>   left: 100px;
>   top: 0px;
> }
> </style>
> <script>
> window.onload = init;
> function init() {
>   var canvas = document.getElementById("mycanvas");
>   var gl = canvas.getContext("experimental-webgl");
>   gl.clearColor(0.5,1,0.5,1);
>   gl.clear(gl.COLOR_BUFFER_BIT);
>
>   setInterval(moveCanvas, 1000/60);
>
>   function moveCanvas() {
>     var now = (new Date()).getTime() * 0.001;
>     var period = 3;
>     var t = (now % period) / period;
>     canvas.style.left = 100 + Math.sin(Math.PI * 2 * t) * 100;
>     canvas.style.top = 100 + Math.cos(Math.PI * 2 * t) * 100;
>   }
> }
> </script>
> </head>
> <body>
> <div id="below">below</div>
> <canvas id="mycanvas" width="32" height="32"></canvas>
> <div id="above">above</div>
> </body>
> </html>
>
> This code renders to the WebGL canvas only once. But it has to be composited constantly. The bits have to stay around forever. They are always available until the JavaScript provides new ones, which in this case it never does.  I don't see how giving a buffer to the compositor matters. Either the compositor or the browser has to keep a copy of the bits around until further notice. So bits are always available for readPixels, toDataURL, drawImage, texImage2D. The only thing that would make them go away is if WebGL starts to render again.
>
> I must still not be understanding why this won't work, even on iOS.

In your case, the compositor would have the buffer and would render it whenever it needed. But we still can't get at those pixels once we do the commit to the compositor. There is no API to do it. Does that make it any clearer? You could say that the iOS API is deficient. But I would be willing to bet that Android and other mobile platforms will have similar restrictions.

I'll stop with the iOS stuff after this, promise :-D  I just want to clarify for my own edification.

Can iOS not print, and will it never be able to?  If it can print, then it has access to the pixels, even if that means redirecting the printer to memory, printing, and grabbing the pixels from the result.

iOS can currently take a screenshot at any time. It seems like there must be some way of exploiting that to get the pixels back, even if it means writing a PNG to memory and then loading and decompressing it.

There is some connection between JavaScript, the DOM, and the object in the compositor that manages the bits, since JavaScript can move the DOM object and see the results. While it might not be able to get the bits directly, it seems like it should be possible to bind an FBO and ask that object to render. That would copy the object into the FBO, at which point you can call readPixels on it to get the contents.

So, one way or another I still kind of feel like there is a way to make readPixels, toDataURL, drawImage, and texImage2D always work, even on iOS, when preserveDrawingBuffer is false.

But, assuming it's going to stay the way it is, it still seems like what you get from those 4 functions needs to be defined at all times. At least the spec needs to define that they do not fail: they return something the size of the canvas, they do not throw, they do not generate errors, and they do not return unexpected sizes.
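To make that concrete, here is a minimal sketch of the shape guarantee I mean. The helper name `checkReadbackShape` is hypothetical, not anything from the spec: whatever the compositor has done with the buffer, a readback must yield exactly canvas-sized RGBA data rather than throwing or returning a truncated result.

```javascript
// Hypothetical helper: a readback is "defined" in the minimal sense if it
// is exactly width * height RGBA bytes, even if the contents are just the
// initial transparent-black canvas.
function checkReadbackShape(pixels, width, height) {
  return pixels.length === width * height * 4;
}

// E.g. a 32x32 canvas (as in the example page above) must yield 4096 bytes.
var pixels = new Uint8Array(32 * 32 * 4); // transparent-black placeholder
console.log(checkReadbackShape(pixels, 32, 32)); // true
```

The contents may be stale or cleared, but the shape and the absence of errors would be guaranteed.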

As it is, because the spec says "when composited" vs "when JS returns control to the browser", there are places where they will continue to work on some platforms even after JS has returned control to the browser.

For example this might work:

   var g_shots = [];
   ...
   drawStuff();
   takeScreenshots(50);

   function takeScreenshots(numShots) {
     g_shots.push(canvas.toDataURL());
     if (numShots > 1) {
       // Pass a function to setTimeout, not the result of calling one.
       setTimeout(function() { takeScreenshots(numShots - 1); }, 0);
     }
   }

Since compositing probably only happens once every 16ms, and this is set to run at 1ms or faster per callback, how many calls get known results and how many get unknown results is undefined.
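As a back-of-the-envelope illustration (the numbers are assumptions: ~16.7ms between composites and a ~4ms effective floor on nested setTimeout(fn, 0) callbacks), most of the 50 screenshots above land between composites:

```javascript
// Assumed timings: 60 Hz compositing and a 4 ms minimum delay for
// nested setTimeout(fn, 0) callbacks.
var compositeIntervalMs = 1000 / 60; // ~16.7 ms between composites
var timeoutClampMs = 4;              // effective floor for setTimeout(fn, 0)
var numShots = 50;

// Map each screenshot to the composite interval it falls inside and
// count how many distinct intervals get sampled at all.
var seen = {};
var distinct = 0;
for (var i = 0; i < numShots; i++) {
  var interval = Math.floor((i * timeoutClampMs) / compositeIntervalMs);
  if (!seen[interval]) {
    seen[interval] = true;
    distinct++;
  }
}
console.log(distinct); // 12 -- the other ~38 shots read between composites
```

The exact numbers don't matter; the point is that at those rates only about a quarter of the calls line up with a fresh composite, and nothing in the spec says which ones.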

There's also the security issue. If what those functions return is not defined, is it at least defined that they won't leak from other parts of the page? It was mentioned iOS wants to avoid the clear, which might be fine as long as you can't get random stuff like partial screenshots of other elements or layers.

Note: The spec also needs to mention copyTexImage2D and copyTexSubImage2D as affected functions.



 

-----
~Chris
cmarrin@apple.com




-----------------------------------------------------------
You are currently subscribed to public_webgl@khronos.org.
To unsubscribe, send an email to majordomo@khronos.org with
the following command in the body of your email: