
Re: [Public WebGL] WEBGL_dynamic_texture redux



Sorry for the long delay in replying. Thanks for the feedback. Answers are in-line.

On 2012/11/05 10:16, David Sheets wrote:

  //
  // connectVideo
  //
  // Connect video from the passed HTMLVideoElement to the texture
  // currently bound to TEXTURE_EXTERNAL_OES on the active texture
  // unit.
  //
  // First a wdtStream object is created with its consumer set to
  // the texture. Once the video is loaded, it is set as the
  // producer. This could potentially fail, depending on the
  // video format.
  //
  // interface wdtStream {
  //   enum state {
  //     // Consumer connected; waiting for producer to connect
  //     wdtStreamConnecting,
  //     // Producer & consumer connected. No frames yet.
  //     wdtStreamEmpty,
  //     wdtStreamNewFrameAvailable, // when does this state occur?
  //     wdtStreamOldFrameAvailable, // when does this state occur?

NewFrameAvailable occurs when the producer puts a new frame into the stream, transitioning from either Empty or OldFrameAvailable. The state becomes OldFrameAvailable after acquireImage, when the previous state was NewFrameAvailable.
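To make that concrete, here is a minimal consumer-side sketch using the names from the draft above (the polling structure itself is just my illustration, not part of the proposal):

  function pumpFrame(stream) {
    if (stream.state == wdtStreamNewFrameAvailable) {
      // The producer has inserted a frame we have not yet seen.
      stream.acquireImage();
      // The state is now OldFrameAvailable until the producer
      // inserts its next frame.
    } else if (stream.state == wdtStreamOldFrameAvailable) {
      // Nothing new since the last acquireImage; the texture
      // still holds the previously acquired frame, so we can
      // re-draw with it or skip the acquire/release cycle.
    }
  }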

  //     wdtStreamDisconnected
  //   };
  //   // Time taken from acquireImage to posting drawing buffer; default 0? // units? microseconds?

Yes.

  //   readonly int consumerLatency;
  //   // Frame # (aka Media Stream Count) of most recently inserted frame
  //   // Value is 1 at first frame.
  //   readonly int producerFrame;
  //   // MSC of most recently acquired frame.
  //   readonly int consumerFrame;
  //   // timeout for acquireImage; default 0
  //   int acquireTimeout; // units? microseconds?

Yes.

  //   // readonly int freq; ? do videos get one per channel? only max frequency of all media streams?

What would you use this for? Without knowing the use case, the obvious answer is the frequency (framerate) of the producer video stream. However, several modern video formats do not have a fixed framerate.

  //   void setConsumerLatency(int); // linear with consumerLatency? >0? clamped? how does it exert backpressure on source?

I don't understand the first question. Yes, >0. I think it will be clamped to some maximum that is either set by the source or specified by the application, in both cases when the stream is created.

The purpose of this is to fine-tune the sync with audio. If the latency decreases, the source would have to repeat a frame or frames until the frame at the front of the queue is the one corresponding to audio time *at*, where *at* = t + consumerLatency. If the latency increases, the source will have to skip a frame or frames in order to build up the latency.
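To illustrate the repeat/skip behaviour, here is a purely hypothetical sketch of the selection rule a producer might apply; frameQueue, mediaTime and audioClock are made-up names, not part of the proposal:

  // Pick the queued frame whose media time is closest to
  // (audio time + consumerLatency). A latency decrease makes the
  // same frame win repeatedly (a repeat); an increase makes
  // intermediate frames lose (a skip).
  function selectFrame(frameQueue, audioClock, consumerLatency) {
    var target = audioClock + consumerLatency;
    var best = frameQueue[0];
    for (var i = 1; i < frameQueue.length; i++) {
      if (Math.abs(frameQueue[i].mediaTime - target) <
          Math.abs(best.mediaTime - target)) {
        best = frameQueue[i];
      }
    }
    return best;
  }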

  // };
  //
 
  //
  function connectVideo(ctx, video) // Whose function is this? Your internal implementation or an abstraction of the proposed interface?

The application's.

  {
    g.loadingFiles.push(video);
    g.videoReady = false;
    // What is g? gl? seems related to the stream but videoReady is racy

g is an object that contains the application's global variables to avoid polluting the global namespace. Why is it racy? It's only set in the onload function and never cleared.

    //-----------------------------
    // Options for connecting to video
    //-----------------------------
    // OPTION 1: method on WDT extension augments video element
    // with a wdtStream object.
    ctx.dte.createStream(video); // What property of video makes this possible? Is that property part of this specification?

There is no exposed property of video that makes creating the stream possible, but this option causes a wdtStream property to be defined on the video. If you have a suggestion about (a) what property would be useful and (b) how I can spec it in a WebGL extension rather than the HTMLVideoElement specification, I am happy to listen.

    assert(video.wdtStream.state == wdtStreamConnecting);
    //-----------------------------
    // OPTION 2: method returns a stream object.
    g.vstream = ctx.dte.createStream(ctx /* video? empty? */); // Q.vstream = ctx.dte.createStream(video); assert(Q.vstream == g.vstream); ?

Empty. The video has not been loaded yet.

    assert(g.vstream.state == wdtStreamConnecting);
    //-----------------------------

    video.onload = function () {
        g.loadingFiles.splice(g.loadingFiles.indexOf(video), 1);
        try {
          // OPTION 1: video object augmented with stream
          video.wdtStream.connect(); // If the stream is _part_ of video this hardly seems useful to consumers.

Yes I suppose the video element knows when it is loaded and could automatically connect itself up to the stream.

          assert(video.wdtStream.state == wdtStreamEmpty);
          //-----------------------------
          // OPTION 2: separate stream object
          g.vstream.connectProducer(video); 
// What property of video makes this possible? Is that property part of this specification?

See above.

          assert(g.vstream.state == wdtStreamEmpty);
          //------------------------------
          if (!video.autoplay) { // is this inverted? NOT autoplay -> play?
            video.play(); // Play video
          }

If autoplay is set, the video should start playing without any help from the application. It is the application's choice here to start playing the video once loaded.

          g.videoReady = true; // do you mean if g.loadingFiles is length 0?

Since there is only one video in this example it doesn't matter, but the flag should be per video. The application is using it to decide whether or not to call {acquire,release}Image.
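With several videos, one way to do that is to hang the flag off each element (wdtReady here is a hypothetical expando property, not part of the proposal):

  // Per-video readiness instead of one global flag.
  video.wdtReady = true;            // in that video's onload handler
  // ... and later, in drawFrame:
  if (video.wdtReady) {
    video.wdtStream.acquireImage(); // OPTION 1 style
  }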

        } catch (e) {
          window.alert("Video texture setup failed: " + e.name);
        }
      };
  }

  function drawFrame(gl)
  {
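    // NOTE: lastFrame, latency and graphicsMSCBase must persist
    // across calls in real code (e.g. as properties of g); they
    // are shown as uninitialized locals here only for brevity.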
    var lastFrame;
    var syncValues;
    var latency;
    var graphicsMSCBase;

    // Make sure the canvas /* buffer? */ is sized correctly.

Yes the buffer.

    reshape(gl);

    // Clear the canvas
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

    // Matrix set-up deleted ...

    // To avoid duplicating everything below for each option, use a
    // temporary variable. This will not be necessary in the final
    // code.
    // OPTION 1: augmented video object
    var vstream = g.video.wdtStream;
    // OPTION 2: separate stream object
    var vstream = g.vstream;

    // In the following
    //   UST is a monotonically increasing counter never adjusted by NTP etc.
    //   The unit is nanoseconds but the frequency of update will vary from
    //   system to system. The average frequency at which the counter is
    //   updated should be 5x the highest MSC frequency supported. For
    //   example if highest MSC is 48kHz (audio) the update frequency
    //   should be 240kHz. Most OSes have this kind of counter available.
    //
    //   MSC is the media stream count. It is incremented once/sample; for
    //   video that means once/frame, for audio once/sample. For graphics,
    //   it is incremented once/screen refresh. // good! on most machines, when I time a rendercycle, I get 60-75 Hz?
    //
    //   CPC is the canvas presentation count. It is incremented once
    //   each time the canvas is presented. // this is totally detached from time, yes?
    //

Yes.

    
    if (graphicsMSCBase == undefined) {
        graphicsMSCBase = gl.dte.getSyncValues().msc;
    }

    if (lastFrame.msc && vstream.producerFrame > lastFrame.msc + 1) {
      // Missed a frame! Simplify rendering?
    }

    if (!latency.frameCount) {
      // Initialize
      latency.frameCount = 0;
      latency.accumValue = 0;
    }

    if (lastFrame.ust) {
      syncValues = gl.dte.getSyncValues();
      // interface syncValues {
      //     // UST of last present
      //     readonly attribute long long ust;
      //     // Screen refresh count (aka MSC) at last present
      //     // Initialized to 0 on browser start
      //     readonly attribute long msc;
      //     // Canvas presentation count at last present
      //     // Initialized to 0 at canvas creation.
      //     readonly attribute long cpc;
      // };
      // XXX What happens to cpc when switch to another tab?
      if (syncValues.msc - graphicsMSCBase != syncValues.cpc) { // this assumes the media rates are locked to the rendering rates

No. Read the comment below. This relates only to whether the 3D rendering is keeping up with the screen refresh.

        // We are not keeping up with screen refresh!
        // Or are we? If cpc increment stops when canvas hidden,
        // will need some way to know canvas was hidden so app
        // won't just assume its not keeping up and therefore
        // adjust its rendering.
        graphicsMSCBase = syncValues.msc; // reset base.
      }
      latency.accumValue += syncValues.ust - lastFrame.ust;
      latency.frameCount++;
      if (latency.frameCount == 30) { // is this 30 the fps of the encoded video? can it be retrieved from the stream source somehow?

No. It is just the number of frames over which I chose to average the latencies. I'm not sure there is any advantage to picking a number based on the fps of the source.
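If the window size ever proved awkward, an exponential moving average would avoid choosing one at all. A sketch, reusing the surrounding variables (alpha is just a tuning constant I made up):

  // Exponentially smoothed frame-to-frame latency.
  var alpha = 1 / 30;
  var sample = syncValues.ust - lastFrame.ust;
  latency.smoothed = (latency.smoothed === undefined) ?
      sample : (1 - alpha) * latency.smoothed + alpha * sample;
  vstream.setConsumerLatency(Math.round(latency.smoothed));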

        vstream.setConsumerLatency(latency.accumValue / 30);
        latency.frameCount = 0;
        latency.accumValue = 0;    
      }
    }  

    if (g.videoReady) {
      if (g.video.wdtStream.acquireImage()) {
        // Record UST of frame acquisition.
        // No such system function in JS so it is added to extension.
        lastFrame.ust = gl.dte.ustnow();
        lastFrame.msc = vstream.consumerFrame;
      }
      // OPTION 2:
      vstream.acquireImage();
      lastFrame = g.stream.consumerFrame; // lastFrame.msc = ...

Yes.

    }

    // Draw the cube
    gl.drawElements(gl.TRIANGLES, g.box.numIndices, gl.UNSIGNED_BYTE, 0);

    if (g.videoReady)
      vstream.releaseImage();

    // Show the framerate
    framerate.snapshot();

    currentAngle += incAngle;
    if (currentAngle > 360)
        currentAngle -= 360;
  }



  How many streams may exist for a given media source? If multiple, do they communicate amongst themselves and buffer frames for sharing? If yes, this suggests that streams have a source separate from its sinks. This source must have a property that tracks the maximum consumer latency.

Keep it simple: only 1, I think.

  What type of object may be used to generate a stream source?

Do you mean what kind of object can be a stream source? HTMLVideoElement, etc.

  Some media sources (cameras, live streams) cannot seek into the future. How does an application with multiple sinks attached to these sources synchronize those outputs? Setting all consumer latencies equally?

In those cases I think consumerLatency will be clamped to zero and you will have to put up with poor audio synchronization. The only other option is to give the application control over the audio stream so it can specify a delay when the stream starts. It would not be able to adjust that delay without introducing audio glitches, so it could only make a best guess.
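For what it's worth, an application could impose such a best-guess delay today by routing the element's audio through Web Audio (a sketch; assumes Web Audio is available, possibly behind a vendor prefix, and the 0.05 s figure is a made-up guess):

  var actx = new (window.AudioContext || window.webkitAudioContext)();
  var source = actx.createMediaElementSource(video);
  var delay = actx.createDelay(1.0);   // allow up to 1 s of delay
  delay.delayTime.value = 0.05;        // fixed; changing it mid-stream
                                       // would glitch the audio
  source.connect(delay);
  delay.connect(actx.destination);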

  Can streams be concatenated? Is the result a stream? I don't think this should be part of the API but I think it should be possible to build on top of the API.

Do you mean have one stream be the source for another stream? What is the use case?

  Is it possible to construct a stream object from a WebGL renderbuffer? Can a developer do this or must the browser implementor be involved? What is the minimal set of interfaces that is required to give developers this kind of flexibility?

I don't see the point of introducing this added complexity. I can't think of anything you could do with this that you can't accomplish with an FBO, unless streams start to be used by other parts of the web platform. In terms of the underlying EGLStream, I don't think there is currently any extension that supports using a renderbuffer as a producer, only an EGLSurface. If there were hardware support for this, the browser would have to provide a function to connect the stream to the renderbuffer as a producer.
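For reference, the FBO route needs nothing beyond core WebGL; a standard render-to-texture setup looks like this (size and filtering are arbitrary here):

  // Render-to-texture with a framebuffer object: draw into fboTex,
  // then bind and sample fboTex like any other texture.
  var fboTex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, fboTex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 256, 256, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  var fbo = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                          gl.TEXTURE_2D, fboTex, 0);
  // ... draw the dynamic content here ...
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);  // back to the canvas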

Regards

    -Mark

--

NOTE: This electronic mail message may contain confidential and privileged information from HI Corporation. If you are not the intended recipient, any disclosure, photocopying, distribution or use of the contents of the received information is prohibited. If you have received this e-mail in error, please notify the sender immediately and permanently delete this message and all related copies.