Yes, I agree. I was thinking it would be convenient to have the stream in the video object so the stream can be accessed wherever the video object is passed. But the application can easily add the object with the stream interface to the video object if it wants.

Hi Mark,
Thanks for putting together this sample. A few thoughts:
- Keeping the video stream separate from the video element seems cleaner. I think we should avoid APIs which require mutating HTML elements, and in particular, adding new properties.
I'd like to hear from David Sheets, as he was the person primarily
pushing for the interface to be available on the producer elements,
and I'm not sure I completely understood what he was proposing.
Doesn't the JS code modify the canvas as it runs? Isn't it just the display or acquisition of the results that awaits a pull? Regardless, I agree that acquireImage() is the place to pull the result. If nothing is drawn on the canvas until the pull request, then acquireImage() may take some time.
Given the pull model, I wondered: is there any point in supporting
HTMLCanvasElement? The browser can probably avoid a data copy when
pulling the canvas via texImage2D, especially if it is using GL for
rendering the 2D canvas. But supporting texSubImage2D requires
having a copy of the data, so yes, I think there is a point to
supporting it.
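To make the pull path concrete: WebGL's texImage2D has an overload that accepts an HTMLCanvasElement directly as the pixel source, which is what lets the browser potentially skip a copy when the 2D canvas is GPU-backed. A minimal sketch, assuming a `gl` context, a bound-able texture, and a producer canvas that are all set up elsewhere (the function name is illustrative, not part of the proposal):

```javascript
// Pull the current contents of a producer <canvas> into a GL texture.
// texImage2D accepts an HTMLCanvasElement as its source; the browser may
// avoid a CPU-side copy when the 2D canvas is itself rendered with GL.
function uploadCanvasFrame(gl, tex, sourceCanvas) {
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE,
                sourceCanvas);
}
```

A texSubImage2D-based update, by contrast, only rewrites a sub-rectangle, so the implementation needs readable pixel data for the source region.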
I don't think there are any issues. The behavior just needs to be clearly specified; then the application can suspend measurements and re-base initial counts as necessary.
- You point out that cpc won't update when the tab is backgrounded, but there are other issues: (a) the WebGL app might decide not to produce new frames sometimes (if the scene isn't updating); (b) msc won't update if the browser decides not to repaint because the page didn't update at all; (c) for backgrounded tabs it's unlikely that msc will update, and requestAnimationFrame will stop, but setTimeout-based timers will probably still fire. Are there issues with any of these behaviors? I think probably not; applications will use the Page Visibility API to know when they've been backgrounded and suspend any measurements of frame rate.
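The suspend-and-re-base idea can be sketched as a small meter. The counter values passed in stand for an msc/cpc-style sample; the interface and names are assumptions for illustration, not part of the proposal:

```javascript
// Frame-rate meter that suspends measurement and re-bases its baseline
// counts, as discussed above. `count` is an msc/cpc-style frame counter
// sample; `timeMs` is a timestamp in milliseconds.
function createFrameRateMeter() {
  let baseCount = null;
  let baseTime = null;
  let suspended = false;
  return {
    // In a real page, wire these to the Page Visibility API, e.g.:
    //   document.addEventListener('visibilitychange', () => {
    //     if (document.hidden) meter.suspend();
    //     else meter.resume(currentCount(), performance.now());
    //   });
    suspend() { suspended = true; },
    resume(count, timeMs) {
      suspended = false;
      baseCount = count;   // re-base so hidden time isn't counted
      baseTime = timeMs;
    },
    // Returns frames per second since the last re-base, or null while
    // suspended or before a baseline exists.
    sample(count, timeMs) {
      if (suspended) return null;
      if (baseCount === null) {
        baseCount = count;
        baseTime = timeMs;
        return null;
      }
      const dt = timeMs - baseTime;
      return dt > 0 ? ((count - baseCount) * 1000) / dt : null;
    },
  };
}
```

Re-basing on resume is the key step: without it, the frames that never happened while the tab was hidden would drag the computed rate down.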
My intention is that msc will be incremented on each screen
refresh; in CRT terms, that would be each vertical blanking
interval. But perhaps that isn't necessary in the context of a web
browser.
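From script, a per-refresh counter like this can be approximated today, assuming requestAnimationFrame fires once per screen refresh while the page is visible (an approximation, since the browser may skip repaints). The names below are illustrative:

```javascript
// Approximate an msc-style per-refresh counter from script, assuming
// requestAnimationFrame fires once per vertical refresh while visible.
// `win` is passed in so the sketch can be exercised without a browser.
function startRefreshCounter(win) {
  const state = { count: 0, running: true };
  function tick() {
    if (!state.running) return;
    state.count++;                      // one increment per refresh
    win.requestAnimationFrame(tick);
  }
  win.requestAnimationFrame(tick);
  return state;
}
```

A browser-provided counter would still be preferable, since requestAnimationFrame stops for backgrounded tabs and skipped repaints.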
Excellent. I did not know about this. I would like to see them tighten up the accuracy requirement, though. Currently it doesn't have to be any more accurate than a millisecond. I think it should be, as I wrote in the sample:
That makes one tick about 4.17 microseconds. I think the Web Audio
folks would appreciate this.
Also, "thousandth of a millisecond" needs to be replaced with
"microsecond".