Was wondering if anyone had done this before:
Connect a digital camcorder to a PC via FireWire, use OpenML to stream frames from that port into memory, and then do real-time image processing on the incoming video?
Some specific questions:
1) Does the FireWire / OpenML interface support the full pixel count of the camera, or will it be reduced to standard NTSC (roughly 480 visible lines)? I don't want to buy a high-res camera if the interface will chop it down to TV-level resolution.
2) Can I sustain 30 fps interlaced while dropping only a frame or two at buffer swaps (assume double buffering), or is there other latency / frame dropping to expect?
3) Can I use a standard IEEE 1394 interface (a.k.a. Sony's i.LINK), or will I have to get a camera with an existing OpenML driver (or write my own!)?
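For question 2, the timing budget is just arithmetic, independent of OpenML: with double buffering, per-frame processing has to finish within one frame period or the capture side starts dropping. A quick sketch (the 25 ms processing time is a made-up example, not a measurement):

```python
# Frame-period budget for NTSC-style video. Rates are standard NTSC;
# the processing time is a hypothetical placeholder.
fps = 29.97                       # NTSC frame rate (two interlaced fields/frame)
frame_period_ms = 1000.0 / fps    # time available per frame, ~33.4 ms

processing_ms = 25.0              # hypothetical per-frame processing cost

# If processing exceeds the frame period, the capture side must drop
# frames; the sustainable rate becomes 1000 / processing_ms.
if processing_ms <= frame_period_ms:
    dropped_per_sec = 0.0
else:
    dropped_per_sec = fps - 1000.0 / processing_ms

print(f"frame period: {frame_period_ms:.1f} ms")
print(f"dropped per second: {dropped_per_sec:.1f}")
```

So at 30 Hz the whole algorithm gets about 33 ms per frame; at the 10 Hz end of the range, about 100 ms.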
If anyone has any experience with doing this, I'm all ears!! Any related experience, comments, thoughts are greatly appreciated.
Thanks in advance everyone. I'm trying to build a low-cost, near-real-time (10-30 Hz) scene recognition setup: a digital camera stares at a scene, the video is piped in real time to an algorithm that picks out things of interest, and the imagery is displayed with annotations to the user at video speed. Hope that makes sense!
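In case it helps clarify what I mean, the loop I have in mind looks something like this. This is a toy sketch, not OpenML code: a 2-slot queue stands in for the double-buffered capture path, dummy dicts stand in for pixel buffers, and all the names are illustrative.

```python
import queue
import threading
import time

# Sketch of the capture -> process -> display loop, independent of any real
# driver API. A 2-slot queue emulates double buffering: when the processing
# side falls behind, the capture side drops frames instead of blocking.

frames = queue.Queue(maxsize=2)   # the "double buffer"
dropped = 0

def capture(n_frames, period_s):
    """Pretend camera: emits a dummy frame every period_s seconds."""
    global dropped
    for i in range(n_frames):
        frame = {"index": i}      # real code: a pixel buffer from the driver
        try:
            frames.put_nowait(frame)
        except queue.Full:
            dropped += 1          # consumer fell behind; drop this frame
        time.sleep(period_s)
    frames.put(None)              # end-of-stream marker

def process_and_display():
    """Consumer: run the recognition step, then 'display' the result."""
    while True:
        frame = frames.get()
        if frame is None:
            break
        # real code: detect objects of interest, draw annotations on the frame
        annotated = dict(frame, annotations=["placeholder"])
        print("displayed frame", annotated["index"])

t = threading.Thread(target=capture, args=(10, 0.01))
t.start()
process_and_display()
t.join()
print("frames dropped:", dropped)
```

The question is basically whether OpenML's buffer queueing gives me something equivalent to that capture thread at full camera resolution and frame rate.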