I modified the vidtogfx.c example so that it works under Linux and displays video. I am now trying to map the live video onto OpenGL polygons. The command in the vidtogfx.c example that draws the 2D image onto the screen is a glDrawPixels call.
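In outline it looks like this (the pixel type and the buffer name here are placeholders for whatever vidtogfx.c actually passes):

glDrawPixels(imageWidth, imageHeight, GL_RGB,
             GL_UNSIGNED_BYTE, videoBuffer);   /* videoBuffer = address of the filled ML buffer */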
According to the OpenGL Programming Guide, 3rd Edition, p. 362, texture data is in the same format as the data used by glDrawPixels. On p. 367, the Guide discusses mapping video onto GL textures using the glTexSubImage2D call. I tried this using the following call:
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                imageWidth, imageHeight, GL_RGB,
                GL_UNSIGNED_BYTE, videoBuffer);   /* type and data pointer; videoBuffer stands for the ML buffer address */
The 3 0's are for level, xoffset, and yoffset.
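For completeness, here is the whole pattern as I understand the Guide to describe it: allocate the texture once with glTexImage2D, then update it every frame with glTexSubImage2D. The function and variable names, and the GL_RGB / GL_UNSIGNED_BYTE choice (which assumes 8-bit RGB packing), are just placeholders of mine; also, GL 1.x needs power-of-two texture dimensions, so the texture may have to be larger than the video frame:

#include <GL/gl.h>

/* One-time setup: allocate texture storage big enough for a frame.
   No mipmaps are supplied, so the default minification filter
   (which expects mipmaps) would leave the texture incomplete. */
static GLuint setupVideoTexture(GLsizei texWidth, GLsizei texHeight)
{
    GLuint texId;
    glGenTextures(1, &texId);
    glBindTexture(GL_TEXTURE_2D, texId);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texWidth, texHeight, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, NULL);   /* NULL: allocate only */
    return texId;
}

/* Per frame: copy the newly filled ML buffer into the texture. */
static void updateVideoTexture(GLuint texId, GLsizei imageWidth,
                               GLsizei imageHeight, const void *videoBuffer)
{
    glBindTexture(GL_TEXTURE_2D, texId);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   /* rows may not be 4-byte aligned */
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                    imageWidth, imageHeight, GL_RGB,
                    GL_UNSIGNED_BYTE, videoBuffer);
}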
I tested this in code that had already mapped static textures onto polygons successfully, so the missing piece seems to be the transfer from OpenML to OpenGL. Are the ML buffers in a slightly different format from OpenGL textures? From mlquery, my card's ML_VIDEO_COLORSPACE_INT32 setting is ML_COLORSPACE_CbYCr_601_HEAD and its ML_IMAGE_COLORSPACE_INT32 setting is ML_COLORSPACE_RGB_601_FULL.
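Incidentally, those two parameters can also be read back in code from the open path with mlGetControls; a rough sketch (openPath stands for whatever MLopenid the program obtained from mlOpen):

#include <ML/ml.h>
#include <stdio.h>

static void printColorspaces(MLopenid openPath)
{
    MLpv controls[3];

    controls[0].param = ML_VIDEO_COLORSPACE_INT32;   /* jack side */
    controls[1].param = ML_IMAGE_COLORSPACE_INT32;   /* memory/buffer side */
    controls[2].param = ML_END;

    if (mlGetControls(openPath, controls) == ML_STATUS_NO_ERROR) {
        printf("video colorspace %d, image colorspace %d\n",
               (int) controls[0].value.int32,
               (int) controls[1].value.int32);
    }
}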