Object coordinates to pixel coordinates: which steps?
Hi all,
I hope someone can help me with my problem.
Suppose I have a rotation matrix R and a translation vector T, and that I use R and T to model the object -> eye transformation of a point U defined in object space:
V = R*U+T, where V is in eye coordinates
Suppose I also have four real numbers Fx, Fy, Cx, Cy. They represent the intrinsic camera parameters:
- Fx = focal length expressed in pixel units in the horizontal direction
- Fy = focal length expressed in pixel units in the vertical direction
- Cx, Cy = coordinates of the principal point
R and T are computed by solving the camera pose estimation problem.
To obtain the pixel coordinates and render a cube (the cube is rendered, or "registered", perfectly on a planar rectangular target viewed with the camera; I print the target on plain card stock), I do the following:
x = -Fx*(V_x/V_z) + Cx
y = -Fy*(V_y/V_z) + Cy      (1)
I then use custom-made line-drawing routines to render a wire-frame cube. The registration is perfect: the cube appears to sit precisely on the planar target and moves around correctly.
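For concreteness, the projection step described above can be sketched as a small C routine (names here are illustrative, not the actual code):

```c
/* Sketch of the CPU-side projection: V = R*U + T, then equation (1).
   R, T, Fx, Fy, Cx, Cy are assumed inputs as defined in the post. */
typedef struct { float x, y; } Pixel;

static Pixel project(const float R[3][3], const float T[3],
                     float Fx, float Fy, float Cx, float Cy,
                     const float U[3])
{
    /* object -> eye: V = R*U + T */
    float V[3];
    for (int i = 0; i < 3; i++)
        V[i] = R[i][0]*U[0] + R[i][1]*U[1] + R[i][2]*U[2] + T[i];

    /* perspective divide plus intrinsics, equation (1) */
    Pixel p;
    p.x = -Fx * (V[0] / V[2]) + Cx;
    p.y = -Fy * (V[1] / V[2]) + Cy;
    return p;
}
```

Note that with the minus signs in equation (1), V_z must be negative for points in front of the camera, i.e. the eye frame already looks down the negative z axis, as in OpenGL.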
Now I would like to switch to OpenGL ES rendering, but I am facing a lot of problems.
First, the projection matrix. I have studied the OpenGL ES coordinate transformation pipeline to understand how I should set up the projection matrix so that it reproduces equation (1). The frustum planes are defined as follows:
the screen width is w and the screen height is h. The viewport
is set at (0,0) with width w and height h, so the center of the
viewport is (w/2, h/2) = (ox, oy).
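For reference, the frustum extents used in the code below fall out of matching the GL pipeline against equation (1) (a quick derivation using the symbols already defined; n is the near-plane distance):

```latex
x_w = o_x + \frac{w}{2}\,x_{ndc},
\qquad
x_{ndc} = \frac{2n}{r-l}\cdot\frac{V_x}{-V_z} - \frac{r+l}{r-l}.
```

Matching $x_w = -F_x\,V_x/V_z + C_x$ term by term gives

```latex
r-l = \frac{w\,n}{F_x},
\qquad
r+l = \left(1 - \frac{2C_x}{w}\right)(r-l),
```

and analogously $t-b = h\,n/F_y$ and $t+b = (1 - 2C_y/h)(t-b)$, which are exactly the `rml`, `rpl`, `tmb`, `tpb` values computed below.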
I set up the projection matrix in the standard way, using n = 0.1, f = 1000.0 and the values above:
glViewport(0, 0, 320, 240);
glMatrixMode(GL_PROJECTION);

float near = 0.1;
float far = 1000.0;
float intrinsic[16];
memset(intrinsic, 0, sizeof(intrinsic));

float rml = (320.0 * near) / Fx;                // r - l
float tmb = (240.0 * near) / Fy;                // t - b
float rpl = (1.0 - (2.0 * Cx) / 320.0) * rml;   // r + l
float tpb = (1.0 - (2.0 * Cy) / 240.0) * tmb;   // t + b

// column-major, same layout glFrustum would produce
intrinsic[0]  = (2.0 * near) / rml;
intrinsic[5]  = (2.0 * near) / tmb;
intrinsic[8]  = rpl / rml;
intrinsic[9]  = tpb / tmb;
intrinsic[10] = -(far + near) / (far - near);
intrinsic[11] = -1.0;
intrinsic[14] = -(2.0 * far * near) / (far - near);
glLoadMatrixf(intrinsic);
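As a sanity check, the matrix above can be verified on the CPU without GL at all: build it, push one eye-space point through it plus the viewport transform, and compare against equation (1). A sketch (the viewport is hard-coded to 320x240; test values are made up):

```c
#include <string.h>

typedef struct { float x, y; } Win;

/* Build the projection matrix exactly as above and return the window
   coordinates of an eye-space point V for a (0,0,320,240) viewport. */
static Win gl_project(float Fx, float Fy, float Cx, float Cy, const float V[3])
{
    const float near = 0.1f, far = 1000.0f, w = 320.0f, h = 240.0f;
    float P[16];
    memset(P, 0, sizeof(P));
    float rml = (w * near) / Fx, tmb = (h * near) / Fy;
    P[0]  = (2.0f * near) / rml;
    P[5]  = (2.0f * near) / tmb;
    P[8]  = 1.0f - (2.0f * Cx) / w;   /* rpl / rml */
    P[9]  = 1.0f - (2.0f * Cy) / h;   /* tpb / tmb */
    P[10] = -(far + near) / (far - near);
    P[11] = -1.0f;
    P[14] = -(2.0f * far * near) / (far - near);

    /* clip = P * (V,1), column-major; the z row is not needed for x,y */
    float cx = P[0] * V[0] + P[8] * V[2];
    float cy = P[5] * V[1] + P[9] * V[2];
    float cw = -V[2];                  /* P[11] * V[2] */

    /* NDC -> window for viewport (0,0,w,h) */
    Win r = { (w * 0.5f) * (cx / cw) + w * 0.5f,
              (h * 0.5f) * (cy / cw) + h * 0.5f };
    return r;
}
```

Algebraically this reproduces equation (1) in both x and y, so the projection matrix itself looks consistent. Note, though, that the comparison is purely numeric: OpenGL's window origin is at the bottom-left, while Cy is normally measured from the top-left of the image.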
After all the transformations, the pixel coordinates should be exactly (or at least very close to) those obtained with equation (1).
But they are not! The cube is rendered incorrectly; it overlaps correctly only when the camera looks at the planar target from directly above. I load the modelview matrix as follows:
memset(glR, 0, sizeof(glR));
glR[12] = T[0];
glR[13] = T[1];
glR[14] = T[2];
glR[15] = 1.0;   // homogeneous entry, needed after the memset
for (int i = 0; i < 3; i++)
    for (int j = 0; j < 3; j++)
        glR[j*4 + i] = R[i][j] * S[i];   // column-major rotation; S is a per-axis scale
glLoadMatrixf(glR);
When I look at the planar target from directly above (perpendicular to it), the translation along y is inverted: if I move the phone up, the cube moves up instead of down. The rotation part is completely wrong. If I set glR[13] = -T[1], the translation becomes nearly correct, but the cube no longer overlaps the one rendered with my own routines (it is drawn slightly higher).
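One thing worth checking (an assumption on my part, not a confirmed fix for the above): pose-estimation code usually works in a camera frame with +y pointing down and +z pointing forward, while OpenGL eye space has +y up and +z toward the viewer. Premultiplying [R|T] by diag(1,-1,-1) converts between the two conventions; a sketch of building the column-major result:

```c
/* Convert a computer-vision pose [R|T] (x right, y down, z forward) into a
   column-major OpenGL modelview matrix (x right, y up, z toward viewer) by
   negating the y and z rows of both R and T. */
static void cv_pose_to_gl(const float R[3][3], const float T[3],
                          float glM[16])   /* column-major output */
{
    const float flip[3] = { 1.0f, -1.0f, -1.0f };  /* per-row sign: x, y, z */
    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++)
            glM[j * 4 + i] = flip[i] * R[i][j];    /* sign-flipped rotation rows */
        glM[12 + i] = flip[i] * T[i];              /* sign-flipped translation */
        glM[i * 4 + 3] = 0.0f;                     /* bottom row */
    }
    glM[15] = 1.0f;
}
```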
I really need your help!
The cube I render with my own code is defined with the bottom face at z = 0 and
the top face at z = -1; the two faces are defined as