# Viewing and Transformations

##### How does the camera work in OpenGL?

As far as OpenGL is concerned, there is no camera. More specifically, the camera is always located at the eye space coordinate (0.0, 0.0, 0.0). To give the appearance of moving the camera, your OpenGL application must move the scene with the inverse of the camera transformation by placing it on the MODELVIEW matrix. This is commonly referred to as the viewing transformation.

In practice this is mathematically equivalent to a camera transformation but more efficient, because model transformations and camera transformations are concatenated into a single matrix. As a consequence, certain operations must be performed while the viewing transformation, and only the viewing transformation, is on the MODELVIEW matrix. For example, to position a light source in world space, it must be positioned while the MODELVIEW matrix contains the viewing transformation and nothing else.

##### How can I move my eye, or camera, in my scene?

OpenGL doesn't provide an interface to do this using a camera model. However, the GLU library provides the gluLookAt() function, which takes an eye position, a position to look at, and an up vector, all in object space coordinates. This function computes the inverse camera transform according to its parameters and multiplies it onto the current matrix stack.

##### Where should my camera go, the ModelView or Projection matrix?

The GL_PROJECTION matrix should contain only the projection transformation needed to transform eye space coordinates into clip coordinates.

The GL_MODELVIEW matrix, as its name implies, should contain modeling and viewing transformations, which transform object space coordinates into eye space coordinates. Remember to place the camera transformations on the GL_MODELVIEW matrix and never on the GL_PROJECTION matrix.

Think of the projection matrix as describing the attributes of your camera, such as field of view, focal length, fisheye lens, etc. Think of the ModelView matrix as where you stand with the camera and the direction you point it.

The game dev FAQ has good information on these two matrices.

##### How do I implement a zoom operation?

A simple method for zooming is to use a uniform scale on the ModelView matrix. However, this often results in clipping by the zNear and zFar clipping planes if the model is scaled too large.

A better method is to restrict the width and height of the view volume in the Projection matrix.

For example, your program might maintain a floating-point zoom factor driven by user input. A value of 1.0 means no zoom; values less than 1.0 narrow the field of view and zoom in, while values greater than 1.0 widen it and zoom out. Code to create this effect might look like:

```
static float zoomFactor = 1.0f; /* Global, if you want. Modified by user input. Initially 1.0 */

/* A routine for setting the projection matrix. May be called from a resize
   event handler in a typical application. Takes integer width and height
   dimensions of the drawing area. Creates a projection matrix with correct
   aspect ratio and zoom factor. */
void setProjectionMatrix (int width, int height)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity(); /* start from identity so repeated calls don't accumulate */
    gluPerspective(50.0*zoomFactor, (float)width/(float)height, zNear, zFar);
    /* ...where 'zNear' and 'zFar' are up to you to fill in. */
}
```

Instead of gluPerspective(), your application might use glFrustum(). This gets tricky, because the left, right, bottom, and top parameters, along with the zNear plane distance, also affect the field of view. Assuming you desire to keep a constant zNear plane distance (a reasonable assumption), glFrustum() code might look like this:

```
glFrustum(left*zoomFactor, right*zoomFactor,
          bottom*zoomFactor, top*zoomFactor,
          zNear, zFar);
```

glOrtho() is similar.

##### Given the current ModelView matrix, how can I determine the object-space location of the camera?

The "camera" or viewpoint is at (0., 0., 0.) in eye space. When you turn this into a vector [0 0 0 1] and multiply it by the inverse of the ModelView matrix, the resulting vector is the object-space location of the camera.

OpenGL doesn't let you inquire (through a glGet* routine) the inverse of the ModelView matrix. You'll need to compute the inverse with your own code.

##### How do I make the camera "orbit" around a point in my scene?

You can simulate an orbit by translating/rotating the scene/object and leaving your camera in the same place. For example, to orbit an object placed somewhere on the Y axis, while continuously looking at the origin, you might do this:

```
gluLookAt(camera[0], camera[1], camera[2], /* look from camera XYZ */
          0, 0, 0,                         /* look at the origin */
          0, 1, 0);                        /* positive Y up vector */
glRotatef(orbitDegrees, 0.f, 1.f, 0.f);    /* orbit the Y axis */
/* ...where orbitDegrees is derived from mouse motion */

glCallList(SCENE);                         /* draw the scene */
```

If you insist on physically orbiting the camera position, you'll need to transform the current camera position vector before using it in your viewing transformations.

In either event, I recommend you investigate gluLookAt() (if you aren't using this routine already).

##### How can I automatically calculate a view that displays my entire model? (I know the bounding sphere and up vector.)

The following is from a posting by Dave Shreiner on setting up a basic viewing system:

First, compute a bounding sphere for all objects in your scene. This should provide you with two bits of information: the center of the sphere (let ( c.x, c.y, c.z ) be that point) and its diameter (call it "diam").

Next, choose a value for the zNear clipping plane. General guidelines are to choose something larger than, but close to 1.0. So, let's say you set

```
zNear = 1.0;
zFar  = zNear + 3.0 * diam; /* deep enough to contain the whole sphere
                               when the eye is 2*diam from its center */
```

Structure your matrix calls in this order (for an Orthographic projection):

```
GLdouble left   = c.x - diam;
GLdouble right  = c.x + diam;
GLdouble bottom = c.y - diam;
GLdouble top    = c.y + diam;

glMatrixMode(GL_PROJECTION);
glOrtho(left, right, bottom, top, zNear, zFar);
glMatrixMode(GL_MODELVIEW);
```

This approach should center your objects in the middle of the window and stretch them to fit (i.e., it assumes you're using a window with an aspect ratio of 1.0). If your window isn't square, compute left, right, bottom, and top as above, and put the following logic before the call to glOrtho():

```
GLdouble aspect = (GLdouble) windowWidth / windowHeight;

if ( aspect < 1.0 ) { // window taller than wide
    bottom /= aspect;
    top /= aspect;
} else {
    left *= aspect;
    right *= aspect;
}
```

The above code should position the objects in your scene appropriately. If you intend to manipulate the scene (e.g., rotate it), you need to add a viewing transform.

A typical viewing transform will go on the ModelView matrix and might look like this:

`gluLookAt(0., 0., 2.*diam, c.x, c.y, c.z, 0.0, 1.0, 0.0);`

##### Why doesn't gluLookAt work?

This is usually caused by incorrect transformations.

Assuming you are using gluPerspective() on the Projection matrix stack with zNear and zFar as the third and fourth parameters, you need to set gluLookAt on the ModelView matrix stack, and pass parameters so your geometry falls between zNear and zFar.

It's usually best to experiment with a simple piece of code when you're trying to understand viewing transformations. Let's say you are trying to look at a unit sphere centered on the origin. You'll want to set up your transformations as follows:

```
glMatrixMode(GL_PROJECTION);
gluPerspective(50.0, 1.0, 3.0, 7.0);
glMatrixMode(GL_MODELVIEW);
gluLookAt(0.0, 0.0, 5.0,
          0.0, 0.0, 0.0,
          0.0, 1.0, 0.0);
```

It's important to note how the Projection and ModelView transforms work together.

In this example, the Projection transform sets up a 50.0-degree field of view, with an aspect ratio of 1.0. The zNear clipping plane is 3.0 units in front of the eye, and the zFar clipping plane is 7.0 units in front of the eye. This leaves a Z volume distance of 4.0 units, ample room for a unit sphere.

The ModelView transform sets the eye position at (0.0, 0.0, 5.0), and the look-at point is the origin in the center of our unit sphere. Note that the eye position is 5.0 units away from the look at point. This is important, because a distance of 5.0 units in front of the eye is in the middle of the Z volume that the Projection transform defines. If the gluLookAt() call had placed the eye at (0.0, 0.0, 1.0), it would produce a distance of 1.0 to the origin. This isn't long enough to include the sphere in the view volume, and it would be clipped by the zNear clipping plane.

Similarly, if you place the eye at (0.0, 0.0, 10.0), the distance of 10.0 to the look at point will result in the unit sphere being 10.0 units away from the eye and far behind the zFar clipping plane placed at 7.0 units.

If this has confused you, read up on transformations in the OpenGL red book or OpenGL Specification. After you understand object coordinate space, eye coordinate space, and clip coordinate space, the above should become clear. Also, experiment with small test programs. If you're having trouble getting the correct transforms in your main application project, it can be educational to write a small piece of code that tries to reproduce the problem with simpler geometry.

##### How do I get a specified point (XYZ) to appear at the center of the scene?

gluLookAt() is the easiest way to do this. Simply set the X, Y, and Z values of your point as the fourth, fifth, and sixth parameters to gluLookAt().

##### I put my gluLookAt() call on my Projection matrix and now fog, lighting, and texture mapping don't work correctly. What happened?

See "How does the camera work in OpenGL?" at the top of this page for an explanation of this problem.

##### How can I create a stereo view?

Stereo viewing is accomplished by presenting a different image to the viewer's left and right eyes. These images must be appropriate for the viewer's spatial relationship to the display, much more so than for a mono 3D image, and the method used is tied closely to the display technology.

Some graphics systems and display devices support stereo viewing in hardware, providing left and right framebuffers in addition to the front and back buffers of conventional double-buffered systems. Other systems support stereo when two viewports are created in specific screen regions and a specific video mode sends them to the screen. In conjunction with these modes, the viewer often wears glasses, either shuttered or polarized, that select the displayed image appropriate to each eye.

Even without such hardware features, a developer can generate stereo views. One approach is color filtering (anaglyph): draw the left- and right-eye images into, for example, the red and blue framebuffer components, and let red and blue filters in the viewer's glasses select between them. Alternatively, multiple systems or graphics cards (or even a single card) can generate two entirely separate video signals, one per eye, which are delivered to the appropriate eye through a display with polarizing filters, a head-mounted display, or some other device operating on similar principles.

From an OpenGL perspective, stereo rendering has two requirements: use the appropriate setup to render to the left and right eyes (be it color masks, separate contexts, or different viewports), and match the OpenGL projection to the geometric relationship between the viewer's two eyes and the display. Finally, the two eye positions in the virtual world must be separated by the pupil distance on the ModelView stack; this separation is naturally a translation in eye space, though it can be computed in other equivalent ways.

Paul Bourke has assembled information on stereo OpenGL viewing.