# Vertex Transformation


## Revision as of 14:43, 22 July 2012

Warning: This article describes legacy OpenGL APIs that have been removed from core OpenGL 3.1 and above (they are only deprecated in OpenGL 3.0). It is recommended that you not use this functionality in your programs.

This page contains a small example that shows how a vertex is transformed.

This page will demonstrate:

- object space ---> world space
- world space ---> eye space
- eye space ---> clip space
- clip space ---> normalized device space
- normalized device space ---> window space

So looking at the above, there are 5 steps. In GL, however, the modelview matrix is actually two matrices in one: the camera (view) matrix multiplied with the object's transform matrix. Therefore, there are really only 4 steps.

Let's build a projection matrix:

glLoadIdentity();
glFrustum(-1.0, 0.1, -0.1, 0.1, 1.0, 1000.0);

The resulting matrix looks like this:

[1.81, 0.00, -0.81, 0.00]
[0.00, 10.0, 0.00, 0.00]
[0.00, 0.00, -1.002, -2.002]
[0.00, 0.00, -1.00, 0.00]

Let's build a very simple modelview matrix:

glLoadIdentity();
glTranslatef(1.0, 2.0, 3.0);

The resulting matrix looks like this:

[1.0, 0.00, 0.00, 1.00]
[0.00, 1.0, 0.00, 2.00]
[0.00, 0.00, 1.00, 3.00]
[0.00, 0.00, 0.00, 1.00]

And of course, the viewport also matters:

glViewport(0, 0, 800, 600);


## Step 1 : Getting to eye coordinates

The vertex you give to GL is considered to be in object space.

Let's assume the values are [1.5, 1.6, -99.6, 1.0]. Notice that w = 1.0. W is usually equal to 1.0 even if you don't submit it to GL. Anything that is a point will have w = 1.0, such as the position of a point light or a spot light.

When you transform a vertex by the modelview matrix, the vertex is considered to be in eye space.

Note: The modelview matrix is actually 2 matrices in 1: the world matrix, which transforms from object space to world space, and the view matrix, which transforms from world space to eye space.

The vertex becomes [2.5, 3.6, -96.6, 1.0]

How did we get [2.5, 3.6, -96.6, 1.0]? In case you don't know anything about linear algebra, here is how we did the calculations.

This is the modelview matrix:

[1.0, 0.00, 0.00, 1.00]
[0.00, 1.0, 0.00, 2.00]
[0.00, 0.00, 1.00, 3.00]
[0.00, 0.00, 0.00, 1.00]

Take the first row of the matrix and do a dot product with your vertex XYZW values to get the X value of your eye space vertex:

1.0 * 1.5 + 0.0 * 1.6 + 0.0 * -99.6 + 1.0 * 1.0 = 2.5

Take the second row of the matrix and do a dot product with your vertex XYZW values to get the Y value of your eye space vertex:

0.0 * 1.5 + 1.0 * 1.6 + 0.0 * -99.6 + 2.0 * 1.0 = 3.6

Take the third row of the matrix and do a dot product with your vertex XYZW values to get the Z value of your eye space vertex:

0.0 * 1.5 + 0.0 * 1.6 + 1.0 * -99.6 + 3.0 * 1.0 = -96.6

Take the fourth row of the matrix and do a dot product with your vertex XYZW values to get the W value of your eye space vertex:

0.0 * 1.5 + 0.0 * 1.6 + 0.0 * -99.6 + 1.0 * 1.0 = 1.0

## Step 2 : Getting to clip coordinates

When you transform the eye space vertex by the projection matrix, you get [83.58, 36.0, 94.7914, 96.6]. These are called clip coordinates.

## Step 3 : Getting to normalized device coordinates

Then the reciprocal of w is computed: 1/96.6 = 0.0103520

Each component is multiplied by 1/w, giving [0.86522016, 0.372672, 0.9812785, 1.0]. These are called normalized device coordinates.

Here, if z is between -1.0 and 1.0, the vertex is inside the znear and zfar clipping planes.

## Step 4 : Getting to window space

Now the final stage of the transformation pipeline:

The z is transformed to the 0.0 to 1.0 range. Anything outside this range gets clipped away. Notice that glDepthRange() has an effect here; the default is glDepthRange(0.0, 1.0).

The final operation looks like this:

windowCoordinate[0] = (x * 0.5 + 0.5) * viewport[2] + viewport[0];
windowCoordinate[1] = (y * 0.5 + 0.5) * viewport[3] + viewport[1];
windowCoordinate[2] = (1.0 + z) * 0.5; //Convert to the 0.0 to 1.0 range. Anything outside that range gets clipped.

and the vertex will now be XYZ = [746.1, 411.8, 0.990639]

W doesn't matter.

So, is [746.1, 411.8, 0.990639] going to land within our viewport? Remember that our viewport is defined as glViewport(0, 0, 800, 600).

- The x value is within 0 to 800, so that is good.
- The y value is within 0 to 600, so that is good.
- The z value is between 0.0 and 1.0, so that is good.

That means that the vertex falls inside the viewport.

## More Examples

So in the example above, the z ended up being 0.990639. Since it is between 0.0 and 1.0, this vertex will not get clipped.

What if the vertex is [1.5, 1.6, 5.0, 1.0]?

eye space vertex would be [2.5, 3.6, 8.0, 1.0]

clip coordinates would be [-2.0, 36.0, -10.018, -8.0]

1/w is -0.125

normalized device coordinates would be [0.25, -4.5, 1.25225, 1.0]

window space would be XYZ = [500.0, -1050.0, 1.12613]. W doesn't matter.

Since y is below 0.0 and z is above 1.0, this vertex would get clipped.

Another example :

What if the vertex is [1.5, 1.6, -1010.0, 1.0]? Notice that the z value is above the zfar value supplied to glFrustum (ignoring the negative sign).

eye space vertex would be [2.5, 3.6, -1007.0, 1.0]

clip coordinates would be [828.455, 36.0, 1007.01, 1007.00]

1/w is 0.00099305

normalized device coordinates would be [0.822696, 0.035750, 1.0000139, 1.0]

window space would be XYZ = [729.078, 310.725, 1.00001]. W doesn't matter.

The x and y values are fine, but the z value is above 1.0, so this vertex would get clipped.

## The Z-Buffer

So let's assume your hardware is about to write to a pixel and it will write to the z-buffer as well.

The z-buffer is typically a normalized integer format. The number of bits depends on what you asked for when you selected a pixel format: typically a 16, 24, or 32 bit integer format. It is possible to create a floating point z-buffer, but we will only discuss the integer formats here.

Since z-values are from 0.0 to 1.0 in floating point format, they need to be remapped.

For a 16 bit z-buffer, 0.0 to 1.0 would get remapped to 0 to 2^16-1 (65535), which doesn't give much precision.

For a 24 bit z-buffer, 0.0 to 1.0 would get remapped to 0 to 2^24-1 (16777215). Most of the time, people create a D24S8 buffer, which means 24 bits for depth and 8 bits for stencil, and this packs into 32 bits really nicely. Just be sure to clear both for best performance: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

For a 32 bit z-buffer, 0.0 to 1.0 would get remapped to 0 to 2^32-1 (4294967295). In practice, a 32 bit z-buffer doesn't offer a big advantage over a 24 bit z-buffer.

See also Depth Buffer Precision.