# Geometry Shader Examples

Warning: This article describes legacy OpenGL APIs that have been removed from core OpenGL 3.1 and above (they are only deprecated in OpenGL 3.0). It is recommended that you not use this functionality in your programs.

GS = Geometry Shader

This is a short document about Geometry Shaders in OpenGL.

The first extension to be introduced was GL_EXT_geometry_shader4, on October 1, 2007.

It was then promoted to the ARB-approved GL_ARB_geometry_shader4 on July 8, 2008.

GL_EXT_geometry_shader4 is available on the nVidia GeForce 8 series and up (the GeForce 8 is a Shader Model 4.0 GPU).

On the ATI/AMD side, all Radeons with "HD" in their name are SM 4.0 GPUs, but for a long time their drivers did not support geometry shaders.

On June 10, 2009, Catalyst 9.6 was released with support for various new extensions, including geometry shaders:

http://www.geeks3d.com/20090610/ati-catalyst-96-beta-a-stack-of-new-opengl-extensions

Fact #1 : Geometry shaders are now core functionality (since OpenGL 3.2, with a somewhat different interface from the one described here).

Fact #2 : GL_EXT_geometry_shader4 and GL_ARB_geometry_shader4 are, for all practical purposes, identical extensions.

Here is the specification:

http://www.opengl.org/registry/specs/ARB/geometry_shader4.txt
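Before relying on either extension, it is worth checking at runtime that the driver actually advertises it. Here is a minimal sketch in C (the helper name `has_extension` is made up for this example; in a real program you would pass it the string returned by `glGetString(GL_EXTENSIONS)`):

```c
#include <string.h>

/* Returns 1 if `name` appears as a complete token in the space-separated
 * extension list `extlist`, 0 otherwise. A plain strstr() is not enough,
 * because one extension name can be a prefix of another. */
int has_extension(const char *extlist, const char *name)
{
    size_t len = strlen(name);
    const char *p = extlist;
    while ((p = strstr(p, name)) != NULL)
    {
        if ((p == extlist || p[-1] == ' ') &&
            (p[len] == ' ' || p[len] == '\0'))
            return 1;
        p += len;
    }
    return 0;
}
```

Typical usage would be `if (has_extension((const char *)glGetString(GL_EXTENSIONS), "GL_ARB_geometry_shader4")) ...`.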

## Introduction

The GS unit comes AFTER the VS unit. In the VS unit you simply transform vertices, normals, texcoords, and whatever else is in the input stream (such as tangent vectors or colors). The GS unit additionally has connectivity information: what makes up a triangle, and what is adjacent to this triangle. The GS unit can be used to generate new geometry. For example, if the input is a triangle, you can output a few triangles; if the input is a line, you can output a few lines; if the input is a point, you can output some points. Once you emit the new extra geometry, it goes down to the rest of the fixed-function primitive processing and reaches the fragment stage for rasterization. The order in which it reaches rasterization is not necessarily predictable, which is a problem if you are using order-dependent blending such as glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).

In the GS you can sample a texture, and you can transform a vertex with any matrix you like (the projection matrix, the modelview matrix, and so on); you can do whatever you want, just like in the other shader stages.

## Short Example

The language used here is GLSL.

The language used in the nVidia demos (on the nVidia website) is instead the low-level assembly interface GL_NV_geometry_program4.

Since GL_ARB_geometry_shader4 is an extension, you need to put

```
#extension GL_ARB_geometry_shader4 : enable
```

near the top of your geometry shader, after the #version directive.

```
// Geometry Shader
#version 120
#extension GL_ARB_geometry_shader4 : enable

// This example has two parts:
// step a) pass the incoming primitive down the pipeline unchanged:
//    there are gl_VerticesIn vertices in the input primitive;
//    copy each input position into gl_Position,
//    call EmitVertex() to 'create' each new vertex,
//    and call EndPrimitive() to signal that the primitive is complete.
// step b) create a new piece of geometry:
//    the same loop, but with vertex.z negated,
//    so the new primitive is mirrored.
void main()
{
    int i;
    vec4 vertex;

    // Pass-thru!
    for(i = 0; i < gl_VerticesIn; i++)
    {
        gl_Position = gl_PositionIn[i];
        EmitVertex();
    }
    EndPrimitive();

    // New piece of geometry!
    for(i = 0; i < gl_VerticesIn; i++)
    {
        vertex = gl_PositionIn[i];
        vertex.z = -vertex.z;
        gl_Position = vertex;
        EmitVertex();
    }
    EndPrimitive();
}
```

The first for loop just emits the incoming primitive unchanged.

For every vertex, we call EmitVertex();

For every primitive, we call EndPrimitive();

Those two are special instructions for the GS unit.

We then generate a second primitive that is mirrored across the xy-plane (the plane z = 0).
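Compiling the shader is only half the story: with GL_ARB_geometry_shader4, the input primitive type, the output primitive type, and the maximum number of emitted vertices are program parameters that must be set before linking (they are not declared in the GLSL source). Here is a sketch of the host-side C calls, assuming `program` is a program object that already has vertex and fragment shaders attached:

```
// The geometry shader object itself is created with a new shader type
GLuint gs = glCreateShader(GL_GEOMETRY_SHADER_ARB);
// ... glShaderSource, glCompileShader, glAttachShader(program, gs) as usual ...

// These three parameters must be set BEFORE glLinkProgram
glProgramParameteriARB(program, GL_GEOMETRY_INPUT_TYPE_ARB, GL_TRIANGLES);
glProgramParameteriARB(program, GL_GEOMETRY_OUTPUT_TYPE_ARB, GL_TRIANGLE_STRIP);
// The example above emits each triangle twice: 2 * 3 = 6 vertices
glProgramParameteriARB(program, GL_GEOMETRY_VERTICES_OUT_ARB, 6);
glLinkProgram(program);
```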

The example doesn't show it, but if you use custom varyings, each stage boundary needs its own set of matching names: for example, **varying vec2 GeomTexCoord0** between the vertex and geometry shaders, and **varying vec2 FragTexCoord0** between the geometry and fragment shaders.

In the vertex shader, since the names must match, declare and write **varying vec2 GeomTexCoord0**; the geometry shader receives it as an array, e.g. **varying in vec2 GeomTexCoord0[3]** for triangle input.

In the fragment shader, declare **varying vec2 FragTexCoord0**, matching the geometry shader's **varying out vec2 FragTexCoord0**.

## Short Example 2

```
// Vertex Shader
#version 110

// Uniforms
uniform mat4 ProjectionModelviewMatrix;
uniform vec4 TexMatrix0_a;    // row 0 of the texture matrix
uniform vec4 TexMatrix0_b;    // row 1 of the texture matrix
uniform vec4 LightPosition0;
uniform mat4 ModelviewMatrix;

// Varyings
varying vec2 VTexCoord0;
varying vec3 VHalfVector0;
varying vec3 VEyeNormal;
varying vec3 VEyeVertex;

void main()
{
    vec3 eyeVertex;
    vec3 lightVector, eyeVector;

    gl_Position = ProjectionModelviewMatrix * gl_Vertex;
    VTexCoord0.x = dot(TexMatrix0_a, gl_MultiTexCoord0);
    VTexCoord0.y = dot(TexMatrix0_b, gl_MultiTexCoord0);
    eyeVertex = vec3(ModelviewMatrix * gl_Vertex);
    VEyeVertex = eyeVertex;
    eyeVector = normalize(-eyeVertex);
    lightVector = LightPosition0.xyz;
    VHalfVector0 = lightVector + eyeVector;    // no need to normalize the sum here
    VEyeNormal = vec3(ModelviewMatrix * vec4(gl_Normal, 0.0));
}
```

```
// Geometry Shader
#version 120
#extension GL_ARB_geometry_shader4 : enable

// Uniforms
uniform mat4 ProjectionMatrix;

// Varyings
varying in vec2 VTexCoord0[3];    // [3] because the input primitive is a triangle
varying in vec3 VHalfVector0[3];
varying in vec3 VEyeNormal[3];
varying in vec3 VEyeVertex[3];
varying out vec2 TexCoord0;
varying out vec3 HalfVector0;
varying out vec3 EyeNormal;

void main()
{
    int i;
    vec3 newVertex;

    // Pass through the original primitive
    for(i = 0; i < gl_VerticesIn; i++)
    {
        gl_Position = gl_PositionIn[i];
        TexCoord0 = VTexCoord0[i];
        HalfVector0 = VHalfVector0[i];
        EyeNormal = VEyeNormal[i];
        EmitVertex();
    }
    EndPrimitive();

    // Push each vertex out a little along its normal
    for(i = 0; i < gl_VerticesIn; i++)
    {
        newVertex = VEyeNormal[i] + VEyeVertex[i];
        gl_Position = ProjectionMatrix * vec4(newVertex, 1.0);
        TexCoord0 = VTexCoord0[i];
        HalfVector0 = VHalfVector0[i];
        EyeNormal = VEyeNormal[i];
        EmitVertex();
    }
    EndPrimitive();
}
```

```
// Fragment Shader
#version 110

// Uniforms
uniform sampler2D Texture0;
uniform vec4 LightPosition0;
uniform vec4 AllLightAmbient_MaterialAmbient;
uniform vec4 LightMaterialDiffuse0;
uniform vec4 LightMaterialSpecular0;
uniform float MaterialShininess;

// Varyings
varying vec2 TexCoord0;
varying vec3 HalfVector0;
varying vec3 EyeNormal;

// eyeNormal, lightVector and halfVector must already be normalized.
// Outputs the diffuse and specular colors; the caller then computes
// diffuse * texture_color + specular, with diffuse.a = material_diffuse.a.
void ComputeDirectionalLight(out vec4 diffuseColor, out vec4 specularColor,
                             in vec3 eyeNormal, in vec3 lightVector, in vec3 halfVector,
                             in vec4 lightMaterialDiffuse, in vec4 lightMaterialSpecular)
{
    float dotProduct;

    dotProduct = clamp(dot(eyeNormal, lightVector), 0.0, 1.0);
    diffuseColor = dotProduct * lightMaterialDiffuse;
    specularColor = vec4(0.0);
    dotProduct = clamp(dot(eyeNormal, halfVector), 0.0, 1.0);
    if(dotProduct > 0.0)
        specularColor = pow(dotProduct, MaterialShininess) * lightMaterialSpecular;
}

void main()
{
    vec4 texel, diffuseColor, specularColor;
    vec4 ColorSum;
    vec3 eyeNormal, halfVector;

    texel = texture2D(Texture0, TexCoord0);
    eyeNormal = normalize(EyeNormal);
    halfVector = normalize(HalfVector0);
    ComputeDirectionalLight(diffuseColor, specularColor, eyeNormal, LightPosition0.xyz,
                            halfVector, LightMaterialDiffuse0, LightMaterialSpecular0);
    ColorSum = (AllLightAmbient_MaterialAmbient + diffuseColor) * texel + specularColor;
    ColorSum.a = texel.a * LightMaterialDiffuse0.a;
    gl_FragColor = clamp(ColorSum, 0.0, 1.0);
}
```

## Additional Info

The GS unit in that generation of GPUs (the GeForce 8, 9, 100, 200 and 300 series) is considered too slow to be practical by some people. It is also considered too limited by others, because there is a hard limit on how many new vertices a single invocation can emit.

The next generation of GPUs will be more flexible and will support additional shader stages, such as the tessellation control and tessellation evaluation shaders.