Vertex Texture Fetch

From OpenGL Wiki
Revision as of 20:51, 20 May 2008 by V-man (talk | contribs)

The following article discusses the Vertex Texture Fetch feature of today's GPUs.

Vertex Texture Fetch will be referred to as VTF from now on.
Texture image units will be referred to as TIU.

What version of GL supports VTF?
In order to do VTF, you need shader support. GLSL has been part of core OpenGL since GL 2.0.
You also need a GPU that supports VTF.

GPUs that support VTF use the same TIUs as the fragment shader. This means that you can bind a texture to, say, TIU 1 and sample that same texture in both the vertex shader and the fragment shader. Binding works the usual way:

 glActiveTexture(GL_TEXTURE1);
 glBindTexture(GL_TEXTURE_2D, textureID);

In order to know how many TIUs your vertex shader has access to, call

 GLint MaxVertexTextureImageUnits;
 glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &MaxVertexTextureImageUnits);
 GLint MaxCombinedTextureImageUnits;
 glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &MaxCombinedTextureImageUnits);

GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS is the number of TIUs accessible from your VS and FS combined.
If both your VS and your FS access the same texture, that counts as two TIUs against the combined limit.
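The counting rule above can be sketched as a small helper. This is our own bookkeeping function for illustration, not a GL call: each stage's sampler uses simply add up against the combined limit, even when both stages sample the same texture.

```c
#include <assert.h>

/* Sketch (hypothetical helper, not part of OpenGL): the combined TIU
   count is the sum of the units used in each stage. Sampling the same
   texture in the VS and the FS still costs two units. */
int combined_units_used(int vs_samplers, int fs_samplers)
{
    return vs_samplers + fs_samplers;
}
```

So a VS sampling one texture and an FS sampling that same texture plus one more uses three of the GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS units.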

An issue with accessing a texture is the texture format. Your GPU might support a wide range of formats in the fragment shader's TIUs, while the vertex shader's TIUs support only a few. For example, nVidia published Vertex_Format.pdf when the GeForce 6 was released:
the GeForce 6 supports only GL_TEXTURE_2D with the GL_LUMINANCE32F_ARB and GL_RGBA32F_ARB internal formats (from ARB_texture_float). It doesn't support any of the other floating point formats or the fixed point formats, and there is no compressed floating point format.
Example code:

 GLuint vertex_texture;
 glGenTextures(1, &vertex_texture);
 glBindTexture(GL_TEXTURE_2D, vertex_texture);
 //GeForce 6 class hardware supports only GL_NEAREST filtering for vertex textures
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
 glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, width, height, 0, GL_LUMINANCE, GL_FLOAT, data);

If you set up an unsupported filter mode, wrap mode, or texture format, the driver will fall back to software vertex processing, which is drastically slower.

Another thing to remember is that in the vertex shader the GPU doesn't know which mipmap to use: it has no way to compute the lambda (LOD) factor, since there are no screen-space derivatives at that stage.
You need to choose the mipmap level in the VS yourself.
Here is an example VS.
Notice that it uses texture2DLod() and selects mipmap level 0.

 uniform sampler2D Texture0;
 uniform mat4 ProjectionModelviewMatrix;
 void main()
 {
     //Read the texture offset. Offset in the z direction only
     vec4 texel = texture2DLod(Texture0, gl_MultiTexCoord0.xy, 0.0);
     vec4 newVertex = gl_Vertex;
     newVertex.z += texel.x;
     gl_Position = ProjectionModelviewMatrix * newVertex;
     gl_TexCoord[0].xy = gl_MultiTexCoord0.xy;
 }