Common Mistakes

This is a list of common mistakes made by OpenGL programmers.
 
=== Texture Upload ===
 
You create a texture and upload the pixels with glTexImage2D (or glTexImage1D, glTexImage3D), but there seem to be diagonal lines going through the image, or your program crashes. This is because the scanline of your pixel array is not a multiple of 4. The scanline length is width * bytes per pixel. By default, the unpack alignment is 4 (glPixelStorei(GL_UNPACK_ALIGNMENT, 4)); change it to glPixelStorei(GL_UNPACK_ALIGNMENT, 1) if your scanline is not a multiple of 4.
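
A minimal sketch, assuming a tightly packed RGB image (3 bytes per pixel, no row padding) whose width is not a multiple of 4; width, height and pixels stand in for your own image data:

  glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   /* rows are not 4-byte aligned */
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height,
               0, GL_RGB, GL_UNSIGNED_BYTE, pixels);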
 
=== Texture Precision ===
 
You call glTexImage2D(GL_TEXTURE_2D, 0, X, width, height, 0, format, type, pixels) and you set X to 1, 2, 3, or 4. You shouldn't set X to a bare number: although your code will work, it doesn't name a specific internal format. Set it to GL_RGBA8 or some other sized "internal precision" format. The GL documentation has a table of valid values.
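
For example, a sketch that requests 8 bits per channel explicitly (width, height and pixels are assumed to come from your image loader):

  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height,
               0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);    /* GL_RGBA8 instead of 4 */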
 
=== Depth Testing Doesn't Work ===
 
You probably did not ask for a depth buffer. If you are using GLUT, call glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH | GLUT_STENCIL); GLUT_DEPTH is what asks for a depth buffer. Also be sure to enable depth testing with glEnable(GL_DEPTH_TEST) and set a comparison function such as glDepthFunc(GL_LEQUAL).
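
A minimal GLUT sketch of that setup (remember to also clear the depth buffer every frame along with the color buffer):

  glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH | GLUT_STENCIL);  /* at startup */
  glEnable(GL_DEPTH_TEST);                               /* once the context exists */
  glDepthFunc(GL_LEQUAL);
  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);    /* every frame, before drawing */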
  
=== No Alpha in the Framebuffer ===
 
Be sure you create a double buffered context and ask for an alpha component. With GLUT, you can call glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_ALPHA | GLUT_DEPTH | GLUT_STENCIL); it is GLUT_ALPHA that asks for alpha bitplanes in the color buffer (GLUT_RGBA by itself only selects RGBA color mode).
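
A small sketch to verify what you actually received once the context is created (GL_ALPHA_BITS reports the number of alpha bitplanes in the default framebuffer):

  GLint alphaBits = 0;
  glGetIntegerv(GL_ALPHA_BITS, &alphaBits);   /* 0 means there is no destination alpha */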
 
=== glFinish and glFlush ===
Use glFlush if you are rendering directly to your window. It is better to have a double buffered window, but if you have a case where you want to render to the window directly, then go ahead.

There are a lot of tutorial websites that show this:

 glFlush();
 SwapBuffers();

Never call glFlush before calling SwapBuffers. SwapBuffers takes care of flushing and command processing.

What does glFlush do? It tells the driver to send all pending commands to the GPU immediately. This can actually reduce performance.

What does glFinish do? It tells the driver to send all pending commands to the GPU immediately and waits until the GPU has processed all of them. This can take a lot of time.

A modern OpenGL program should never use glFlush or glFinish. Certain benchmark software might use glFinish, so that all submitted work has completed before a timer is read.
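
With a double buffered window, the end of the frame should simply be the swap; a sketch using GLUT (glutSwapBuffers performs an implicit flush itself):

 glutSwapBuffers();   /* no glFlush or glFinish needed */
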
=== glDrawPixels ===
For good performance, use a format that is directly supported by the GPU, one that lets the driver basically do a memcpy to the GPU. Most graphics cards support GL_BGRA. Example:

  glDrawPixels(width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);

However, it is recommended that you use a texture instead and just update the texture with glTexSubImage2D.
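
A sketch of that approach, assuming a GL_RGBA8 texture named textureID (a hypothetical name) that was created once with glTexImage2D at the same width and height:

  glBindTexture(GL_TEXTURE_2D, textureID);
  glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                  GL_BGRA, GL_UNSIGNED_BYTE, pixels);    /* update, then draw a textured quad */
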
=== glEnableClientState(GL_INDEX_ARRAY) ===
What's wrong with this code?

  glBindBuffer(GL_ARRAY_BUFFER, VBOID);
  glVertexPointer(3, GL_FLOAT, sizeof(vertex_format), (const GLvoid *)0);
  glTexCoordPointer(2, GL_FLOAT, sizeof(vertex_format), (const GLvoid *)12);
  glNormalPointer(GL_FLOAT, sizeof(vertex_format), (const GLvoid *)20);
  glEnableClientState(GL_VERTEX_ARRAY);
  glEnableClientState(GL_TEXTURE_COORD_ARRAY);
  glEnableClientState(GL_NORMAL_ARRAY);
  glEnableClientState(GL_INDEX_ARRAY);
  glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBOID);
  glDrawRangeElements(....);

The problem is that GL_INDEX_ARRAY is misunderstood. GL_INDEX_ARRAY has nothing to do with the indices for your glDrawRangeElements call; it is for color index arrays. A modern OpenGL program should not use color index arrays, so do not use glIndexPointer. If you need colors, use the color array, filled with RGBA data:

  glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(vertex_format), X);
  glEnableClientState(GL_COLOR_ARRAY);
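
For reference, the byte offsets in the calls above (0, 12, 20, and the X passed to glColorPointer) assume a hypothetical interleaved layout along these lines:

  struct vertex_format
  {
      float x, y, z;              /* offset 0  : position           */
      float s, t;                 /* offset 12 : texture coordinate */
      float nx, ny, nz;           /* offset 20 : normal             */
      unsigned char r, g, b, a;   /* offset 32 : color (where the X above would point) */
  };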
