Common Mistakes
This is a list of common mistakes made by OpenGL programmers.
Texture Upload
You create a texture and upload the pixels with glTexImage2D (or glTexImage1D, glTexImage3D), but there seem to be diagonal lines going through the image, or your program crashes. This happens because the scanline of your pixel array is not a multiple of 4 bytes. The scanline length is width * bytes-per-pixel. By default, GL_UNPACK_ALIGNMENT is 4, meaning GL assumes each scanline begins on a 4-byte boundary; call glPixelStorei(GL_UNPACK_ALIGNMENT, 1) if your scanline is not a multiple of 4.
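For example, here is a minimal sketch for uploading tightly packed 24-bit RGB data whose scanline (width * 3 bytes) may not be 4-byte aligned; pixels, width, and height are assumed to already exist in your code:

glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // scanlines are byte-aligned
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4); // restore the default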
Texture Precision
You call glTexImage2D(GL_TEXTURE_2D, 0, X, width, height, 0, format, type, pixels) and you set X to 1, 2, 3, or 4. You shouldn't set X to a plain number: although your code will work, it is only a legacy component count, not a real internal format. Set it to GL_RGBA8 or another sized internal format. The GL documents have a table of valid values.
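Side by side (width, height, and pixels assumed to exist):

glTexImage2D(GL_TEXTURE_2D, 0, 4, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);        // legacy: "4 components", precision unspecified
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels); // explicit sized internal format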
Creating a Texture
What's wrong with this code?
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
The texture won't work because it is incomplete. The default GL_TEXTURE_MIN_FILTER state is GL_NEAREST_MIPMAP_LINEAR, so GL will consider the texture incomplete as long as you don't create the mipmaps.
This is better because it sets up some of the important texture object states:
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
If you want mipmaps:
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE); // automatic mipmap generation (GL 1.4)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
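GL_GENERATE_MIPMAP is the pre-GL 3.0 mechanism. As an alternative sketch (assuming GL 3.0, or the EXT_framebuffer_object extension for the EXT-suffixed name), you can generate the mipmaps explicitly after the upload:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D); // or glGenerateMipmapEXT(GL_TEXTURE_2D)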
Render To Texture
If you want to render to a texture via the GL_EXT_framebuffer_object extension, quite a few people make the same mistake explained above for "Creating a Texture": they leave glTexParameteri in the default states yet never define the mipmaps. If you want mipmaps in this case, once the texture is created (glTexImage2D(....., NULL)), call glGenerateMipmapEXT(GL_TEXTURE_2D) (or glGenerateMipmapEXT(GL_TEXTURE_3D) or glGenerateMipmapEXT(GL_TEXTURE_CUBE_MAP)) so that all the mipmap levels are allocated.
If you don't, checking the framebuffer status will report GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT_EXT.
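If you don't need mipmaps at all, set the minification filter so the texture is complete without them. A sketch, assuming EXT_framebuffer_object is available; colorTexID, fboID, width, and height are hypothetical names:

GLuint colorTexID; // hypothetical
glGenTextures(1, &colorTexID);
glBindTexture(GL_TEXTURE_2D, colorTexID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // complete without mipmaps
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboID);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, colorTexID, 0);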
Depth Testing Doesn't Work
You probably did not ask for a depth buffer. If you are using GLUT, call glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH | GLUT_STENCIL); GLUT_DEPTH asks for a depth buffer. Be sure to enable depth testing with glEnable(GL_DEPTH_TEST) and set a compare function such as glDepthFunc(GL_LEQUAL).
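A minimal GLUT setup sketch (argc and argv come from main; the window title is arbitrary):

glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH | GLUT_STENCIL);
glutCreateWindow("main window");
glEnable(GL_DEPTH_TEST); // requires a current context, so do this after window creation
glDepthFunc(GL_LEQUAL);

Also remember to clear the depth buffer along with the color buffer each frame: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT).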
No Alpha in the Framebuffer
Be sure you create a double buffered context and make sure you ask for an alpha component. With GLUT, you can call glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_ALPHA | GLUT_DEPTH | GLUT_STENCIL), in which GLUT_ALPHA asks for an alpha component.
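To verify that you actually received alpha bitplanes, you can query the framebuffer once the context is current (a sketch):

GLint alphaBits = 0;
glGetIntegerv(GL_ALPHA_BITS, &alphaBits); // 0 means the framebuffer has no destination alpha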
glFinish and glFlush
Use glFlush only if you are rendering directly to your window, i.e. to the front buffer of a single-buffered context. It is better to have a double buffered window, but if you have a case where you want to render to the window directly, then go ahead.
A lot of tutorial websites show this:
glFlush();
SwapBuffers();
Never call glFlush before calling SwapBuffers. The SwapBuffers call takes care of flushing and command processing.
What does glFlush do? It tells the driver to send all pending commands to the GPU immediately. This can actually reduce performance.
What does glFinish do? It tells the driver to send all pending commands to the GPU immediately and waits until all the commands have been processed by the GPU. This can take a lot of time.
A modern OpenGL program should never use glFlush or glFinish.
Certain benchmark software might use glFinish.
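A typical double-buffered display callback therefore looks like this sketch (GLUT is assumed; glutSwapBuffers does any flushing needed):

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... draw the scene ...
    glutSwapBuffers(); // no glFlush or glFinish before this
}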
glDrawPixels
For good performance, use a format that is directly supported by the GPU, one that lets the driver basically do a memcpy to the GPU. Most graphics cards support GL_BGRA. Example:
glDrawPixels(width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
However, it is recommended that you use a texture instead and just update the texture with glTexSubImage2D.
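A sketch of the texture-based alternative, assuming textureID was created as shown earlier and pixels holds a full frame of GL_BGRA data:

glBindTexture(GL_TEXTURE_2D, textureID);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
// then draw a textured quad covering the target region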
glEnableClientState(GL_INDEX_ARRAY)
What's wrong with this code?
glBindBuffer(GL_ARRAY_BUFFER, VBOID);
glVertexPointer(3, GL_FLOAT, sizeof(vertex_format), (const GLvoid*)0);
glTexCoordPointer(2, GL_FLOAT, sizeof(vertex_format), (const GLvoid*)12);
glNormalPointer(GL_FLOAT, sizeof(vertex_format), (const GLvoid*)20);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_INDEX_ARRAY);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBOID);
glDrawRangeElements(....);
The problem is that GL_INDEX_ARRAY is misunderstood here. GL_INDEX_ARRAY has nothing to do with the indices for your glDrawRangeElements; it is for color-index arrays. A modern OpenGL program should not use color-index arrays, so do not use glIndexPointer. If you need colors, use the color array, filled with RGBA data:
glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(vertex_format), X);
glEnableClientState(GL_COLOR_ARRAY);
glInterleavedArrays
This function should not be used by modern GL programs. If you want interleaved arrays, set them up with the corresponding gl*Pointer calls and an explicit stride. Example:
struct MyVertex
{
    float x, y, z;    // Vertex
    float nx, ny, nz; // Normal
    float s0, t0;     // Texcoord0
    float s1, t1;     // Texcoord1
};
//-----------------
// offset is the byte address of the first vertex (e.g. (char*)0 when reading from a bound VBO)
glVertexPointer(3, GL_FLOAT, sizeof(MyVertex), offset);
glNormalPointer(GL_FLOAT, sizeof(MyVertex), offset + sizeof(float) * 3);
glTexCoordPointer(2, GL_FLOAT, sizeof(MyVertex), offset + sizeof(float) * 6);
glClientActiveTexture(GL_TEXTURE1);
glTexCoordPointer(2, GL_FLOAT, sizeof(MyVertex), offset + sizeof(float) * 8);
Unsupported formats #1
glLoadMatrixd, glRotated, and any other function that deals with the double type. Most GPUs don't support GL_DOUBLE (double), so the driver will convert the data to GL_FLOAT (float) before sending it to the GPU. If you put GL_DOUBLE data in a VBO, the performance might even be much worse than immediate mode (immediate mode means glBegin, glVertex, glEnd). GL doesn't offer a way to query what the GPU prefers.
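If your math is done in doubles, a sketch of converting on the CPU before handing the matrix to GL (m is a hypothetical column-major double[16]):

float mf[16];
int i;
for (i = 0; i < 16; ++i)
    mf[i] = (float)m[i];
glLoadMatrixf(mf); // instead of glLoadMatrixd(m)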
Unsupported formats #2
glColorPointer(3, GL_UNSIGNED_BYTE, sizeof(vertex_format), X);
The problem is that most GPUs can't handle 3-byte attributes; they prefer sizes that are a multiple of 4 bytes, so you should add the alpha component. The same can be said for glColor3ub and the other "3"-component color functions. It's possible that a "3"-component float attribute is fine for your GPU.
You need to consult the IHV's documents, or perhaps do benchmarking on your own, because GL doesn't offer a way to query what the GPU prefers.
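A sketch of a padded layout (the struct is hypothetical, with the VBO bound as in the earlier examples):

struct PaddedVertex
{
    float x, y, z;            // position
    unsigned char r, g, b, a; // set a to 255 if unused; keeps the color 4 bytes
};

glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(struct PaddedVertex), (const GLvoid*)(sizeof(float) * 3));
glEnableClientState(GL_COLOR_ARRAY);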
Unsupported formats #3
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, pixels);
Although plenty of image formats like bmp, png, and jpg are by default saved as 24 bit, which saves disk space, this is not what the GPU prefers. GPUs prefer pixel sizes that are a multiple of 4 bytes. The driver will convert your data to GL_RGBA8 and set alpha to 255. GL doesn't offer a way to query what the GPU prefers.
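A sketch of doing the expansion yourself so the upload is a straight copy; src and dst are hypothetical unsigned char buffers of width * height * 3 and width * height * 4 bytes:

int i;
for (i = 0; i < width * height; ++i)
{
    dst[i * 4 + 0] = src[i * 3 + 0]; // B
    dst[i * 4 + 1] = src[i * 3 + 1]; // G
    dst[i * 4 + 2] = src[i * 3 + 2]; // R
    dst[i * 4 + 3] = 255;            // A
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, dst);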