Common Mistakes
Quite a few tutorial websites repeat the same mistakes, and those mistakes are then copied and pasted by people learning OpenGL.
This page has been created so that newcomers understand GL programming a little better instead of working by trial and error.
There are also other articles explaining common mistakes:
- Common Mistakes in GLSL
- Unexpected Results you can get when using OpenGL
- Mistakes related to measuring Performance
- Common Mistakes when using deprecated functionality.
Extensions and OpenGL Versions
One common mistake is to check for the presence of an extension but then call the corresponding core functions anyway. The correct behavior is to check for the presence of the extension if you want to use the extension API, and check the GL version if you want to use the core API. In the case of a core extension, you should check for both the version and the presence of the extension; if either is there, you can use the functionality.
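For illustration, here is a minimal sketch, assuming a GL 3.0+ context (where GL_MAJOR_VERSION, GL_NUM_EXTENSIONS and glGetStringi are available) and using GL_ARB_texture_storage purely as an example of a core extension (core since GL 4.2):
#include <string.h>
//Query the context version and scan the extension list.
GLint major = 0, minor = 0, numExtensions = 0;
glGetIntegerv(GL_MAJOR_VERSION, &major);
glGetIntegerv(GL_MINOR_VERSION, &minor);
glGetIntegerv(GL_NUM_EXTENSIONS, &numExtensions);
int hasTextureStorageExt = 0;
for(GLint i = 0; i < numExtensions; ++i)
{
    const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, i);
    if(strcmp(ext, "GL_ARB_texture_storage") == 0)
        hasTextureStorageExt = 1;
}
//GL_ARB_texture_storage is a core extension, so either test is sufficient.
int canUseTexStorage = (major > 4 || (major == 4 && minor >= 2)) || hasTextureStorageExt;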
The Object Oriented Language Problem
In an object-oriented language like C++, it is often useful to have a class that wraps an OpenGL object. For example, one might have a texture object that has a constructor and a destructor like the following:
MyTexture::MyTexture(const char *pfilePath)
{
    textureID = 0; //Initialize first, so the destructor is safe even if loading fails
    if(LoadFile(pfilePath) == ERROR)
        return;
    glGenTextures(1, &textureID);
    //More GL code...
}
MyTexture::~MyTexture()
{
    if(textureID)
        glDeleteTextures(1, &textureID);
}
There is a large pitfall with doing this. OpenGL functions do not work unless an OpenGL context has been created and is active within that thread. Thus, glGenTextures will do nothing before context creation, and glDeleteTextures will do nothing after context destruction. The latter problem is not a significant concern since OpenGL contexts clean up after themselves, but the former is a problem.
This problem usually manifests itself when someone creates a texture object at global scope. There are several potential solutions:
- Do not use constructors/destructors to initialize/destroy OpenGL objects. Instead, use member functions of these classes for these purposes. This violates RAII principles, so this is not the best course of action.
- Have your OpenGL object constructors throw an exception if a context has not been created yet. This requires an addition to your context creation functionality that tells your code when a context has been created and is active (a sketch of this approach follows the list).
- Create a class that owns all other OpenGL related objects. This class should also be responsible for creating the context in its constructor.
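A minimal sketch of the second option; ContextIsCurrent() is a hypothetical flag maintained by your own context-creation code, not an OpenGL call:
#include <stdexcept>
class MyTexture
{
public:
    explicit MyTexture(const char *pfilePath)
        : textureID(0)
    {
        if(!ContextIsCurrent()) //hypothetical application-side check
            throw std::runtime_error("MyTexture constructed before an OpenGL context was current");
        glGenTextures(1, &textureID);
        //Load pfilePath and upload the pixels here...
    }
    ~MyTexture()
    {
        if(textureID)
            glDeleteTextures(1, &textureID);
    }
private:
    GLuint textureID;
};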
OOP and performance
There is another issue when using OpenGL with an object-oriented language like C++. Consider the following function:
void MyTexture::TexParameter(GLenum pname, GLint param)
{
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexParameteri(GL_TEXTURE_2D, pname, param);
}
The problem is that the binding of the texture is hidden from the user of the class. Doing a lot of binding operations is expensive, even on modern computers. Since the operation is hidden, the user (which might be the same person who created the class) might not pay attention to it.
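One possible mitigation, sketched below: cache the currently bound texture so the hidden bind is skipped when it is redundant. This assumes a single texture unit and a single thread, and it is only one of several reasonable designs (another is simply to make the user bind explicitly):
void MyTexture::TexParameter(GLenum pname, GLint param)
{
    static GLuint currentlyBound = 0; //assumption: one texture unit, one thread
    if(currentlyBound != textureID)
    {
        glBindTexture(GL_TEXTURE_2D, textureID);
        currentlyBound = textureID;
    }
    glTexParameteri(GL_TEXTURE_2D, pname, param);
}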
Texture upload and pixel reads
You create a texture and upload the pixels with glTexImage2D (or glTexImage1D, glTexImage3D). However, the program crashes on upload, or diagonal lines appear in the resulting image. This happens because the row alignment of your pixel array is not a multiple of 4; that is, the number of bytes in each row of your pixel data is not a multiple of 4. This typically happens to users who load an image that is in RGB or BGR format (in other words, a 24 bpp image).
Example: your image width is 401 and height is 500. The height doesn't matter; what matters is the width. If we do the math, 401 pixels x 3 bytes = 1203 bytes per row. Is 1203 divisible by 4? No, so the image's row alignment is not 4. Is 1203 divisible by 1? Yes, so the alignment is 1 and you should call glPixelStorei(GL_UNPACK_ALIGNMENT, 1). The default is glPixelStorei(GL_UNPACK_ALIGNMENT, 4). Unpacking means sending data from the client side (the client is you) to OpenGL.
And if you are interested: most GPUs like chunks of 4 bytes. In other words, RGBA or BGRA is preferred; RGB and BGR are considered bizarre since most GPUs, most CPUs and most other kinds of chips don't handle 24-bit quantities natively. This means the driver converts your RGB or BGR data to what the GPU prefers, which is typically BGRA.
Similarly, if you read a buffer with glReadPixels, you can run into the same problem. There is a GL_PACK_ALIGNMENT just like the GL_UNPACK_ALIGNMENT. The default GL_PACK_ALIGNMENT is 4, which means each horizontal line must be a multiple of 4 bytes in size. If you read the buffer in a format such as BGRA or RGBA you won't have any problems, since each line is already a multiple of 4 bytes. If you read it in a format such as BGR or RGB then you risk running into this problem.
The GL_PACK_ALIGNMENT can only be 1, 2, 4, or 8, so an alignment of 3 is not allowed. You can either change the GL_PACK_ALIGNMENT to 1, or pad your buffer so that each line, even with only 3 bytes per pixel, is a multiple of 4 bytes.
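Putting it together, a short sketch of both cases, using the example numbers from above (pixels and readbackPixels are assumed to point at your own buffers):
//Uploading a tightly packed 24 bpp image whose rows are not a multiple of 4 bytes
//(401 * 3 = 1203 bytes per row).
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, 401, 500, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
//Reading back in a 3-component format: loosen the pack alignment the same way.
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadPixels(0, 0, 401, 500, GL_BGR, GL_UNSIGNED_BYTE, readbackPixels);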
Image precision
You can (but it is not advisable to do so) call glTexImage2D(GL_TEXTURE_2D, 0, X, width, height, 0, format, type, pixels) with X set to 1, 2, 3, or 4. The X refers to the number of components (GL_RED would be 1, GL_RG would be 2, GL_RGB would be 3, GL_RGBA would be 4).
It is preferred to actually give a real image format, preferably one with a specific internal precision.
If the OpenGL implementation does not support the particular format and precision you choose, the driver will internally convert it into something it does support. If you want to find the actual image format used, call this:
GLint value;
glBindTexture(GL_TEXTURE_2D, mytextureID);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &value); //GL_TEXTURE_COMPONENTS is the legacy name
OpenGL versions 3.x and above have a set of required image formats that all conformant implementations must implement.
We should also state that it is common to see the following on tutorial websites:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
Although GL will accept GL_RGB, it is up to the driver to decide an appropriate precision. We recommend that you be specific and write GL_RGB8:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
which means you want the driver to actually store it in the R8G8B8 format. We should also state that most GPUs do not natively support 24 bpp formats like GL_RGB8; the driver will up-convert to GL_RGBA8 and set alpha to 255. We should also state that on some platforms, such as Windows, BGRA is preferred:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
As you can see, GL_RGBA8 is still used for the internal format; GL_BGRA and GL_UNSIGNED_BYTE describe the data in your pixels array. The driver will likely send this directly to the video card. Benchmarking shows that on Windows, with both nVidia and ATI/AMD, this is the optimal format.
Depth Buffer Precision
When you select a pixelformat for your window and ask for a depth buffer, the depth buffer is typically stored as a 16-bit, 24-bit, or 32-bit integer.
It seems to be a common misconception that the depth buffer is stored as floating point, but this is false. The depth values written to the buffer typically range from 0.0 to 1.0; each value is converted to an integer and finally written to the depth buffer by the GPU.
The conversion goes roughly like this:
if(depth_buffer_precision == 16)
{
    writeToDepthBuffer = depthAsFloat * 65535;      // 2^16 - 1
}
else if(depth_buffer_precision == 24)
{
    writeToDepthBuffer = depthAsFloat * 16777215;   // 2^24 - 1
}
else if(depth_buffer_precision == 32)
{
    writeToDepthBuffer = depthAsFloat * 4294967295; // 2^32 - 1
}
Furthermore, a 24-bit depth buffer is an undesirable format on its own. GPUs prefer chunks of 32 bits, so the depth buffer will be padded with 8 unused bits. This is called the D24X8 format.
If you ask for a stencil buffer, ask for 8 bits, so that the GPU will allocate what is called a D24S8 buffer: a 24-bit integer for depth and an 8-bit integer for stencil.
Now that the misconception about depth buffers being floating point is resolved, what is wrong with this call?
glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, mypixels);
The GL driver will copy the depth buffer from the graphics card and then use the CPU to convert it to floating-point values. This is better:
if(depth_buffer_precision == 16)
{
    GLushort mypixels[width*height];
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, mypixels);
}
else if(depth_buffer_precision == 24)
{
    GLuint mypixels[width*height];   //There is no 24-bit variable, so we'll have to settle for 32 bit
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, mypixels); //GL will upconvert to 32 bit
}
else if(depth_buffer_precision == 32)
{
    GLuint mypixels[width*height];
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, mypixels);
}
What if the depth buffer format is D24X8 or D24S8? The driver will have to do extra work to ignore the extra 8 bits. This is one reason the GL_EXT_packed_depth_stencil extension was created; you can review it at http://www.opengl.org/registry
The extension became core in GL 3.0, so if you are targeting GL 3.0 or above you can drop the _EXT suffix.
A call to glReadPixels would look like this:
GLuint mypixels[width*height];
glReadPixels(0, 0, width, height, GL_DEPTH_STENCIL_EXT, GL_UNSIGNED_INT_24_8_EXT, mypixels);
glReadPixels(0, 0, width, height, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, mypixels); //GL 3.0
As for depth textures, the 3rd parameter for glTexImage2D or glTexImage3D controls the internal precision. You should set it to GL_DEPTH_COMPONENT16 or GL_DEPTH_COMPONENT24 or GL_DEPTH_COMPONENT32.
With the GL_EXT_packed_depth_stencil extension you can even create a D24S8 texture.
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8_EXT, width, height, 0, GL_DEPTH_STENCIL_EXT, GL_UNSIGNED_INT_24_8_EXT, mypixels);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, mypixels); //GL 3.0
If you are doing RTT (render to texture), you can certainly render to that depth texture and also do stencil operations, and you can read back the results with glReadPixels or glGetTexImage.
//While the FBO is bound, let's call glReadPixels
glReadPixels(0, 0, width, height, GL_DEPTH_STENCIL_EXT, GL_UNSIGNED_INT_24_8_EXT, mypixels);
glReadPixels(0, 0, width, height, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, mypixels); //GL 3.0
or
//While the FBO is unbound, let's call glGetTexImage
glBindTexture(GL_TEXTURE_2D, mytextureID);
glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_STENCIL_EXT, GL_UNSIGNED_INT_24_8_EXT, mypixels);
glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, mypixels); //GL 3.0
Creating a Texture
What's wrong with this code?
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
The texture won't work because it is incomplete. The default GL_TEXTURE_MIN_FILTER state is GL_NEAREST_MIPMAP_LINEAR so GL will consider the texture incomplete as long as you don't create the mipmaps.
This code is better because it sets up some of the important texture object states:
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
If you want to use mipmaps with this texture, you should either generate them automatically (see below) or upload the mipmap levels individually.
Automatic mipmap generation
Mipmaps of a texture can be automatically generated with the glGenerateMipmap function. OpenGL 3.0 or greater is required for this function (or the extension GL_ARB_framebuffer_object). The function works quite simply; when you call it for a texture, mipmaps are generated for that texture:
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D); //Generate mipmaps now!!!
Legacy Generation
OpenGL 1.4 is required for support for automatic mipmap generation. GL_GENERATE_MIPMAP is part of the texture object state and it is a flag (GL_TRUE or GL_FALSE). If it is set to GL_TRUE, then whenever texture level 0 is updated, the mipmaps will all be regenerated.
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
In GL 3.0, GL_GENERATE_MIPMAP is deprecated, and in 3.1 and above, it was removed. So for those versions, you must use glGenerateMipmap.
gluBuild2DMipmaps
Never use this. Use either GL_GENERATE_MIPMAP (requires GL 1.4) or the glGenerateMipmap function (requires GL 3.0).
glGetError
Why should you check for errors? Why should you call glGetError()? Consider the following code:
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE); //Requires GL 1.4. Deprecated in GL 3.0 and above.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
The code doesn't call glGetError(). If it did, glGetError() would return GL_INVALID_ENUM. If you were to place a glGetError() call after each function call, you would notice that the error is raised at glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR): the magnification filter only accepts GL_NEAREST or GL_LINEAR, never a mipmap filter.
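A minimal sketch of such checking during development; CheckGLError is a hypothetical helper, not part of OpenGL:
#include <stdio.h>
void CheckGLError(const char *where)
{
    GLenum err;
    //glGetError reports one error at a time, so drain the error queue.
    while((err = glGetError()) != GL_NO_ERROR)
        fprintf(stderr, "GL error 0x%04X after %s\n", err, where);
}
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
CheckGLError("setting GL_TEXTURE_MAG_FILTER"); //reports GL_INVALID_ENUM here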
Checking For Errors When You Compile Your Shader
Why should you check for errors when you compile and link your shaders? We'll let you decide for yourself. Here is a piece of code that doesn't check for errors:
vertexProgram = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vertexProgram, 1, &vertexSource, NULL);
glCompileShader(vertexProgram);
fragmentProgram = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fragmentProgram, 1, &fragmentSource, NULL);
glCompileShader(fragmentProgram);
program = glCreateProgram();
glAttachShader(program, vertexProgram);
glAttachShader(program, fragmentProgram);
glLinkProgram(program);
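Here is a sketch of the checks the code above omits; glGetShaderiv/glGetShaderInfoLog and glGetProgramiv/glGetProgramInfoLog are the standard query functions:
#include <stdio.h>
GLint status = GL_FALSE;
char log[1024];
glGetShaderiv(vertexProgram, GL_COMPILE_STATUS, &status);
if(status != GL_TRUE)
{
    glGetShaderInfoLog(vertexProgram, sizeof(log), NULL, log);
    fprintf(stderr, "Vertex shader compile failed:\n%s\n", log);
}
//Do the same for fragmentProgram, then check the link:
glGetProgramiv(program, GL_LINK_STATUS, &status);
if(status != GL_TRUE)
{
    glGetProgramInfoLog(program, sizeof(log), NULL, log);
    fprintf(stderr, "Program link failed:\n%s\n", log);
}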
Creating a Cubemap Texture
It's best to set the wrap mode to GL_CLAMP_TO_EDGE, not any of the other wrap modes. Don't forget to define all 6 faces, or the texture is considered incomplete. Don't forget to set up GL_TEXTURE_WRAP_R, because cubemaps require 3D texture coordinates.
Example:
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_CUBE_MAP, textureID);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
//Define all 6 faces
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face0);
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_X, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face1);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Y, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face2);
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face3);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Z, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face4);
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face5);
If you want auto-generated mipmaps, you can use any of the aforementioned mechanisms. Be aware that OpenGL does not blend across the cube's faces when generating mipmaps for a cubemap, which leaves visible seams at lower mip levels.
Texture edge color problem
If you want to clamp your texture fetches, use GL_CLAMP_TO_EDGE, not GL_CLAMP. GL_CLAMP_TO_EDGE means that the colors outside of the texture range are the color of the nearest texel in the texture, whereas GL_CLAMP means that the colors outside of the texture range are the border color. This is usually not what you want, and it can lead to black borders around your texture (since the default border color is black).
Updating A Texture
If you don't want to use render to texture, you will just be refreshing the texels, either from main memory or from the framebuffer.
To change texels in an already existing 2d texture, use glTexSubImage2D:
glBindTexture(GL_TEXTURE_2D, textureID); //A texture you have already created with glTexImage2D
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
Notice that glTexSubImage2D was used and not glTexImage2D. glTexImage2D respecifies the entire texture, changing its size, deleting the previous data, and so forth. glTexSubImage2D only modifies pixel data within the texture.
glTexSubImage2D can be used to update all the texels, or simply a portion of them.
To copy texels from the framebuffer, use glCopyTexSubImage2D.
glBindTexture(GL_TEXTURE_2D, textureID); //A texture you have already created with glTexImage2D
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height); //Copy back buffer to texture
Just like the case where you should use glTexSubImage2D instead of glTexImage2D, use glCopyTexSubImage2D instead of glCopyTexImage2D.
Render To Texture
To render directly to a texture, without doing a copy as above, use Framebuffer Objects.
Depth Testing Doesn't Work
You probably did not ask for a depth buffer. If you are using GLUT, call glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH | GLUT_STENCIL); GLUT_DEPTH asks for a depth buffer. Be sure to enable depth testing with glEnable(GL_DEPTH_TEST) and set a depth function such as glDepthFunc(GL_LEQUAL).
No Alpha in the Framebuffer
Be sure you create a double-buffered context and ask for an alpha component. With GLUT, you can call glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_ALPHA | GLUT_DEPTH | GLUT_STENCIL), in which GLUT_ALPHA asks for an alpha component.
glFinish and glFlush
Use glFlush if you are rendering directly to your window (i.e. a single-buffered framebuffer). It is better to have a double-buffered window, but if you have a case where you want to render to the window directly, then go ahead.
There are a lot of tutorial websites that suggest you do this:
glFlush();
SwapBuffers();
This is unnecessary. The SwapBuffers call takes care of flushing and command processing.
What does glFlush do? It tells the driver to send all pending commands to the GPU immediately.
What does glFinish do? It tells the driver to send all pending commands to the GPU immediately and waits until all the commands have completed. This can take a lot of time.
The OpenGL specification never requires that you call glFlush or glFinish for correctness; all operations will execute in the order in which they were given. This even applies to accessing Buffer Objects: if a buffer object is being updated by OpenGL, the specification requires that OpenGL automatically halt until the update is complete. You do not need to manually call glFinish before accessing the buffer.
Therefore, you should only use glFinish when you are doing something that the specification specifically states will not be synchronous.
glDrawPixels
For good performance, use a format that is directly supported by the GPU, so that the driver can do what is basically a memcpy to the GPU. Most graphics cards support GL_BGRA. Example:
glDrawPixels(width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
However, it is recommended that you use a texture instead and just update the texture with glTexSubImage2D.
GL_DOUBLE
This concerns glLoadMatrixd, glRotated and any other function that has to do with the double type. Most GPUs don't support GL_DOUBLE (double), so the driver will convert the data to GL_FLOAT (float) before sending it to the GPU. If you put GL_DOUBLE data in a VBO, the performance might even be much worse than immediate mode (immediate mode means glBegin, glVertex, glEnd). GL doesn't offer any way to ask what the GPU prefers.
Note that GL_DOUBLE may be useful in future hardware.
Unsupported formats #3
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, pixels);
Although plenty of image formats like bmp, png and jpg are saved as 24 bit by default, which saves disk space, this is not what the GPU prefers. GPUs prefer chunks of 4 bytes. The driver will convert your data to GL_RGBA8 and set alpha to 255. GL doesn't offer any way to ask what the GPU prefers.
Unsupported formats #4
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
The above is almost OK. The problem is the GL_RGBA for the 7th parameter (the pixel transfer format). On certain platforms, the GPU prefers that red and blue be swapped (GL_BGRA).
If you supply GL_RGBA, the driver will do the swapping for you, which is slow. If you use GL_BGRA, the call to glTexImage2D can be something like a memcpy. Keep in mind that the 3rd parameter must remain GL_RGBA8: it defines the texture's image format, while the last three parameters describe how your pixel data is stored. The image format doesn't define the channel order stored by the texture, so the GPU is still allowed to store it internally as BGRA.
On which platforms is BGRA8 supported? Making a list would be too long but one example is Microsoft Windows.
Swap Buffers
A modern OpenGL program should always use double buffering. A modern OpenGL program should also have a depth buffer.
Render sequence should be like this:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
RenderScene();
SwapBuffers(hdc); //For Windows
In some programs, the programmer does not want to re-render the scene because the scene is heavy. They might simply call SwapBuffers (on Windows) without clearing and redrawing. This is risky, since it might give unreliable results across different GPU/driver combinations.
There are 2 options:
- You can set the rendering context correctly so that SwapBuffers copies the back buffer rather than switches it. In Windows, this means setting the dwFlags in the PIXELFORMATDESCRIPTOR to PFD_SWAP_COPY.
- Render to a framebuffer object and blit it to the back buffer, then call SwapBuffers (a sketch of this follows the list).
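A sketch of the second option, assuming a GL 3.0+ context; sceneFBO, width and height are placeholders for your own render target:
//Blit the previously rendered frame from the FBO to the default framebuffer.
glBindFramebuffer(GL_READ_FRAMEBUFFER, sceneFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0); //0 = default framebuffer (back buffer)
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
SwapBuffers(hdc); //For Windows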
The Pixel Ownership Problem
If your window is covered, partially covered, or positioned outside the desktop area, the GPU might not render to those portions.
This is explained in the OpenGL specification. It is called undefined behavior since on some platforms/GPU/driver combinations it will work just fine and on others it will not.
The solution is to make an offscreen buffer (FBO) and render to the FBO.
Selection and Picking and Feedback Mode
A modern OpenGL program should not use the selection buffer or feedback mode. These are not 3D graphics rendering features, yet they have been in GL since version 1.0. Selection and feedback run in software (on the CPU). On some implementations, when used together with VBOs, performance has been reported to be lousy.
A modern OpenGL program should do color picking (render each object with a unique color and use glReadPixels to find out which object the mouse is on) or do the picking with a 3rd-party mathematics library.
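A minimal color-picking sketch; EncodeIdAsColor, DecodeColorAsId and DrawObjectWithFlatColor are hypothetical helpers you would write yourself:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
for(int i = 0; i < objectCount; ++i)
    DrawObjectWithFlatColor(objects[i], EncodeIdAsColor(i)); //one unique color per object
unsigned char pixel[4];
//Remember that GL's y-axis starts at the bottom of the window.
glReadPixels(mouseX, windowHeight - mouseY, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
int pickedObject = DecodeColorAsId(pixel);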
GL_POINTS and GL_LINES
This will be about the problems related to GL_POINTS and GL_LINES.
Users notice that on some implementations points or lines are rendered a little differently than on others. This is because the GL spec allows some flexibility. On some implementations, when you call:
glPointSize(5.0);
glHint(GL_POINT_SMOOTH_HINT, GL_NICEST);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
glEnable(GL_POINT_SMOOTH);
RenderMyPoints();
the points will look nice and round; on other GPUs/drivers they will look like squares.
On some implementations, when you call glEnable(GL_POINT_SMOOTH) or glEnable(GL_LINE_SMOOTH) and you use shaders at the same time, your rendering speed can drop to something like 0.1 FPS because the driver falls back to software rendering. This has been reported on some AMD/ATI GPUs/drivers.
glEnable(GL_POINT_SMOOTH), glEnable(GL_LINE_SMOOTH) and glEnable(GL_POLYGON_SMOOTH)
We don't recommend these. They are the old way of smoothing points, lines and polygons. They aren't handled in hardware by mainstream video cards and using them can cause a slowdown.
Also, if you use glEnable(GL_POLYGON_SMOOTH): since it generates a few extra pixels around the polygon in question, using transparency (blending) shows double blending along shared edges. It appears as if the polygons have a wireframe around them.
You should instead use anti-aliasing of the whole framebuffer, also called Full Scene Anti-Aliasing (FSAA).
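For example, with GLUT you can request a multisampled framebuffer instead of enabling the smoothing hints. This is just one way to get FSAA; the exact mechanism depends on how you create your context:
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH | GLUT_MULTISAMPLE);
//...create the window, then:
glEnable(GL_MULTISAMPLE); //usually enabled by default when a multisample buffer exists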
Color Index, The imaging subset
Section 3.6.2 of the GL specification covers the imaging subset. glColorTable and related operations are part of this subset. They are typically not supported in hardware by common GPUs and are emulated in software. It is recommended that you avoid them.
If you find that your texture memory consumption is too high, use texture compression. If you really want paletted (color-indexed) textures, you can implement them yourself with a texture and a shader (see Paletted textures below).
Bitfield enumerators
Some OpenGL enumerators represent bits in a particular bitfield. All of these end in _BIT (before any extension suffix). Take a look at this example:
glEnable(GL_BLEND | GL_DRAW_BUFFER); // invalid
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT); // valid
You can see the first line is wrong: neither of these enumerators ends in _BIT, so they are not bitfields and should not be ORed together. By contrast, the second line is perfectly fine; all of those enumerators end in _BIT, so combining them makes sense.
Triple Buffering
You cannot control whether a driver does triple buffering. You could try to implement it yourself using an FBO, but if the driver is already doing triple buffering, your code will only turn it into quadruple buffering, which is usually overkill.
Paletted textures
Support for the GL_EXT_paletted_texture extension has been dropped by the major GL vendors. If you really need paletted textures on new hardware, you may use shaders to achieve that effect.
Shader example:
//Fragment shader
#version 110
uniform sampler2D ColorTable;     //256 x 1 pixels
uniform sampler2D MyIndexTexture;
varying vec2 TexCoord0;

void main()
{
    //What color do we want to index?
    vec4 myindex = texture2D(MyIndexTexture, TexCoord0);
    //Do a dependent texture read
    vec4 texel = texture2D(ColorTable, myindex.xy);
    gl_FragColor = texel; //Output the color
}
ColorTable might be in a format of your choice such as GL_RGBA8. ColorTable could be a texture of 256 x 1 pixels in size.
MyIndexTexture can be in a simple one-channel format such as GL_R8 (available in GL 3.0; an integer format like GL_R8UI would require an integer sampler in the shader). MyIndexTexture could be of any dimensions, such as 64 x 32.
We read MyIndexTexture and use the result as a texture coordinate to read ColorTable. If you wish to do palette animation, or simply update the colors in the color table, you can submit new values to ColorTable with glTexSubImage2D. Assuming that the color table has a GL_RGBA8 internal format:
glBindTexture(GL_TEXTURE_2D, myColorTableID);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 256, 1, GL_BGRA, GL_UNSIGNED_BYTE, mypixels);
Notice that the transfer format is GL_BGRA. As explained before, most GPUs prefer the BGRA format; using RGB, BGR or RGBA may result in lower performance.
Texture Unit
When multitexturing was introduced, a query for the number of texture units was introduced as well, which you can get with:
int MaxTextureUnits;
glGetIntegerv(GL_MAX_TEXTURE_UNITS, &MaxTextureUnits);
You should not use the above because it will give a low number on modern GPUs.
Each texture unit has its own texture environment state (glTexEnv), texture matrix, texture coordinate generation (glTexGen), texcoords (glTexCoord), clamp mode, mipmap mode, texture LOD and anisotropy.
Then came the programmable GPU. There aren't fixed-function texture units anymore; today you have texture samplers. These are also called texture image units (TIUs), and you can get their count with:
int MaxTextureImageUnits;
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &MaxTextureImageUnits);
A TIU just holds the state of the texture object bound to it, like the clamping, mipmaps, etc. TIUs are independent of texture coordinates; you can use any texture coordinate to sample any TIU.
Note that each shader stage has its own max texture image unit count. GL_MAX_TEXTURE_IMAGE_UNITS returns the count for fragment shaders only. The number of image units across all shader stages is queried with GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS; this is the limit of the number of textures that can be bound at any one time.
For most modern hardware, the image unit count will be at least 8 for most stages. Vertex shaders used to be limited to 4 textures on older hardware. All 3.x-capable hardware will return at least 16 for all stages.
For texture coordinates, you can get the maximum with:
int MaxTextureCoords;
glGetIntegerv(GL_MAX_TEXTURE_COORDS, &MaxTextureCoords);
For example, on a GPU such as a Radeon 9700 or a GeForce FX you would get 8 texture coordinate sets.
In summary, use GL_MAX_TEXTURE_IMAGE_UNITS and GL_MAX_TEXTURE_COORDS only.
Disable Depth Testing
In some cases, you might want to disable depth testing and yet still have the depth buffer updated while you render your objects. It turns out that if you disable depth testing (glDisable(GL_DEPTH_TEST)), GL also disables writes to the depth buffer. The correct solution is to keep the depth test enabled but tell GL to let every fragment pass with glDepthFunc(GL_ALWAYS). Be careful: in this state, if you render a far-away object last, the depth buffer will contain the values of that far object.
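In code, that looks like this:
glEnable(GL_DEPTH_TEST);   //the test must stay enabled or depth writes are skipped
glDepthFunc(GL_ALWAYS);    //every fragment passes, but the depth buffer is still updated
glDepthMask(GL_TRUE);      //make sure depth writes were not disabled elsewhere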
glGetFloatv glGetBooleanv glGetDoublev glGetIntegerv
You find that these functions are slow.
That's normal. Any function of the glGet form will likely be slow; nVidia and ATI/AMD recommend that you avoid them. The GL driver (and the GPU) is optimized for receiving information, not for sending it back to the application. You can avoid all glGet calls by tracking the needed state yourself.
y-axis
In OpenGL, the y-axis is up.
For example, glReadPixels takes an x and y position. The y-axis is measured from the bottom of the window, with 0 at the bottom and increasing toward the top. This may seem counter-intuitive to those used to their OS having the y-axis inverted (the window's y-axis runs top to bottom, and so do mouse coordinates). The solution for the mouse is simple: windowHeight - mouseY.
For textures, GL considers the y-axis to run bottom to top, the bottom being 0.0 and the top being 1.0. Some people load their bitmap into a GL texture and wonder why it appears inverted on their model. The solution is simple: flip your bitmap vertically, or invert your model's texcoords by computing 1.0 - v.
What about glOrtho? glOrtho is a function from legacy OpenGL. You can swap its bottom and top parameters if you want the y coordinate to be inverted.
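Two small illustrations of dealing with the flipped y-axis; windowWidth, windowHeight and mouseY are assumed to come from your windowing code:
//Convert a window-system mouse position (y grows downward) to GL's convention.
int glMouseY = windowHeight - mouseY;
//Legacy glOrtho with bottom and top swapped, so that y grows downward on screen.
glOrtho(0.0, windowWidth, windowHeight, 0.0, -1.0, 1.0);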
glGenTextures in render function
Some people create a texture in their render function. Don't create resources in your render function; that goes for all the other glGen* calls as well. Don't read model files and create VBOs for them in your render function. Try to allocate resources at the beginning of your program, and release them when your program terminates.
Worse yet, some create textures (or other GL objects) in their render function and never call glDeleteTextures. Every time the render function is called, a new texture is created without releasing the old one!