Common Mistakes
Quite a few websites show the same mistakes, and those mistakes are copied and pasted by people who use their tutorials to learn OpenGL.
This page has been created so that newcomers understand GL programming a little better instead of working by trial and error.
The shading language also has its own page of common mistakes: GLSL : common mistakes.
Extensions and OpenGL Versions
This part can be confusing to some so here is a definition.
An extension is a specification for a GL feature that isn't in the GL core. It is written against some specific GL version, meaning that GL version must be supported at minimum. Usually that would be GL 1.1. Extension specs are at the GL extension registry http://www.opengl.org/registry
For example, glActiveTextureARB and GL_TEXTURE0_ARB are part of the GL_ARB_multitexture extension.
If the extension is good, widely supported, and useful, a new GL version may absorb it into the core of OpenGL. Sometimes this happens with no change to how the feature works, but sometimes the behavior changes during promotion.
When an extension becomes a core feature, the function names lose the postfix: glActiveTextureARB becomes glActiveTexture, GL_TEXTURE0_ARB becomes GL_TEXTURE0.
In some cases, the extension API and the core API equivalents change names entirely. This was the case when GL_ARB_shader_objects (GLSL support) was promoted to the GL 2.0 core.
A common mistake is to check for the presence of an extension but then use the core functions. The correct behavior is to check for the presence of the extension if you want to use the extension API, and to check the GL version if you want to use the core API.
This is complicated even more by the presence of a new form of extension: core extensions. A core extension is an ARB extension that exactly mirrors the functioning of a feature of a higher version of the OpenGL core. These extension functions and enumerators do not have a suffix, just like the core version.
The idea here is to allow the user to use the same code on prior GL versions as on later GL versions where the feature is core. In this case, you should check for both the version and the presence of the extension; if either is there, you can use the functionality.
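As a rough sketch of that check, using vertex array objects (core in GL 3.0, also available as the core extension GL_ARB_vertex_array_object); GetGLMajorVersion and HasExtension are hypothetical helpers you would implement on top of glGetString(GL_VERSION) and glGetString(GL_EXTENSIONS):

//Use the feature if either the GL version or the core extension provides it.
int hasVAO = (GetGLMajorVersion() >= 3) || HasExtension("GL_ARB_vertex_array_object");
if(hasVAO)
{
    GLuint vao;
    glGenVertexArrays(1, &vao); //Same suffix-less entry point in both cases
    glBindVertexArray(vao);
}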
The Object Oriented Language Problem
MyTexture::MyTexture(const char *pfilePath)
{
  if(LoadFile(pfilePath)==ERROR)
    return;
  textureID=0;
  glGenTextures(1, &textureID);
  //More GL code...
}
Let's assume the language used here is C++ or some similar OO language. It may seem like a good idea to "construct" your GL texture in a constructor, but if there is no GL context current when the constructor is called, the GL calls do nothing or crash. What is wrong with the next piece of code?
MyTexture::~MyTexture()
{
  if(textureID)
  {
    glDeleteTextures(1, &textureID);
    textureID=0;
  }
}
Again, if the destructor gets called after you have destroyed the GL context, then you are making a GL call while there is no GL context. You have to move your GL calls to a better location, for example into explicit create/release methods that you only call while a context is current, as in the sketch below.
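A minimal sketch of that approach, reusing the hypothetical LoadFile/ERROR helpers from the example above:

class MyTexture
{
public:
    MyTexture() : textureID(0) {}
    ~MyTexture() {}                    //No GL calls here

    bool Create(const char *pfilePath) //Call this only while a GL context is current
    {
        if(LoadFile(pfilePath) == ERROR)
            return false;
        glGenTextures(1, &textureID);
        //More GL code...
        return true;
    }

    void Release()                     //Call this before the GL context is destroyed
    {
        if(textureID)
        {
            glDeleteTextures(1, &textureID);
            textureID = 0;
        }
    }

private:
    GLuint textureID;
};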
Texture Upload
You create a texture and upload the pixels with glTexImage2D (or glTexImage1D, glTexImage3D), but there seem to be diagonal lines going through the image, or your program crashes. This is because the scanline of your pixel array is not a multiple of 4. A scanline is width * bytes per pixel. By default, glPixelStorei(GL_UNPACK_ALIGNMENT, 4) is in effect; change it to glPixelStorei(GL_UNPACK_ALIGNMENT, 1) if your scanline is not a multiple of 4.
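For example, a minimal sketch for uploading tightly packed 24-bit RGB pixels (width, height and pixels are assumed to come from your image loader):

//Each row is width * 3 bytes, which is not necessarily a multiple of 4,
//so tell GL the rows are tightly packed before uploading.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);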
glReadPixels
Just like the case of "Texture Upload" written in the paragraph above, if you read a buffer with glReadPixels, you might get diagonal lines going through the image. By default, GL_PACK_ALIGNMENT is 4 which means each scanline must be a multiple of 4. If you read the buffer with a format such as GL_BGRA or GL_RGBA you won't have any problems since the scanline is already a multiple of 4. If you read it in a format such as GL_BGR or GL_RGB then you risk running into this problem.
Assume the width is 299 pixels. If we do the math, 299 pixels x 3 bytes = 897 bytes. Divide that by 4 and you get 224.25, so the row is not a multiple of 4. You need to call glPixelStorei(GL_PACK_ALIGNMENT, 1) in this case.
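A minimal sketch, assuming you allocate the destination buffer yourself:

//Rows of GL_BGR data are width * 3 bytes; set the pack alignment to 1
//so GL writes them tightly packed into our buffer.
unsigned char *pixels = (unsigned char*)malloc(width * height * 3);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadPixels(0, 0, width, height, GL_BGR, GL_UNSIGNED_BYTE, pixels);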
Keep in mind that GPUs don't handle unusual formats like a 24-bit color buffer well; they prefer 32-bit chunks.
Also, glReadPixels is capable of doing conversions. For example, on Windows, the backbuffer is often stored in the GL_BGRA format. If you call glReadPixels(x, y, width, height, GL_RED, GL_UNSIGNED_BYTE, pixels), the driver will download the buffer in its native format, convert the data on the CPU, and then memcpy it into your buffer.
The same applies to glReadPixels(x, y, width, height, GL_RGBA, GL_FLOAT, pixels): the driver downloads the buffer in its native format, converts the data on the CPU, and memcpys it into your buffer.
As mentioned, the backbuffer format is most likely GL_BGRA, so the most optimal call is glReadPixels(x, y, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels).
You can determine the format of the backbuffer in various ways: DescribePixelFormat (Windows only), or glGetIntegerv(GL_RED_BITS, &RedBits), glGetIntegerv(GL_GREEN_BITS, &GreenBits), glGetIntegerv(GL_BLUE_BITS, &BlueBits), glGetIntegerv(GL_ALPHA_BITS, &AlphaBits).
Note that glGetIntegerv only reports the bit depths; it doesn't tell you the order of the components (for example BGRA versus RGBA).
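A small sketch of the glGetIntegerv approach (variable names are just examples):

GLint redBits, greenBits, blueBits, alphaBits;
glGetIntegerv(GL_RED_BITS,   &redBits);
glGetIntegerv(GL_GREEN_BITS, &greenBits);
glGetIntegerv(GL_BLUE_BITS,  &blueBits);
glGetIntegerv(GL_ALPHA_BITS, &alphaBits);
//8/8/8/8 suggests a 32-bit backbuffer, but the component order is not reported;
//on Windows it is typically BGRA.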
If you want to read the depth buffer, the same issue comes up. The depth buffer is often stored in the D24S8 format, which is an integer format. We always recommend creating a stencil buffer even if you don't need it. GPUs also support 16-bit depth with 0-bit stencil. They may or may not support 32-bit depth with 0-bit stencil. Other combinations are not supported by today's or yesterday's GPUs.
How do you read a D24S8 depth buffer? There is an extension called GL_EXT_packed_depth_stencil, and modern GPUs support it. http://www.opengl.org/registry/specs/EXT/packed_depth_stencil.txt
Here is how you need to call it: glReadPixels(x, y, width, height, GL_DEPTH_STENCIL_EXT, GL_UNSIGNED_INT_24_8_EXT, pixels)
That extension became core in GL 3.0, where the call becomes glReadPixels(x, y, width, height, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, pixels)
When you call glReadPixels, you don't need to call glFlush or glFinish right after. Calling glReadPixels is like calling glFinish: it waits until all rendering is complete and the buffer has been copied to your array.
Texture Precision
You call glTexImage2D(GL_TEXTURE_2D, 0, X, width, height, 0, format, type, pixels) and you set X to 1, 2, 3, 4.
These are GL 1.0 formats and should not be used anymore by a modern OpenGL program.
You should set it to a specific internal format such as GL_RGBA8 or some other "internal precision" format.
The GL specification has a table of valid values such as GL_RGBA8, GL_ALPHA8. See table 3.17
You can also consult http://www.opengl.org/sdk/docs/man/
It is possible that your GPU doesn't support the format that you have chosen. In this case, the driver will convert the data to a closely matching format that it does support; it is not supposed to reduce the quality.
Calling
GLint value;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPONENTS, &value);
should return the real internal format.
Creating a Texture
What's wrong with this code?
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
The texture won't work because it is incomplete. The default GL_TEXTURE_MIN_FILTER state is GL_NEAREST_MIPMAP_LINEAR, so GL will consider the texture incomplete as long as you don't create the mipmaps.
This is better because it sets up some of the important texture object states:
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
If you want mipmaps: automatic mipmap generation requires OpenGL 1.4. GL_GENERATE_MIPMAP is part of the texture object state and it is a flag (GL_TRUE or GL_FALSE). Whenever texture level 0 is updated, the mipmaps will all be regenerated.
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
When GL_EXT_framebuffer_object is present, instead of using the GL_GENERATE_MIPMAP flag, you can use glGenerateMipmapEXT.
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmapEXT(GL_TEXTURE_2D); //Generate mipmaps now!!!
It has been reported that on some ATI drivers, glGenerateMipmapEXT(GL_TEXTURE_2D) has no effect unless you precede it with a call to glEnable(GL_TEXTURE_2D). Once again, to be clear: call glTexImage2D, then glEnable, then glGenerateMipmapEXT.
However, for RTT (Render To Texture), you don't need glEnable(GL_TEXTURE_2D).
This is considered a bug and has been in the ATI drivers for a while. Perhaps by the time you read this, it has been corrected.
On nVidia, the drivers work correctly: they generate the mipmaps when you call glGenerateMipmapEXT, with no need for glEnable(GL_TEXTURE_2D).
In order to not cause problems for your users, we suggest you continue to use GL_GENERATE_MIPMAP for your GL 2.1 program when making a standard texture and use glGenerateMipmapEXT for your RTTs.
In GL 3.0, GL_GENERATE_MIPMAP is considered deprecated. You must use glGenerateMipmap.
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D); //Generate mipmaps now!!!
If you want to allocate a texture without initializing its texels, the last parameter should be NULL. The "format" and "type" don't matter; what matters is the internal format, which in this example is GL_RGBA8.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
And in the end, cleanup
glDeleteTextures(1, &textureID);
Creating a Texture #2, glTexEnvi
Since a lot of tutorials call glTexEnvi when they create a texture, quite a few people end up thinking that the texture environment state is part of the texture object.
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
States such as GL_TEXTURE_WRAP_S, GL_TEXTURE_WRAP_T, GL_TEXTURE_MAG_FILTER, GL_TEXTURE_MIN_FILTER are part of the texture object.
glTexEnv is part of the texture image unit (TIU).
When you set it, it affects whatever texture is bound to that TIU, and it only takes effect during rendering.
You can select a TIU with glActiveTexture(GL_TEXTURE0+i).
Also keep in mind that glTexEnvi has no effect when a fragment shader is bound.
And in the end, cleanup
glDeleteTextures(1, &textureID);
gluBuild2DMipmaps
GLU has become a tradition in OpenGL programming and so you will see it used often in old code. You will see it in new code as well since newcomers use those old tutorials to learn.
GL 1.4 introduces the GL_GENERATE_MIPMAP flag (TRUE or FALSE) and by default it is FALSE.
For a 2D texture, you could do
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE); //The flag is set to TRUE
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels); //When this is called, the GPU generates all mipmaps
It can also be used for 3D and cubemap textures. It is highly recommended that you use GL_GENERATE_MIPMAP instead of gluBuild2DMipmaps, since gluBuild2DMipmaps runs on the CPU and is quite slow. You will notice the performance problem when loading a lot of textures.
Additional:
In GL 3.0, it is recommended to forget about GL_GENERATE_MIPMAP and use glGenerateMipmap
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D); //Generate mipmaps now!
Creating a Cubemap Texture
It's best to set the wrap mode to GL_CLAMP_TO_EDGE and not the other modes. Don't forget to define all 6 faces, or else the texture is considered incomplete. Don't forget to set up GL_TEXTURE_WRAP_R, because cubemaps require 3D texture coordinates.
Example:
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_CUBE_MAP, textureID);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_GENERATE_MIPMAP, GL_TRUE);
//Define all 6 faces
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X+0, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face0);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X+1, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face1);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X+2, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face2);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X+3, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face3);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X+4, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face4);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X+5, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face5);
The GL 3 method uses glGenerateMipmap instead of the GL_GENERATE_MIPMAP flag, which is considered deprecated.
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_CUBE_MAP, textureID);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
//Define all 6 faces
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X+0, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face0);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X+1, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face1);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X+2, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face2);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X+3, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face3);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X+4, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face4);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X+5, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face5);
glGenerateMipmap(GL_TEXTURE_CUBE_MAP); //Generate mipmaps now!!!
And in the end, cleanup
glDeleteTextures(1, &textureID);
Texture Border Color Problem
When you have a 2D or 3D or Cubemap texture and you want to clamp the texture coordinates, if you use
glTexParameteri(GL_TEXTURE_X, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_X, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri(GL_TEXTURE_X, GL_TEXTURE_WRAP_R, GL_CLAMP); //For 3D textures and cubemaps
then when sampling takes place at the edges of the texture, it will filter using the border color, so you might see black edges.
By default, the border color is black.
Instead of GL_CLAMP, use GL_CLAMP_TO_EDGE.
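A corrected sketch for a 2D texture:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
//For 3D textures and cubemaps, also set GL_TEXTURE_WRAP_R to GL_CLAMP_TO_EDGE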
Updating A Texture
If you don't want to use render to texture (RTT), you will just be refreshing the texels, either from main memory or from the framebuffer.
Case 1:
Refreshing texels from main memory.
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL); //Texels not initialized since we passed NULL
//---------------------
glBindTexture(GL_TEXTURE_2D, textureID); //A texture you have already created with glTexImage2D
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
Notice that glTexSubImage2D was used and not glTexImage2D.
The difference is that glTexSubImage2D just updates the texels, while glTexImage2D deletes the previous texture storage, reallocates it, and then sets up the texels. glTexImage2D is the slower solution.
glTexSubImage2D can be used to update all the texels. Also, make sure that the format you supply is the same as the one stored on the GPU, otherwise GL will convert the data, which leads to performance issues. For example, GL_BGRA is natively supported by most GPUs. Consult the IHV documentation for the supported formats; it's not possible to find out from GL which formats are natively supported.
Case 2:
Refreshing texels from the framebuffer.
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL); //Texels not initialized since we passed NULL
//---------------------
RenderObjects(); //Assuming we are rendering to the backbuffer. Do not call SwapBuffers at this point
glBindTexture(GL_TEXTURE_2D, textureID); //A texture you have already created with glTexImage2D
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height); //Copy back buffer to texture
//---------------------
SwapBuffers(hdc); //Now that we copied the result to the texture, swap buffers. The back buffer now contains an undefined result.
Just like the case where you should use glTexSubImage2D instead of glTexImage2D, use glCopyTexSubImage2D instead of glCopyTexImage2D.
Render To Texture
If you want to render to texture (RTT) via the GL_EXT_framebuffer_object extension, quite a few people make the same mistake explained above under "Creating a Texture": they leave GL_TEXTURE_MIN_FILTER at its default state (which expects mipmaps) yet never define the mipmaps. If you want mipmaps in this case, once the texture is created (glTexImage2D(....., NULL)), call glGenerateMipmapEXT(GL_TEXTURE_2D), glGenerateMipmapEXT(GL_TEXTURE_3D), or glGenerateMipmapEXT(GL_TEXTURE_CUBE_MAP).
If you don't, then you get GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT_EXT or GL_FRAMEBUFFER_UNSUPPORTED_EXT.
For more info and sample code, see
- http://www.opengl.org/wiki/GL_EXT_framebuffer_object
- http://www.opengl.org/wiki/GL_EXT_framebuffer_multisample
For GL 3.0, FBO is a core functionality. You can read about it at http://www.opengl.org/wiki/Framebuffer_Objects
Depth Testing Doesn't Work
You probably did not ask for a depth buffer. If you are using GLUT, GLUT_DEPTH in glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH | GLUT_STENCIL) asks for a depth buffer. Also be sure to enable depth testing with glEnable(GL_DEPTH_TEST) and set a depth function with glDepthFunc(GL_LEQUAL).
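A minimal GLUT setup sketch (assuming the usual argc/argv from main; window creation details kept to the bare minimum):

glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH | GLUT_STENCIL);
glutCreateWindow("my window");
glEnable(GL_DEPTH_TEST); //Depth testing is disabled by default
glDepthFunc(GL_LEQUAL);
//Remember to clear the depth buffer every frame:
//glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);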
No Alpha in the Framebuffer
Be sure you create a double buffered context and make sure you ask for an alpha component. With GLUT, you can call glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_ALPHA | GLUT_DEPTH | GLUT_STENCIL), in which GLUT_ALPHA asks for an alpha component.
glFinish and glFlush
Use glFlush if you are rendering directly to your window. It is better to have a double buffered window but if you have a case where you want to render to the window directly, then go ahead.
A lot of tutorial websites show this:
glFlush(); SwapBuffers();
Never call glFlush before calling SwapBuffers. The SwapBuffers command takes care of flushing and command processing.
What does glFlush do? It tells the driver to send all pending commands to the GPU immediately. This can actually reduce performance.
What does glFinish do? It tells the driver to send all pending commands to the GPU immediately and waits until all the commands have been processed by the GPU. This can take a lot of time.
A modern OpenGL program should NEVER use glFlush or glFinish.
Certain benchmark software might use glFinish.
glDrawPixels
For good performance, use a format that is directly supported by the GPU, so that the driver basically does a memcpy to the GPU. Most graphics cards support GL_BGRA. Example:
glDrawPixels(width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
However, it is recommended that you use a texture instead and just update the texture with glTexSubImage2D.
glEnableClientState(GL_INDEX_ARRAY)
What's wrong with this code?
glBindBuffer(GL_ARRAY_BUFFER, vboid);
glVertexPointer(3, GL_FLOAT, sizeof(vertex_format), 0);
glTexCoordPointer(2, GL_FLOAT, sizeof(vertex_format), 12);
glNormalPointer(GL_FLOAT, sizeof(vertex_format), 20);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_INDEX_ARRAY);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboid);
glDrawRangeElements(....);
The problem is that the programmer misunderstands what GL_INDEX_ARRAY is.
GL_INDEX_ARRAY has nothing to do with the indices for your glDrawRangeElements.
It is for color index arrays. A modern OpenGL program should not use color index arrays. Do not use glIndexPointer. If you need colors, use the color array, and fill it with RGBA data.
glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(vertex_format), X); glEnableClientState(GL_COLOR_ARRAY);
glInterleavedArrays
This function should not be used by modern GL programs. If you want to have interleaved arrays, use the corresponding gl****Pointer calls.
Example:
struct MyVertex
{
  float x, y, z;    //Vertex
  float nx, ny, nz; //Normal
  float s0, t0;     //Texcoord0
  float s1, t1;     //Texcoord1
};
//-----------------
glVertexPointer(3, GL_FLOAT, sizeof(MyVertex), offset);
glNormalPointer(GL_FLOAT, sizeof(MyVertex), offset+sizeof(float)*3);
glTexCoordPointer(2, GL_FLOAT, sizeof(MyVertex), offset+sizeof(float)*6);
glClientActiveTexture(GL_TEXTURE1);
glTexCoordPointer(2, GL_FLOAT, sizeof(MyVertex), offset+sizeof(float)*8);
Unsupported formats #1
glLoadMatrixd, glRotated, and any other functions that take the double type: most GPUs don't support GL_DOUBLE (double), so the driver will convert the data to GL_FLOAT (float) before sending it to the GPU. If you put GL_DOUBLE data in a VBO, the performance might even be much worse than immediate mode (immediate mode means glBegin, glVertex, glEnd). GL doesn't offer any better way to know what the GPU prefers.
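For example, if your math code works in doubles, one option (a simple sketch) is to convert to float before handing the matrix to GL:

double modelview_d[16]; //Computed by your own math code
float  modelview_f[16];
for(int i = 0; i < 16; i++)
    modelview_f[i] = (float)modelview_d[i];
glLoadMatrixf(modelview_f); //Prefer the float entry point over glLoadMatrixd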
Unsupported formats #2
glColorPointer(3, GL_UNSIGNED_BYTE, sizeof(vertex_format), X);
The problem is that most GPUs can't handle 3 bytes per color; they prefer multiples of 4. You should add the alpha component.
The same can be said for glColor3ub and the other "3" component color functions. It's possible that "3" component float is ok for your GPU.
You need to consult the IHV's documents or perhaps do benchmarking on your own because GL doesn't offer any better way to know what the GPU prefers.
Unsupported formats #3
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, pixels);
Although plenty of image formats like bmp, png, and jpg are by default saved as 24 bit, which can save disk space, this is not what the GPU prefers. GPUs prefer multiples of 4 bytes. The driver will convert your data to GL_RGBA8 and will set alpha to 255. GL doesn't offer any better way to know what the GPU prefers.
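If your image loader gives you 24-bit data, one option (a sketch, assuming tightly packed BGR input in a buffer named bgr) is to expand it to 32-bit BGRA on the CPU once at load time:

//Expand tightly packed 24-bit BGR pixels to 32-bit BGRA with alpha = 255.
unsigned char *bgra = (unsigned char*)malloc(width * height * 4);
for(int i = 0; i < width * height; i++)
{
    bgra[i*4+0] = bgr[i*3+0];
    bgra[i*4+1] = bgr[i*3+1];
    bgra[i*4+2] = bgr[i*3+2];
    bgra[i*4+3] = 255;
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, bgra);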
Unsupported formats #4
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
The above is almost OK. The problem is the GL_RGBA. On certain platforms, the GPU prefers that red and blue be swapped (GL_BGRA).
If you supply GL_RGBA, then the driver will do the swapping for you which is slow.
On which platforms? Making a complete list would be too long, but Windows on x86 and x64 is one example.
Swap Buffers
A modern OpenGL program should always use double buffering.
A modern OpenGL program should also have a depth buffer and stencil buffer, probably of D24S8 format in order to have fast clears (glClear).
The render sequence should look like this:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
RenderScene();
SwapBuffers(hdc); //For Windows
In some programs, the programmer does not want to rerender the scene because it is heavy, and simply calls SwapBuffers (Windows) without clearing the buffers. This is risky since it might give unreliable results across different GPU/driver combinations.
There are 2 options:
1. For the PIXELFORMATDESCRIPTOR, you can add PFD_SWAP_COPY to your dwFlags (see the sketch below).
2. Render to a FBO and blit to the back buffer, then SwapBuffers.
See GL_EXT_framebuffer_object and GL_EXT_framebuffer_blit at www.opengl.org/registry
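A minimal Windows sketch of option 1; only the relevant PIXELFORMATDESCRIPTOR fields are shown, and the usual ChoosePixelFormat/SetPixelFormat code is assumed to follow:

PIXELFORMATDESCRIPTOR pfd;
ZeroMemory(&pfd, sizeof(pfd));
pfd.nSize        = sizeof(pfd);
pfd.nVersion     = 1;
pfd.dwFlags      = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER | PFD_SWAP_COPY;
pfd.iPixelType   = PFD_TYPE_RGBA;
pfd.cColorBits   = 32;
pfd.cDepthBits   = 24;
pfd.cStencilBits = 8;
//PFD_SWAP_COPY asks the driver to copy (rather than exchange) the back buffer
//on SwapBuffers, so the back buffer contents are preserved. It is a hint that
//the driver is not required to honor.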
For more info and sample code, see
- http://www.opengl.org/wiki/GL_EXT_framebuffer_object
- http://www.opengl.org/wiki/GL_EXT_framebuffer_multisample
The Pixel Ownership Problem
If your window is covered, partially covered, or positioned outside the desktop area, the GPU might not render to those portions.
This is explained in the OpenGL specification. It is called undefined behavior since on some platforms/GPU/driver combinations it will work just fine and on others it will not.
The solution is to make an offscreen buffer (FBO) and render to the FBO.
See GL_EXT_framebuffer_object at www.opengl.org/registry
For more info and sample code, see
- http://www.opengl.org/wiki/GL_EXT_framebuffer_object
- http://www.opengl.org/wiki/GL_EXT_framebuffer_multisample
glAreTexturesResident and Video Memory
glAreTexturesResident doesn't necessarily return the value that you think it should return. On some implementations it always returns TRUE, while on others it returns TRUE when the texture is loaded into video memory. A modern OpenGL program should not use this function.
If you need to find out how much video memory your video card has, you need to ask the OS. GL doesn't provide a function since GL is intended to be multiplatform and on some systems, there is no such thing as a GPU and video memory.
Even if your OS tells you how much VRAM there is, it's difficult for an application to predict what it should do. It is better to offer the user a "quality" setting in your program and let him control it.
ATI/AMD created GL_ATI_meminfo. This extension is very easy to use. You basically need to call glGetIntegerv with the appropriate token values.
http://www.opengl.org/registry/specs/ATI/meminfo.txt
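A small sketch of the query; the token and the meaning of the four returned values are taken from the extension spec linked above, and you should check for the extension string before using it:

//GL_TEXTURE_FREE_MEMORY_ATI fills four values, in KB:
//[0] total free pool, [1] largest free block, [2] total auxiliary free, [3] largest auxiliary free block.
GLint memInfo[4];
glGetIntegerv(GL_TEXTURE_FREE_MEMORY_ATI, memInfo);
GLint freeTextureMemoryKB = memInfo[0];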
Selection and Picking and Feedback Mode
A modern OpenGL program should not use the selection buffer or feedback mode. These are not 3D graphics rendering features, yet they have been part of GL since version 1.0. Selection and feedback run in software (on the CPU side). On some implementations, when used along with VBOs, performance has been reported to be lousy.
A modern OpenGL program should do color picking (render each object with some unique color and use glReadPixels to find out which object the mouse is on, as sketched below) or do the picking with some 3rd party mathematics library.
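A minimal color-picking sketch; objectCount, DrawObject, mouseX, mouseY and windowHeight are hypothetical names, each pickable object is assumed to have an ID below 2^24, and lighting/texturing are disabled during the picking pass:

//Picking pass: encode the object ID in the color and render to the back buffer.
glDisable(GL_LIGHTING);
glDisable(GL_TEXTURE_2D);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
for(int i = 0; i < objectCount; i++)
{
    glColor3ub((i >> 16) & 0xFF, (i >> 8) & 0xFF, i & 0xFF);
    DrawObject(i); //Hypothetical helper that issues the object's geometry
}

//Read back the pixel under the mouse (GL window coordinates have y going up).
unsigned char pixel[4];
glReadPixels(mouseX, windowHeight - mouseY - 1, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
int pickedID = (pixel[0] << 16) | (pixel[1] << 8) | pixel[2];
//Do not SwapBuffers; render the real frame afterwards.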
GL_POINTS and GL_LINES
This will be about the problems related to GL_POINTS and GL_LINES.
Users notice that points or lines are rendered a little differently on some implementations than on others. This is because the GL spec allows some flexibility. On some implementations, when you call
glPointSize(5.0);
glHint(GL_POINT_SMOOTH_HINT, GL_NICEST);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
glEnable(GL_POINT_SMOOTH);
RenderMyPoints();
the points will look nice and round; on other GPUs/drivers they will look like squares.
Keep in mind that common gaming GPUs don't support point sizes larger than 1 pixel. They emulate larger points with quads.
The same applies to GL_LINES. Common gaming GPUs don't support line widths larger than 1 pixel. They emulate wider lines with quads.
On some implementations, when you call glEnable(GL_POINT_SMOOTH) or glEnable(GL_LINE_SMOOTH) and you use shaders at the same time, your rendering speed goes down to 0.1 FPS. This is because the driver does software rendering. This would happen on AMD/ATI GPUs/drivers.
Keep in mind that the above problems are specific to common gaming GPUs. Workstation GPUs might support real GL_POINTS and real GL_LINES.
Color Index, The imaging subset
Section 3.6.2 of the GL specification talks about the imaging subset. glColorTable and related operations are part of this subset. They are typically not supported by common GPUs and are software emulated. It is recommended that you avoid it. Instead, always use 32 bit textures.
If you find that the memory consumption is too high, use DXT1, DXT3 or DXT5 texture compression. See http://www.opengl.org/registry/specs/S3/s3tc.txt for more details. There is also this page on this WIKI http://www.opengl.org/wiki/Textures_-_compression
The other method is to do the indexing yourself using a texture and a shader.
Or
What's wrong with this code?
glPushAttrib(GL_BLEND | GL_DRAW_BUFFER);
You have to be careful about what you pass to glPushAttrib. The documentation doesn't list GL_BLEND and GL_DRAW_BUFFER as valid parameters, so glGetError() would return an error code. Also, GL_BLEND and GL_DRAW_BUFFER are plain enums, not bitmask values, so they cannot be OR'd together.
What about
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
are those ORable? Yes, they are.
What about
glPushAttrib(GL_COLOR_BUFFER_BIT | GL_CURRENT_BIT);
are those ORable? Yes, they are.
What about
glTexImage2D(GL_TEXTURE_2D, 0, format, width, height, 0, GL_RED | GL_GREEN | GL_BLUE | GL_ALPHA, GL_UNSIGNED_BYTE, pixels);
are those ORable? No. Use GL_RGBA or GL_BGRA instead.
Triple Buffering
This is actually a common question. How can you enable triple buffering with GL? The answer is that you have no control over it. Since triple buffering can be beneficial, some drivers enable it by default. Some drivers offer the possibility to disable it through the control panel of your OS or driver.
Perhaps this one should be moved to the FAQ.
Palette
This should probably go into the FAQ.
There is an extension called GL_EXT_paletted_texture http://www.opengl.org/registry/specs/EXT/paletted_texture.txt
which exposes glColorTableEXT and a few other functions for building a color table; you can then make a texture full of indices that reference this table.
Support for this extension was dropped a long time ago; nVidia and ATI/AMD don't support it.
Usually, the people who need palette support are those rewriting very old games with OpenGL.
One solution is to use shaders like this.
//Fragment shader
uniform sampler2D ColorTable;
uniform sampler2D MyIndexTexture;
varying vec2 TexCoord0;

void main()
{
  //What color do we want to index?
  vec4 myindex = texture2D(MyIndexTexture, TexCoord0);
  //Do a dependency texture read
  vec4 texel = texture2D(ColorTable, myindex.xy);
  gl_FragColor = texel; //Output the color
}
ColorTable might be in a format of your choice such as GL_RGBA8.
ColorTable could be a 256 x 1 texture.
MyIndexTexture can be in a format such as GL_LUMINANCE8.
MyIndexTexture could be of any dimension such as 64 x 32.
We read MyIndexTexture and we use this result as a texcoord to read ColorTable. This is called a dependency texture read operation.
If you want to animate the texture, you submit new values to ColorTable with glTexSubImage2D.
glBindTexture(GL_TEXTURE_2D, myColorTableID);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 256, 1, GL_BGRA, GL_UNSIGNED_BYTE, mypixels);