Quite a few websites show the same mistakes, and the mistakes presented in their tutorials are copied and pasted by those who want to learn OpenGL. This page has been created so that newcomers understand GL programming a little better, instead of working by trial and error.
 
There are also other articles explaining common mistakes:
*[[GLSL : common mistakes|Common Mistakes in GLSL]]
*[[Unexpected Results]] you can get when using OpenGL
*Mistakes related to measuring [[Performance]]
*[[Common Mistakes: Deprecated|Common Mistakes]] when using deprecated functionality.
  
__TOC__

== Extensions and OpenGL Versions ==
One of the possible mistakes related to this is to check for the presence of an [[extension]], but then use the corresponding core functions. The correct behavior is to check for the presence of the extension if you want to use the extension API, and to check the GL version if you want to use the core API. In the case of a [[OpenGL Extensions#Core Extensions|core extension]], you should check for both the version and the presence of the extension; if either is there, you can use the functionality.
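For example, deciding whether to call {{apifunc|glGenBuffers}} (core in GL 1.5) or glGenBuffersARB (from GL_ARB_vertex_buffer_object) might look like the following sketch. HasExtension is a hypothetical helper that searches the extension string list; most extension loading libraries provide an equivalent check.

<source lang="cpp">
//A minimal sketch, assuming the function pointers have already been loaded.
//HasExtension() is a hypothetical helper that searches the extension strings.
int major = 0, minor = 0;
sscanf((const char*)glGetString(GL_VERSION), "%d.%d", &major, &minor);

GLuint buffer = 0;
if(major > 1 || (major == 1 && minor >= 5))
  glGenBuffers(1, &buffer);                            //Core API: guaranteed by the GL version.
else if(HasExtension("GL_ARB_vertex_buffer_object"))
  glGenBuffersARB(1, &buffer);                         //Extension API: guaranteed by the extension.
</source>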

== The Object Oriented Language Problem ==
In an object-oriented language like C++, it is often useful to have a class that wraps an [[OpenGL Object]]. For example, one might have a texture object that has a constructor and a destructor like the following:
<source lang="cpp">
MyTexture::MyTexture(const char *pfilePath)
{
  textureID = 0;                    //Initialize first, so the destructor is safe even on early return.
  if(LoadFile(pfilePath) == ERROR)
    return;
  glGenTextures(1, &textureID);
  //More GL code...
}

MyTexture::~MyTexture()
{
  if(textureID)
    glDeleteTextures(1, &textureID);
}
</source>
  
There is an issue with doing this. OpenGL functions do not work unless an [[OpenGL Context]] has been created and is active within that thread. Thus, {{apifunc|glGenTextures}} will not work correctly before context creation, and {{apifunc|glDeleteTextures}} will not work correctly after context destruction.

This problem usually manifests itself with constructors, when a user creates a texture object or similar OpenGL object wrapper at global scope. There are several potential solutions:
# Do not use constructors/destructors to initialize/destroy OpenGL objects. Instead, use member functions of these classes for these purposes (a minimal sketch of this approach follows this list). This violates RAII principles, so it is not the best course of action.
# Have your OpenGL object constructors throw an exception if a context has not been created yet. This requires an addition to your context creation functionality that tells your code when a context has been created and is active.
# Create a class that owns all other OpenGL-related objects. This class should also be responsible for creating the context in its constructor.
# Allow your program to crash if objects are created/destroyed when a context is not current. This puts the onus on the user to use them correctly, but it also makes their working code seem more natural.
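A minimal sketch of the first option, using the names from the example above, might look like this:

<source lang="cpp">
//GL work is moved out of the constructor/destructor into explicit member
//functions that the application only calls while a context is current.
class MyTexture
{
public:
  MyTexture() : textureID(0) {}          //No GL calls here.
  ~MyTexture() {}                        //No GL calls here either.

  void Create(const char *pfilePath)     //Call after the context exists.
  {
    if(LoadFile(pfilePath) == ERROR)
      return;
    glGenTextures(1, &textureID);
    //More GL code...
  }

  void Destroy()                         //Call before the context is destroyed.
  {
    if(textureID)
    {
      glDeleteTextures(1, &textureID);
      textureID = 0;
    }
  }

private:
  GLuint textureID;
};
</source>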
  
=== RAII and hidden destructor calls ===
The [http://en.cppreference.com/w/cpp/language/raii C++ principle of RAII] says that if an object encapsulates a resource (like an [[OpenGL Object]]), the constructor should create the resource and the destructor should destroy it. This seems good:
<source lang="cpp">
//Do OpenGL context creation.
{
  MyTexture tex;

  RunGameLoop(&tex); //Use the texture in several iterations.
} //Destructor for `tex` happens here.
</source>
  
The problem happens when you want to pass it around, or are creating it within a C++ container like {{code|vector}}. Consider this function:

<source lang="cpp">
MyTexture CreateTexture()
{
  MyTexture tex;

  //Initialize `tex` with data.

  return tex;
}
</source>
  
What happens here? By the rules of C++, {{code|tex}} ''will be destroyed'' at the conclusion of this function call. What is returned is not {{code|tex}} itself, but a ''copy'' of this object. But {{code|tex}} managed a resource: an OpenGL object. And that resource will be destroyed by the destructor.

The copy that gets returned will therefore have an OpenGL object name that ''has been destroyed''.

This happens because we violated [http://en.cppreference.com/w/cpp/language/rule_of_three C++'s rule of 3/5]: if you write for a class one of a destructor, copy/move constructor, or copy/move assignment operator, then you must write ''all of them''.

The compiler-generated copy constructor is wrong; it copies the OpenGL object name, not the OpenGL object itself. This leaves two C++ objects which each intend to destroy the same OpenGL object.

Ideally, copying a RAII wrapper should cause a copy of the OpenGL object's ''data'' into a new OpenGL object. This would leave each C++ object with its own unique OpenGL object. However, copying an OpenGL object's data to a new object is ''incredibly'' expensive; it is also essentially impossible to do, thanks to the ability of extensions to add state that you might not statically know about.

So instead, we should ''forbid'' copying of OpenGL wrapper objects. Such types should be move-only types; on move, we steal the resource from the moved-from object.

<source lang="cpp">
#include <utility> //for std::swap

class MyTexture
{
private:
  GLuint obj_ = 0; //Cannot leave this uninitialized.

  void Release()
  {
    glDeleteTextures(1, &obj_);
    obj_ = 0;
  }

public:
  //Other constructors as normal.

  //Free up the texture.
  ~MyTexture() {Release();}

  //Delete the copy constructor/assignment.
  MyTexture(const MyTexture &) = delete;
  MyTexture &operator=(const MyTexture &) = delete;

  MyTexture(MyTexture &&other) : obj_(other.obj_)
  {
    other.obj_ = 0; //Use the "null" texture for the old object.
  }

  MyTexture &operator=(MyTexture &&other)
  {
    //ALWAYS check for self-assignment.
    if(this != &other)
    {
      Release();
      //obj_ is now 0.
      std::swap(obj_, other.obj_);
    }
    return *this;
  }
};
</source>

Now, the above code can work. {{code|return tex;}} will provoke a move from {{code|tex}}, which will leave {{code|tex.obj_}} as zero right before it is destroyed. And it's OK to call {{apifunc|glDeleteTextures}} with a 0 texture.

=== OOP and hidden binding ===
There's another issue when using OpenGL with languages like C++. Consider the following function:

<source lang="cpp">
void MyTexture::TexParameter(GLenum pname, GLint param)
{
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexParameteri(GL_TEXTURE_2D, pname, param);
}
</source>

The problem is that the binding of the texture is hidden from the user of the class. There may be performance implications for doing repeated binding of objects (especially since the API may not seem heavyweight to the outside user). But the major concern is correctness; the bound objects are ''global state'', which a local member function has now changed.

This can cause many sources of hidden breakage. The safe way to implement this is as follows:

<source lang="cpp">
void MyTexture::TexParameter(GLenum pname, GLint param)
{
    GLuint boundTexture = 0;
    glGetIntegerv(GL_TEXTURE_BINDING_2D, (GLint*) &boundTexture);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexParameteri(GL_TEXTURE_2D, pname, param);
    glBindTexture(GL_TEXTURE_2D, boundTexture);
}
</source>

Note that this solution emphasizes correctness over ''performance''; the {{apifunc|glGetIntegerv}} call may not be particularly fast.

A more effective solution is to use [[Direct State Access]], which requires {{require|4.5|direct_state_access}}, or the older {{extref|direct_state_access|EXT}} extension:

<source lang="cpp">
void MyTexture::TexParameter(GLenum pname, GLint param)
{
    glTextureParameteri(textureID, pname, param); //Core DSA (GL 4.5) takes no target; the EXT version, glTextureParameteriEXT, would also take GL_TEXTURE_2D.
}
</source>

== Texture upload and pixel reads ==
You create [[Texture Storage|storage for]] a [[Texture]] and [[Pixel Transfer|upload pixels to it]] with {{apifunc|glTexImage2D}} (or [[Mutable Texture Storage|similar functions, as appropriate to the type of texture]]). If your program crashes during the upload, or diagonal lines appear in the resulting image, this is because the [[Pixel Transfer Layout|alignment of each horizontal line of your pixel array is not a multiple of 4]]. This typically happens to users loading an image that is of the RGB or BGR format (for example, 24 BPP images), depending on the source of your image data.

For example, suppose your image width is 401 and height is 500. The height is irrelevant; what matters is the width. If we do the math, 401 pixels x 3 bytes = 1203 bytes, which is not divisible by 4. Some image file formats may inherently align each row to 4 bytes, but some do not. For those that don't, each row will start exactly 1203 bytes from the start of the last. OpenGL's row alignment can be changed to fit the row alignment of your image data. This is done by calling {{apifunc|glPixelStore|i(GL_UNPACK_ALIGNMENT, #)}}, where # is the alignment you want. The default alignment is 4.

And if you are interested, most GPUs like chunks of 4 bytes. In other words, {{enum|GL_RGBA}} or {{enum|GL_BGRA}} is preferred when each component is a byte. {{enum|GL_RGB}} and {{enum|GL_BGR}} are considered bizarre since most GPUs, most CPUs and any other kind of chip don't handle 24 bits. This means the driver converts your {{enum|GL_RGB}} or {{enum|GL_BGR}} to what the GPU prefers, which typically is RGBA/BGRA.

Similarly, if you read a buffer with {{apifunc|glReadPixels}}, you might get similar problems. There is a {{enum|GL_PACK_ALIGNMENT}} just like the {{enum|GL_UNPACK_ALIGNMENT}}. The default alignment is again 4, which means each horizontal line must be a multiple of 4 in size. If you read the buffer with a format such as {{enum|GL_BGRA}} or {{enum|GL_RGBA}}, you won't have any problems since the line will always be a multiple of 4. If you read it in a format such as {{enum|GL_BGR}} or {{enum|GL_RGB}}, then you risk running into this problem.

The {{enum|GL_PACK/UNPACK_ALIGNMENT}}s can only be 1, 2, 4, or 8. So an alignment of 3 is not allowed. If your intention really is to work with packed RGB/BGR data, you should set the alignment to 1 (or, preferably, consider switching to RGBA/BGRA).
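For instance, uploading the tightly packed 401x500 RGB image from the example above might look like this sketch (`pixels` is assumed to point at the 3-bytes-per-pixel data):

<source lang="cpp">
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); //Rows start on 1-byte boundaries: 401*3 = 1203 bytes per row.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, 401, 500, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4); //Restore the default so later uploads are unaffected.
</source>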

== Image precision ==
You ''can'' (but it is not advisable to do so) call {{apifunc|glTexImage2D|(GL_TEXTURE_2D, 0, X, width, height, 0, format, type, pixels)}} with X set to 1, 2, 3, or 4. The X refers to the number of components ({{enum|GL_RED}} would be 1, {{enum|GL_RG}} would be 2, {{enum|GL_RGB}} would be 3, {{enum|GL_RGBA}} would be 4).

It is preferred to actually give a real [[Image Formats|image format]], one with a specific internal precision. If the OpenGL implementation does not support the particular format and precision you choose, the driver will internally convert it into something it does support.

OpenGL versions 3.x and above have a set of [[Image Formats#Required formats|required image formats]] that all conforming implementations must implement.

{{note|The creation of [[Immutable Storage Texture]]s actively forbids the use of unsized image formats, as well as the bare component counts shown above.}}

We should also state that it is common to see the following on tutorial websites:

<source lang="cpp">
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
</source>

Although GL will accept {{enum|GL_RGB}}, it is up to the driver to decide an appropriate precision. We recommend that you be specific and write {{enum|GL_RGB8}}:

<source lang="cpp">
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
</source>

This means you want the driver to actually store it in the R8G8B8 format. We should also state that most GPUs will internally convert {{enum|GL_RGB8}} into {{enum|GL_RGBA8}}, so it's probably best to steer clear of {{enum|GL_RGB8}}. We should also state that on some platforms, such as Windows, {{enum|GL_BGRA}} for the [[Pixel Transfer Format|pixel upload format]] is preferred.

<source lang="cpp">
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
</source>

This uses {{enum|GL_RGBA8}} for the internal format; {{enum|GL_BGRA}} and {{enum|GL_UNSIGNED_BYTE}} (or {{enum|GL_UNSIGNED_INT_8_8_8_8}}) describe the data in your pixels array. The driver will likely not have to perform any CPU-based conversion and can DMA this data directly to the video card. Benchmarking shows that on Windows, with both NVIDIA and ATI/AMD, this is the optimal format.

Preferred pixel transfer formats and types can be [[Query Image Format|queried from the implementation]].
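For example, with GL 4.3 or {{extref|internalformat_query2}}, a sketch of such a query might look like this:

<source lang="cpp">
//Ask the implementation which pixel transfer format/type it prefers for
//uploads to a GL_RGBA8 2D texture (assumes GL 4.3 or ARB_internalformat_query2).
GLint preferredFormat = GL_NONE, preferredType = GL_NONE;
glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA8, GL_TEXTURE_IMAGE_FORMAT, 1, &preferredFormat);
glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA8, GL_TEXTURE_IMAGE_TYPE, 1, &preferredType);
</source>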

== Depth Buffer Precision ==
When you select a pixelformat for your window and you ask for a [[Depth Buffer]], the depth buffer is typically stored as a [[Normalized Integer]] with a bitdepth of 16, 24, or 32 bits.

{{note|You can create images with [[Depth Texture|true floating-point depth formats]]. But these can only be used with [[Framebuffer Object]]s, not the [[Default Framebuffer]].}}

In OpenGL, all depth values lie in the range [0, 1]. The integer normalization process simply converts this floating-point range into integer values of the appropriate precision. It is the integer value that is stored in the depth buffer.

Typically, 24-bit depth buffers will pad each depth value out to 32 bits, so 8 bits per pixel will go unused. However, if you ask for an 8-bit [[Stencil Buffer]] along with the depth buffer, the two separate images will generally be combined into a single [[Depth Stencil Format|depth/stencil image]]. 24 bits will be used for depth, and the remaining 8 bits for stencil.

Now that the misconception about depth buffers being floating point is resolved, what is wrong with this call?

<source lang="cpp">
glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, mypixels);
</source>

Because the depth format is a normalized integer format, the driver will have to use the CPU to convert the normalized integer data into floating-point values. This is slow.

The preferred way to handle this is with this code:
  
<source lang="cpp">
if(depth_buffer_precision == 16)
{
  GLushort mypixels[width*height];
  glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, mypixels);
}
else if(depth_buffer_precision == 24)
{
  GLuint mypixels[width*height];    //There is no 24-bit variable, so we'll have to settle for 32 bit.
  glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT_24_8, mypixels);  //No upconversion.
}
else if(depth_buffer_precision == 32)
{
  GLuint mypixels[width*height];
  glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, mypixels);
}
</source>

If you have a depth/stencil format, you can get the depth/stencil data this way:

<source lang="cpp">
GLuint mypixels[width*height];
glReadPixels(0, 0, width, height, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, mypixels);
</source>

== Creating a complete texture ==
What's wrong with this code?

<source lang="cpp">
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
</source>

The texture won't work because it is incomplete. The default {{enum|GL_TEXTURE_MIN_FILTER}} state is {{enum|GL_NEAREST_MIPMAP_LINEAR}}. And because OpenGL defines the default {{enum|GL_TEXTURE_MAX_LEVEL}} to be 1000, OpenGL will expect there to be mipmap levels defined. Since you have only defined a single mipmap level, OpenGL will consider the texture incomplete until the {{enum|GL_TEXTURE_MAX_LEVEL}} is properly set, or the {{enum|GL_TEXTURE_MIN_FILTER}} parameter is set to not use mipmaps.

Better code would be to use [[Immutable Storage Texture|texture storage functions]] (if you have OpenGL 4.2 or {{extref|texture_storage}}) to allocate the texture's storage, then upload with {{apifunc|glTexSubImage2D}}:

<source lang="cpp">
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, width, height);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
</source>

This creates a texture with a single mipmap level, and sets all of the parameters appropriately. If you wanted to have multiple mipmaps, then you should change the {{code|1}} to the number of mipmaps you want. You will also need separate {{apifunc|glTexSubImage2D}} calls to upload each mipmap.

If that is unavailable, you can get a similar effect from this code:

<source lang="cpp">
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
</source>

Again, if you use more than one mipmap, you should change the {{enum|GL_TEXTURE_MAX_LEVEL}} to state how many you will use (minus 1, because the base/max levels form a closed range), then perform a {{apifunc|glTexImage2D}} (note the lack of "Sub") for each mipmap.
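For instance, with assumed names ({{code|numLevels}} and {{code|mipPixels[]}} coming from your image loader), that manual upload might look like this sketch:

<source lang="cpp">
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, numLevels - 1); //Closed range: the last defined level.
for(int level = 0; level < numLevels; ++level)
{
  int w = (width  >> level) > 1 ? (width  >> level) : 1; //Each level is half the previous, minimum 1.
  int h = (height >> level) > 1 ? (height >> level) : 1;
  glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA8, w, h, 0, GL_BGRA, GL_UNSIGNED_BYTE, mipPixels[level]);
}
</source>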

== Automatic mipmap generation ==
Mipmaps of a texture can be automatically generated with the {{apifunc|glGenerateMipmap}} function. OpenGL 3.0 or greater is required for this function (or the extension GL_ARB_framebuffer_object). The function works quite simply; when you call it for a texture, mipmaps are generated for that texture:

<source lang="cpp">
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexStorage2D(GL_TEXTURE_2D, num_mipmaps, GL_RGBA8, width, height);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D);  //Generate num_mipmaps number of mipmaps here.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
</source>

If texture storage is not available, you can use the older API:

<source lang="cpp">
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D);  //Generate mipmaps now!!!
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
</source>

{{warning|It has been reported that on some ATI drivers, glGenerateMipmap(GL_TEXTURE_2D) has no effect unless you precede it with a call to glEnable(GL_TEXTURE_2D) in this particular case. Once again, to be clear: bind the texture, glEnable, then glGenerateMipmap. This is a bug and has been in the ATI drivers for a while. Perhaps by the time you read this, it will have been corrected. (glGenerateMipmap doesn't work on ATI as of 2011.)}}

=== Legacy Generation ===
{{deprecated|section=}}

OpenGL 1.4 is required for support for automatic mipmap generation. {{enum|GL_GENERATE_MIPMAP}} is part of the texture object state and it is a flag ({{enum|GL_TRUE}} or {{enum|GL_FALSE}}). If it is set to {{enum|GL_TRUE}}, then whenever texture level 0 is updated, the mipmaps will all be regenerated.
  
<source lang="cpp">
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);  //Set the flag before uploading level 0.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);  //Uploading level 0 regenerates the mipmap chain.
</source>

In GL 3.0, {{enum|GL_GENERATE_MIPMAP}} is deprecated, and in 3.1 and above, it was removed. So for those versions, you must use {{apifunc|glGenerateMipmap}}.

=== gluBuild2DMipmaps ===
Never use this. Use either {{enum|GL_GENERATE_MIPMAP}} (requires GL 1.4) or the {{apifunc|glGenerateMipmap}} function (requires GL 3.0).

== Checking for OpenGL Errors ==
Why should you check for OpenGL errors?
  
<source lang="cpp">
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);  //Requires GL 1.4. Removed from GL 3.1 and above.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
</source>
  
This code doesn't [[OpenGL Error|check for OpenGL errors]].  If it did, the developer would find that this code throws a {{enum|GL_INVALID_ENUM}}.  The error is raised at {{apifunc|glTexParameter|i(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR)}}.  The magnification filter can't specify the use of mipmaps; only the minification filter can do that.

There are two alternative methods for [[OpenGL Error|detecting and localizing OpenGL Errors]]:
# Using [[OpenGL_Error#Catching_errors_.28the_easy_way.29|debug output callbacks]], or
# Calling [[OpenGL_Error#Catching_errors_.28the_hard_way.29|{{enum|glGetError}} after every OpenGL function call]] (or group of function calls).

The former is much simpler. For details on both, see [[OpenGL Error]].
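A minimal sketch of the second approach (CheckGLErrors is a hypothetical helper; call it after a group of GL calls):

<source lang="cpp">
#include <cstdio>

void CheckGLErrors(const char *where)
{
  //Drain and report every pending error; glGetError only returns one at a time.
  for(GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
    fprintf(stderr, "OpenGL error 0x%04X at %s\n", err, where);
}
</source>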
  
== Checking For Errors When You Compile Your Shader ==
Always check for [[Shader Compile Error|errors when compiling/linking shader or program objects]].
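A minimal sketch of a compile-status check with {{apifunc|glGetShaderiv}} and {{apifunc|glGetShaderInfoLog}} ({{code|source}} is assumed to be a {{code|const GLchar *}} pointing at your GLSL text; linking is checked the same way with glGetProgramiv/glGetProgramInfoLog):

<source lang="cpp">
GLuint shader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(shader, 1, &source, NULL);
glCompileShader(shader);

GLint status = GL_FALSE;
glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
if(status != GL_TRUE)
{
  char log[4096];
  glGetShaderInfoLog(shader, sizeof(log), NULL, log);
  fprintf(stderr, "Shader compile failed:\n%s\n", log);
}
</source>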
  
== Creating a Cubemap Texture ==
It's best to set the wrap mode to {{enum|GL_CLAMP_TO_EDGE}} and not the other formats. Don't forget to define all 6 faces, or else the texture is considered incomplete. Don't forget to set up {{enum|GL_TEXTURE_WRAP_R}} because cubemaps require 3D texture coordinates.

Example:
<source lang="cpp">
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_CUBE_MAP, textureID);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAX_LEVEL, 0);
//Define all 6 faces
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face0);
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_X, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face1);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Y, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face2);
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face3);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Z, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face4);
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face5);
</source>

When using {{apifunc|glTexStorage2D}} instead of {{apifunc|glTexImage2D}}, you should call {{apifunc|glTexStorage2D}} once with the target {{enum|GL_TEXTURE_CUBE_MAP}}, then make calls to {{apifunc|glTexSubImage2D}} to upload data for each face.
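A minimal sketch of that approach ({{code|pixels_face}} is assumed to be an array of six pointers, one per face):

<source lang="cpp">
glTexStorage2D(GL_TEXTURE_CUBE_MAP, 1, GL_RGBA8, width, height); //Allocates all 6 faces at once.
for(int face = 0; face < 6; ++face)
{
  glTexSubImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, 0, 0, width, height,
                  GL_BGRA, GL_UNSIGNED_BYTE, pixels_face[face]);
}
</source>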

If you want to auto-generate mipmaps, you can use any of the aforementioned mechanisms, using the target {{enum|GL_TEXTURE_CUBE_MAP}}. OpenGL will not blend across multiple faces when generating mipmaps for the cubemap, leaving visible seams at lower mip levels, unless you enable [[Seamless Cubemap|seamless cubemap texturing]].
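With GL 3.2 (or ARB_seamless_cube_map), enabling that is a single global switch:

<source lang="cpp">
glEnable(GL_TEXTURE_CUBE_MAP_SEAMLESS); //Filter across cube map face edges.
</source>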

== Texture edge color problem ==
{{deprecated|section=}}

Never use {{enum|GL_CLAMP}}; what you intended was {{enum|GL_CLAMP_TO_EDGE}}. Indeed, {{enum|GL_CLAMP}} was removed from core GL 3.1+, so it's not even an option anymore.

{{note|If you are curious as to what {{enum|GL_CLAMP}} used to mean, it referred to blending texture edge texels with border texels. This is different from {{enum|GL_CLAMP_TO_BORDER}}, where the clamping happens to a solid border color. The {{enum|GL_CLAMP}} behavior was tied to special border texels. Effectively, each texture had a 1-pixel border. This was useful for having more easily seamless texturing, but it was never implemented in hardware directly. So it was removed.}}

== Updating a texture ==
To change texels in an already existing 2D texture, use {{apifunc|glTexSubImage2D}}:

<source lang="cpp">
glBindTexture(GL_TEXTURE_2D, textureID);   //A texture you have already created storage for
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
</source>

{{apifunc|glTexImage2D}} creates the storage for the texture, defining the size/format and removing all previous pixel data. {{apifunc|glTexSubImage2D}} only modifies pixel data within the texture. It can be used to update all the texels, or simply a portion of them.

To copy texels from the framebuffer, use {{apifunc|glCopyTexSubImage2D}}:

<source lang="cpp">
glBindTexture(GL_TEXTURE_2D, textureID);    //A texture you have already created storage for
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);  //Copy the current read buffer to the texture
</source>

Note that there is a {{apifunc|glCopyTexImage2D}} function, which does the copy to fill the image, but also defines the image size, format and so forth, just like {{apifunc|glTexImage2D}}.

== Render To Texture ==
To render directly to a texture, without doing a copy as above, use [[Framebuffer Objects]].

{{warning|NVIDIA's OpenGL driver has a known issue with using incomplete textures. If the texture is not texture complete, the FBO itself will be considered {{enum|GL_FRAMEBUFFER_UNSUPPORTED}}, or will have {{enum|GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT}}. This is a driver bug, as the OpenGL specification does not allow implementations to return either of these values simply because a texture is not yet complete. Until this is resolved in NVIDIA's drivers, it is advised to make sure that all textures have mipmap levels, and that all {{apifunc|glTexParameter|i}} values are properly set up for the format of the texture. For example, integral textures are not complete if the mag and min filters have any LINEAR fields.}}

== Depth Testing Doesn't Work ==
First, check to see if the [[Depth Test]] is active. Make sure that {{apifunc|glEnable|(GL_DEPTH_TEST)}} has been called and an appropriate {{apifunc|glDepthFunc}} is active. Also make sure that the {{apifunc|glDepthRange}} matches the depth function.

Assuming all of that has been set up correctly, your framebuffer may not have a depth buffer at all. This is easy to see for a [[Framebuffer Object]] you created. For the [[Default Framebuffer]], this depends entirely on how you created your [[OpenGL Context]].

For example, if you are using GLUT, you need to make sure you pass {{enum|GLUT_DEPTH}} to the {{code|glutInitDisplayMode}} function.
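A minimal GLUT-based sketch of the pieces that are commonly forgotten:

<source lang="cpp">
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH | GLUT_STENCIL); //Ask for a depth (and stencil) buffer.
glutCreateWindow("depth test");

glEnable(GL_DEPTH_TEST); //Then actually enable the depth test...
glDepthFunc(GL_LEQUAL);  //...with an appropriate comparison function.
</source>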

== No Alpha in the Framebuffer ==
If you are doing [[Blending]] and you need a destination alpha, you need to make sure that your render target has one. This is easy to ensure when rendering to a [[Framebuffer Object]]. But with a [[Default Framebuffer]], it depends on how you created your [[OpenGL Context]].

For example, if you are using GLUT, you need to make sure you pass {{enum|GLUT_ALPHA}} to the {{code|glutInitDisplayMode}} function.

== glFinish and glFlush ==
Use {{apifunc|glFlush}} if you are rendering to the front buffer of the [[Default Framebuffer]]. It is better to have a double-buffered window, but if you have a case where you want to render to the window directly, then go ahead.

A lot of tutorial websites suggest that you do this:

<source lang="cpp">
glFlush();
SwapBuffers();
</source>

This is unnecessary. The SwapBuffers command takes care of flushing and command processing.

The {{apifunc|glFlush}} and {{apifunc|glFinish}} functions deal with [[Synchronization|synchronizing CPU actions with GPU commands]].

In many cases, explicit synchronization like this is unnecessary. The use of [[Sync Object]]s can make it necessary, as can the use of [[Image Load Store|arbitrary reads/writes from/to images]].

As such, you should only use {{apifunc|glFinish}} when you are doing something that the specification specifically states will not be synchronous.
  
== glDrawPixels ==
{{deprecated|section=}}

For good performance, use a format that is directly supported by the GPU; one that causes the driver to basically do a memcpy to the GPU. Most graphics cards support {{enum|GL_BGRA}}. Example:

<source lang="cpp">
glDrawPixels(width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
</source>

However, it is recommended that you use a texture instead and just update the texture with {{apifunc|glTexSubImage2D}}, possibly with [[Pixel Buffer Object|a buffer object for async transfer]].
  
== GL_DOUBLE ==
{{deprecated|section=}}

glLoadMatrixd, glRotated and any other functions that deal with the double type: most GPUs don't support {{enum|GL_DOUBLE}}, so the driver will convert the data to {{enum|GL_FLOAT}} and send it to the GPU. If you put {{enum|GL_DOUBLE}} data in a VBO, the performance might even be much worse than immediate mode (immediate mode meaning glBegin, glVertex, glEnd). GL doesn't offer any better way to know what the GPU prefers.
  
=== Unsupported formats ===
  glColorPointer(3, GL_UNSIGNED_BYTE, sizeof(vertex_format), X);

The problem is that most GPUs can't handle 3-byte attributes; they prefer multiples of 4 bytes. You should add an alpha component.

The same can be said for glColor3ub and the other 3-component color functions. It's possible that a 3-component float attribute is fine for your GPU. You need to consult the IHV's documentation, or do benchmarking on your own, because GL doesn't offer any better way to know what the GPU prefers.
 
  
 
  
 
  
=== glAreTexturesResident and Video Memory ===
glAreTexturesResident doesn't necessarily return the value that you think it should return. On some implementations it always returns TRUE, while on others it returns TRUE only when the texture is actually resident in video memory. A modern OpenGL program should not use this function.

If you need to find out how much video memory the card has, you have to ask the OS. GL doesn't provide a function for this because GL is intended to be multiplatform, and on some systems there is no such thing as a dedicated GPU with video memory. Even if the OS tells you how much VRAM there is, it is difficult for an application to predict what it should do with that number. It is better to offer the user a "quality" setting in your program.

ATI/AMD created GL_ATI_meminfo. This extension is very easy to use: you basically call glGetIntegerv with the appropriate token values. See http://www.opengl.org/registry/specs/ATI/meminfo.txt
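A minimal sketch of such a query, assuming the extension is advertised and your headers define the GL_TEXTURE_FREE_MEMORY_ATI token (the four returned values are in kilobytes, per the extension spec):
  GLint freeMem[4] = {0, 0, 0, 0};
  glGetIntegerv(GL_TEXTURE_FREE_MEMORY_ATI, freeMem);
  //freeMem[0] = total free memory in the texture pool, in KB;
  //the remaining entries describe the largest free block and auxiliary memory.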
  
== glGenTextures in render function ==
It seems as if some people create a texture in their render function. Don't create resources in your render function. That goes for all the other {{code|glGen}} function calls as well. Don't read model files and create VBOs with them in your render function. Try to allocate resources at the beginning of your program. Release those resources when your program terminates.

Worse yet, some create textures (or any other GL object) in their render function and never call {{apifunc|glDeleteTextures}}. Every time their render function gets called, a new texture is created without releasing the old one!
== Bad znear value ==
{{deprecated|section=}}
Some users use {{code|gluPerspective}} or {{code|glFrustum}} and pass it a znear value of 0.0. They quickly find that z-buffering doesn't work.

You can't have a znear value of 0.0 or less. If you were to use 0.0, the 3rd row, 4th column of the projection matrix would end up being 0.0. If you use a negative value, you would get wrong rendering results on screen.

Both znear and zfar need to be above 0.0. {{code|gluPerspective}} will not raise a GL error; {{code|glFrustum}} will generate a {{enum|GL_INVALID_VALUE}}.

As for {{code|glOrtho}}, you can use negative values for znear and zfar, since an orthographic projection involves no division by depth.

The [[Vertex Transformation|vertex transformation pipeline]] explains how vertices are transformed.
+
== Bad Array Size ==
We are going to give this example with GL 1.1, but the same principle applies if you are using [[Vertex Buffer Object|VBOs]] or any other feature from a later version of OpenGL.

What's wrong with this code?
<source lang="cpp">
GLfloat vertex[] = {0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0};
GLfloat normal[] = {0.0, 0.0, 1.0};
GLfloat color[] = {1.0, 0.7, 1.0, 1.0};
GLushort index[] = {0, 1, 2, 3};
glVertexPointer(3, GL_FLOAT, sizeof(GLfloat)*3, vertex);
glNormalPointer(GL_FLOAT, sizeof(GLfloat)*3, normal);
glColorPointer(4, GL_FLOAT, sizeof(GLfloat)*4, color);
glDrawElements(GL_QUADS, 4, GL_UNSIGNED_SHORT, index);
</source>

The intent is to render a single quad, but the array sizes don't match up. You have only one normal for the quad, while GL wants one normal per vertex. You have one RGBA color for the quad, while GL wants one color per vertex. You risk crashing your system because the GL driver will read beyond the end of the supplied normal and color arrays.

This issue is also explained in the [[FAQ#Multi_indexed_rendering|FAQ]].
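One possible fix (a minimal sketch) is simply to supply one normal and one color per vertex:
<source lang="cpp">
GLfloat vertex[] = {0.0, 0.0, 0.0,  1.0, 0.0, 0.0,  1.0, 1.0, 0.0,  0.0, 1.0, 0.0};
GLfloat normal[] = {0.0, 0.0, 1.0,  0.0, 0.0, 1.0,  0.0, 0.0, 1.0,  0.0, 0.0, 1.0};  //one normal per vertex
GLfloat color[]  = {1.0, 0.7, 1.0, 1.0,  1.0, 0.7, 1.0, 1.0,  1.0, 0.7, 1.0, 1.0,  1.0, 0.7, 1.0, 1.0};  //one color per vertex
GLushort index[] = {0, 1, 2, 3};
glVertexPointer(3, GL_FLOAT, sizeof(GLfloat)*3, vertex);
glNormalPointer(GL_FLOAT, sizeof(GLfloat)*3, normal);
glColorPointer(4, GL_FLOAT, sizeof(GLfloat)*4, color);
glDrawElements(GL_QUADS, 4, GL_UNSIGNED_SHORT, index);
</source>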
 


The Object Oriented Language Problem

In an object-oriented language like C++, it is often useful to have a class that wraps an OpenGL Object. For example, one might have a texture object that has a constructor and a destructor like the following:

MyTexture::MyTexture(const char *pfilePath)
{
  textureID=0;
  if(LoadFile(pfilePath)==ERROR)
	 return;   //textureID stays 0, so the destructor won't delete a garbage name
  glGenTextures(1, &textureID);
  //More GL code...
}

MyTexture::~MyTexture()
{
  if(textureID)
	 glDeleteTextures(1, &textureID);
}

There is an issue with doing this. OpenGL functions do not work unless an OpenGL Context has been created and is active within that thread. Thus, glGenTextures will not work correctly before context creation, and glDeleteTextures will not work correctly after context destruction.

This problem usually manifests itself with constructors, when a user creates a texture object or similar OpenGL object wrapper at global scope. There are several potential solutions:

  1. Do not use constructors/destructors to initialize/destroy OpenGL objects. Instead, use member functions of these classes for these purposes. This violates RAII principles, so this is not the best course of action.
  2. Have your OpenGL object constructors throw an exception if a context has not been created yet. This requires an addition to your context creation functionality that tells your code when a context has been created and is active.
  3. Create a class that owns all other OpenGL related objects. This class should also be responsible for creating the context in its constructor.
  4. Allow your program to crash if objects are created/destroyed when a context is not current. This puts the onus on the user to correctly use them, but it also makes their working code seem more natural.

RAII and hidden destructor calls

The C++ principle of RAII says that if an object encapsulates a resource (like an OpenGL Object), the constructor should create the resource and the destructor should destroy it. This seems good:

//Do OpenGL context creation.
{
  MyTexture tex;

  RunGameLoop(&tex); //Use the texture in several iterations.
} //Destructor for `tex` happens here.

The problem happens when you want to pass it around, or are creating it within a C++ container like vector. Consider this function:

MyTexture CreateTexture()
{
  MyTexture tex;

  //Initialize `tex` with data.

  return tex;
}

What happens here? By the rules of C++, tex will be destroyed at the conclusion of this function call. What is returned is not tex itself, but a copy of this object. But tex managed a resource: an OpenGL object. And that resource will be destroyed by the destructor.

The copy that gets returned will therefore have an OpenGL object name that has been destroyed.

This happens because we violated C++'s rule of 3/5: if a class defines any one of a destructor, copy/move constructor, or copy/move assignment operator, it should define all of them.

The compiler-generated copy constructor is wrong; it copies the OpenGL object name, not the OpenGL object itself. This leaves two C++ objects which each intend to destroy the same OpenGL object.

Ideally, copying a RAII wrapper should cause a copy of the OpenGL object's data into a new OpenGL object. This would leave each C++ object with its own unique OpenGL object. However, copying an OpenGL object's data to a new object is incredibly expensive; it is also essentially impossible to do, thanks to the ability of extensions to add state that you might not statically know about.

So instead, we should forbid copying of OpenGL wrapper objects. Such types should be move-only types; on move, we steal the resource from the moved-from object.

class MyTexture
{
private:
  GLuint obj_ = 0; //Cannot leave this uninitialized.

  void Release()
  {
    glDeleteTextures(1, &obj_);
    obj_ = 0;
  }

public:
  //Other constructors as normal.

  //Free up the texture.
  ~MyTexture() {Release();}

  //Delete the copy constructor/assignment.
  MyTexture(const MyTexture &) = delete;
  MyTexture &operator=(const MyTexture &) = delete;

  MyTexture(MyTexture &&other) : obj_(other.obj_)
  {
    other.obj_ = 0; //Use the "null" texture for the old object.
  }

  MyTexture &operator=(MyTexture &&other)
  {
    //ALWAYS check for self-assignment.
    if(this != &other)
    {
      Release();
      //obj_ is now 0.
      std::swap(obj_, other.obj_);
    }
    return *this;
  }
};

Now, the above code can work. return tex; will provoke a move from tex, which will leave tex.obj_ as zero right before it is destroyed. And it's OK to call glDeleteTextures with a 0 texture.

OOP and hidden binding

There's another issue when using OpenGL with languages like c++. Consider the following function:

void MyTexture::TexParameter(GLenum pname, GLint param)
{
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexParameteri(GL_TEXTURE_2D, pname, param);
}

The problem is that the binding of the texture is hidden from the user of the class. There may be performance implications for doing repeated binding of objects (especially since the API may not seem heavyweight to the outside user). But the major concern is correctness; the bound objects are global state, which a local member function now has changed.

This can cause many sources of hidden breakage. The safe way to implement this is as follows:

void MyTexture::TexParameter(GLenum pname, GLint param)
{
    GLuint boundTexture = 0;
    glGetIntegerv(GL_TEXTURE_BINDING_2D, (GLint*) &boundTexture);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexParameteri(GL_TEXTURE_2D, pname, param);
    glBindTexture(GL_TEXTURE_2D, boundTexture);
}

Note that this solution emphasizes correctness over performance; the glGetIntegerv call may not be particularly fast.

A more effective solution is to use Direct State Access, which requires OpenGL 4.5 or ARB_direct_state_access, or the older EXT_direct_state_access extension:

void MyTexture::TexParameter(GLenum pname, GLint param)
{
    glTextureParameteri(textureID, pname, param);   //core DSA (GL 4.5); glTextureParameteriEXT additionally takes a target parameter
}

Texture upload and pixel reads

You create storage for a Texture and upload pixels to it with glTexImage2D (or similar functions, as appropriate to the type of texture). If your program crashes during the upload, or diagonal lines appear in the resulting image, this is because the alignment of each horizontal line of your pixel array is not a multiple of 4. This typically happens to users loading an image that is of the RGB or BGR format (for example, 24 BPP images), depending on the source of your image data.

For example, suppose your image width is 401 and height is 500. The height is irrelevant; what matters is the width. If we do the math, 401 pixels x 3 bytes = 1203, which is not divisible by 4. Some image file formats may inherently align each row to 4 bytes, but some do not. For those that don't, each row will start exactly 1203 bytes from the start of the last. OpenGL's row alignment can be changed to fit the row alignment for your image data. This is done by calling glPixelStorei(GL_UNPACK_ALIGNMENT, #), where # is the alignment you want. The default alignment is 4.

And if you are interested, most GPUs like chunks of 4 bytes. In other words, GL_RGBA or GL_BGRA is preferred when each component is a byte. GL_RGB and GL_BGR are considered bizarre since most GPUs, most CPUs and any other kind of chip don't handle 24 bits. This means the driver converts your GL_RGB or GL_BGR to what the GPU prefers, which typically is RGBA/BGRA.

Similarly, if you read a buffer with glReadPixels, you might get similar problems. There is a GL_PACK_ALIGNMENT just like the GL_UNPACK_ALIGNMENT. The default alignment is again 4 which means each horizontal line must be a multiple of 4 in size. If you read the buffer with a format such as GL_BGRA or GL_RGBA you won't have any problems since the line will always be a multiple of 4. If you read it in a format such as GL_BGR or GL_RGB then you risk running into this problem.

The GL_PACK/UNPACK_ALIGNMENTs can only be 1, 2, 4, or 8. So an alignment of 3 is not allowed. If your intention really is to work with packed RGB/BGR data, you should set the alignment to 1 (or preferably, consider switching to RGBA/BGRA.)
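For instance, a minimal sketch for working with tightly packed 24-bit RGB data (width, height and pixels are assumed to be defined as above):

glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  //rows of the source data are tightly packed; the default is 4
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);

glPixelStorei(GL_PACK_ALIGNMENT, 1);    //same idea when reading pixels back
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels);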

Image precision

You can (but it is not advisable to do so) call glTexImage2D(GL_TEXTURE_2D, 0, X, width, height, 0, format, type, pixels) and you set X to 1, 2, 3, or 4. The X refers to the number of components (GL_RED would be 1, GL_RG would be 2, GL_RGB would be 3, GL_RGBA would be 4).

It is preferred to actually give a real image format, one with a specific internal precision. If the OpenGL implementation does not support the particular format and precision you choose, the driver will internally convert it into something it does support.

OpenGL versions 3.x and above have a set of required image formats that all conforming implementations must implement.

Note: The creation of Immutable Storage Textures actively forbids the use of unsized image formats, as well as the plain integers described above.

We should also state that it is common to see the following on tutorial websites:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);

Although GL will accept GL_RGB, it is up to the driver to decide an appropriate precision. We recommend that you be specific and write GL_RGB8:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);

This means you want the driver to actually store it in the R8G8B8 format. We should also state that most GPUs will internally convert GL_RGB8 into GL_RGBA8. So it's probably best to steer clear of GL_RGB8. We should also state that on some platforms, such as Windows, GL_BGRA for the pixel upload format is preferred.

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);

This uses GL_RGBA8 for the internal format. GL_BGRA and GL_UNSIGNED_BYTE (or GL_UNSIGNED_INT_8_8_8_8) describe the data in the pixels array. The driver will likely not have to perform any CPU-based conversion and can DMA this data directly to the video card. Benchmarking shows that on Windows, with nVidia and ATI/AMD, this is the optimal format.

Preferred pixel transfer formats and types can be queried from the implementation.

Depth Buffer Precision

When you select a pixelformat for your window, and you ask for a Depth Buffer, the depth buffer is typically stored as a Normalized Integer with a bitdepth of 16, 24, or 32 bits.

Note: You can create images with true floating-point depth formats. But these can only be used with Framebuffer Objects, not the Default Framebuffer.

In OpenGL, all depth values lie in the range [0, 1]. The integer normalization process simply converts this floating-point range into integer values of the appropriate precision. It is the integer value that is stored in the depth buffer.

Typically, 24-bit depth buffers will pad each depth value out to 32-bits, so 8-bits per pixel will go unused. However, if you ask for an 8-bit Stencil Buffer along with the depth buffer, the two separate images will generally be combined into a single depth/stencil image. 24-bits will be used for depth, and the remaining 8-bits for stencil.

Now that the misconception about depth buffers being floating point is resolved, what is wrong with this call?

glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, mypixels);

Because the depth format is a normalized integer format, the driver will have to use the CPU to convert the normalized integer data into floating-point values. This is slow.

The preferred way to handle this is with this code:

  if(depth_buffer_precision == 16)
  {
    GLushort mypixels[width*height];
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, mypixels);
  }
  else if(depth_buffer_precision == 24)
  {
    GLuint mypixels[width*height];    //There is no 24 bit variable, so we'll have to settle for 32 bit
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT_24_8, mypixels);  //No upconversion.
  }
  else if(depth_buffer_precision == 32)
  {
    GLuint mypixels[width*height];
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, mypixels);
  }

If you have a depth/stencil format, you can get the depth/stencil data this way:

   GLuint mypixels[width*height];
   glReadPixels(0, 0, width, height, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, mypixels);

Creating a complete texture

What's wrong with this code?

glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);

The texture won't work because it is incomplete. The default GL_TEXTURE_MIN_FILTER state is GL_NEAREST_MIPMAP_LINEAR. And because OpenGL defines the default GL_TEXTURE_MAX_LEVEL to be 1000, OpenGL will expect there to be mipmap levels defined. Since you have only defined a single mipmap level, OpenGL will consider the texture incomplete until the GL_TEXTURE_MAX_LEVEL is properly set, or the GL_TEXTURE_MIN_FILTER parameter is set to not use mipmaps.

Better code would be to use texture storage functions (if you have OpenGL 4.2 or ARB_texture_storage) to allocate the texture's storage, then upload with glTexSubImage2D:

glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, width, height);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);

This creates a texture with a single mipmap level, and sets all of the parameters appropriately. If you wanted to have multiple mipmaps, then you should change the 1 to the number of mipmaps you want. You will also need separate glTexSubImage2D calls to upload each mipmap.

If that is unavailable, you can get a similar effect from this code:

glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);

Again, if you use more than one mipmap, you should change GL_TEXTURE_MAX_LEVEL to state how many you will use (minus 1; the base/max level range is closed), then perform a glTexImage2D call (note the lack of "Sub") for each mipmap.

Automatic mipmap generation

Mipmaps of a texture can be automatically generated with the glGenerateMipmap function. OpenGL 3.0 or greater is required for this function (or the extension GL_ARB_framebuffer_object). The function works quite simply; when you call it for a texture, mipmaps are generated for that texture:

glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexStorage2D(GL_TEXTURE_2D, num_mipmaps, GL_RGBA8, width, height);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D);  //Generate num_mipmaps number of mipmaps here.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);

If texture storage is not available, you can use the older API:

glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D);  //Generate mipmaps now!!!
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
Warning: It has been reported that on some ATI drivers, glGenerateMipmap(GL_TEXTURE_2D) has no effect unless you precede it with a call to glEnable(GL_TEXTURE_2D) in this particular case. Once again, to be clear, bind the texture, glEnable, then glGenerateMipmap. This is a bug and has been in the ATI drivers for a while. Perhaps by the time you read this, it will have been corrected. (glGenerateMipmap doesn't work on ATI as of 2011)

Legacy Generation

OpenGL 1.4 is required for support for automatic mipmap generation. GL_GENERATE_MIPMAP is part of the texture object state and it is a flag (GL_TRUE or GL_FALSE). If it is set to GL_TRUE, then whenever texture level 0 is updated, the mipmaps will all be regenerated.

   glGenTextures(1, &textureID);
   glBindTexture(GL_TEXTURE_2D, textureID);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); 
   glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE); 
   glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);

In GL 3.0, GL_GENERATE_MIPMAP is deprecated, and in 3.1 and above, it was removed. So for those versions, you must use glGenerateMipmap.

gluBuild2DMipmaps

Never use this. Use either GL_GENERATE_MIPMAP (requires GL 1.4) or the glGenerateMipmap function (requires GL 3.0).

Checking for OpenGL Errors

Why should you check for OpenGL errors?

glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); 
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);   //Requires GL 1.4. Removed from GL 3.1 and above.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);

This code doesn't check for OpenGL errors. If it did, the developer would find that this code throws a GL_INVALID_ENUM. The error is raised at glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR). The magnification filter can't specify the use of mipmaps; only the minification filter can do that.

There are two alternative methods for detecting and localizing OpenGL Errors:

  1. Using debug output callbacks, or
  2. Calling glGetError after every OpenGL function call (or group of function calls).

The former is much simpler. For details on both, see: OpenGL Error
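For example, a minimal sketch of the debug-output approach (requires a GL 4.3+ context or KHR_debug; the callback name here is arbitrary):

void APIENTRY MyDebugCallback(GLenum source, GLenum type, GLuint id, GLenum severity,
                              GLsizei length, const GLchar *message, const void *userParam)
{
  fprintf(stderr, "GL debug message: %s\n", message);
}

//After context creation:
glEnable(GL_DEBUG_OUTPUT);
glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);  //report errors on the call that caused them
glDebugMessageCallback(MyDebugCallback, NULL);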

Checking For Errors When You Compile Your Shader

Always check for errors when compiling/linking shader or program objects.
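A minimal sketch of such a check (shaderID is assumed to be a shader object you have just compiled):

glCompileShader(shaderID);

GLint status = GL_FALSE;
glGetShaderiv(shaderID, GL_COMPILE_STATUS, &status);
if(status == GL_FALSE)
{
  char log[1024];
  glGetShaderInfoLog(shaderID, sizeof(log), NULL, log);
  //Report the log somewhere useful.
}
//After glLinkProgram, do the same with glGetProgramiv(programID, GL_LINK_STATUS, ...) and glGetProgramInfoLog.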

Creating a Cubemap Texture

It's best to set the wrap mode to GL_CLAMP_TO_EDGE and not the other modes. Don't forget to define all 6 faces, or else the texture is considered incomplete. Don't forget to set up GL_TEXTURE_WRAP_R, because cubemaps require 3D texture coordinates.

Example:

glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_CUBE_MAP, textureID);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_BASE_LEVEL, 0); 
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAX_LEVEL, 0); 
//Define all 6 faces
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face0);
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_X, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face1);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Y, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face2);
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face3);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Z, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face4);
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face5);

When using glTexStorage2D instead of glTexImage2D, you should call glTexStorage2D once with the target GL_TEXTURE_CUBE_MAP, then make calls to glTexSubImage2D to upload data for each face.

If you want to auto-generate mipmaps, you can use any of the aforementioned mechanisms, using the target GL_TEXTURE_CUBE_MAP. OpenGL will not blend across cube faces when generating mipmaps for the cubemap, which leaves visible seams at lower mip levels, unless you enable seamless cubemap texturing.
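For example, enabling it (GL 3.2+ or ARB_seamless_cube_map) is a single call:

glEnable(GL_TEXTURE_CUBE_MAP_SEAMLESS);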

Texture edge color problem

Never use GL_CLAMP; what you intended was GL_CLAMP_TO_EDGE. Indeed, GL_CLAMP was removed from core GL 3.1+, so it's not even an option anymore.

Note: If you are curious as to what GL_CLAMP used to mean, it referred to blending texture edge texels with border texels. This is different from GL_CLAMP_TO_BORDER, where the clamping happens to a solid border color. The GL_CLAMP behavior was tied to special border texels. Effectively, each texture had a 1-pixel border. This was useful for having more easily seamless texturing, but it was never implemented in hardware directly. So it was removed.

Updating a texture

To change texels in an already existing 2d texture, use glTexSubImage2D:

glBindTexture(GL_TEXTURE_2D, textureID);    //A texture you have already created storage for
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);

glTexImage2D creates the storage for the texture, defining the size/format and removing all previous pixel data. glTexSubImage2D only modifies pixel data within the texture. It can be used to update all the texels, or simply a portion of them.

To copy texels from the framebuffer, use glCopyTexSubImage2D.

glBindTexture(GL_TEXTURE_2D, textureID);  //A texture you have already created storage for
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);  //Copy current read buffer to texture

Note that there is a glCopyTexImage2D function, which does the copy to fill the image, but also defines the image size, format and so forth, just like glTexImage2D.

Render To Texture

To render directly to a texture, without doing a copy as above, use Framebuffer Objects.

Warning: NVIDIA's OpenGL driver has a known issue with using incomplete textures. If the texture is not texture complete, the FBO itself will be considered GL_FRAMEBUFFER_UNSUPPORTED, or will have GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT. This is a driver bug, as the OpenGL specification does not allow implementations to return either of these values simply because a texture is not yet complete. Until this is resolved in NVIDIA's drivers, it is advised to make sure that all textures have mipmap levels, and that all glTexParameteri values are properly set up for the format of the texture. For example, integral textures are not complete if the mag and min filters have any LINEAR fields.

Depth Testing Doesn't Work

First, check to see if the Depth Test is active. Make sure that glEnable has been called and an appropriate glDepthFunc is active. Also make sure that the glDepthRange matches the depth function.

Assuming all of that has been set up correctly, your framebuffer may not have a depth buffer at all. This is easy to see for a Framebuffer Object you created. For the Default Framebuffer, this depends entirely on how you created your OpenGL Context.

For example, if you are using GLUT, you need to make sure you pass GLUT_DEPTH to the glutInitDisplayMode function.
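A minimal GLUT-based sketch of that setup:

glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);  //request a depth buffer for the default framebuffer
//...create the window, then during initialization:
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);   //the default depth function
//...and every frame:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);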

No Alpha in the Framebuffer

If you are doing Blending and you need a destination alpha, you need to make sure that your render target has one. This is easy to ensure when rendering to a Framebuffer Object. But with a Default Framebuffer, it depends on how you created your OpenGL Context.

For example, if you are using GLUT, you need to make sure you pass GLUT_ALPHA to the glutInitDisplayMode function.

glFinish and glFlush

Use glFlush if you are rendering to the front buffer of the Default Framebuffer. It is better to have a double buffered window but if you have a case where you want to render to the window directly, then go ahead.

There are a lot of tutorial website that suggest you do this:

glFlush();
SwapBuffers();

This is unnecessary. The SwapBuffers command takes care of flushing and command processing.

The glFlush and glFinish functions deal with synchronizing CPU actions with GPU commands: glFlush tells the driver to submit any buffered commands to the GPU immediately, while glFinish additionally blocks until the GPU has finished processing all of them, which can take a lot of time.

In many cases, explicit synchronization like this is unnecessary. The use of Sync Objects can make it necessary, as can the use of arbitrary reads/writes from/to images.

As such, you should only use glFinish when you are doing something that the specification specifically states will not be synchronous.

glDrawPixels

For good performance, use a format that is directly supported by the GPU. Use a format that causes the driver to basically to a memcpy to the GPU. Most graphics cards support GL_BGRA. Example:

glDrawPixels(width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);

However, it is recommened that you use a texture instead and just update the texture with glTexSubImage2D, possibly with a buffer object for async transfer.

GL_DOUBLE

Avoid glLoadMatrixd, glRotated and any other functions that take double-precision (GL_DOUBLE) arguments. Most GPUs don't support GL_DOUBLE, so the driver will convert the data to GL_FLOAT (float) before sending it to the GPU. If you put GL_DOUBLE data in a VBO, the performance might even be much worse than immediate mode (that is, glBegin, glVertex, glEnd). GL doesn't offer any better way to know what the GPU prefers.

Slow pixel transfer performance

To achieve good Pixel Transfer performance, you need to use a pixel transfer format that the implementation can directly work with. Consider this:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

The problem is that the pixel transfer format GL_RGBA may not be directly supported for GL_RGBA8 formats. On certain platforms, the GPU prefers that red and blue be swapped (GL_BGRA).

If you supply GL_RGBA, then the driver may have to do the swapping for you which is slow. If you do use GL_BGRA, the call to pixel transfer will be much faster.

Keep in mind that for the 3rd parameter, it must be kept as GL_RGBA8. This defines the texture's image format; the last three parameters describe how your pixel data is stored. The image format doesn't define the order stored by the texture, so the GPU is still allowed to store it internally as BGRA.

Note that GL_BGRA pixel transfer format is only preferred when uploading to GL_RGBA8 images. When dealing with other formats, like GL_RGBA16, GL_RGBA8UI or even GL_RGBA8_SNORM, then the regular GL_RGBA ordering may be preferred.

On which platforms is GL_BGRA preferred? Making a list would be too long, but one example is Microsoft Windows. Note that with GL 4.3 or ARB_internalformat_query2, you can simply ask the implementation for the preferred format with glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA8, GL_TEXTURE_IMAGE_FORMAT, 1, &preferred_format).
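For example, a minimal sketch of that query (GL 4.3+ or ARB_internalformat_query2):

GLint preferred_format = GL_NONE;
glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA8, GL_TEXTURE_IMAGE_FORMAT, 1, &preferred_format);
//preferred_format now holds e.g. GL_BGRA or GL_RGBA; GL_TEXTURE_IMAGE_TYPE can be queried the same way for the type.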

Swap Buffers

A modern OpenGL program should always use double buffering. A modern 3D OpenGL program should also have a depth buffer.

Render sequence should be like this:

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
RenderScene();
SwapBuffers(hdc);  //For Windows

The buffers should always be cleared. On much older hardware, there was a technique to get away without clearing the scene, but on even semi-recent hardware, this will actually make things slower. So always do the clear.

The Pixel Ownership Problem

If your window is covered, partially covered, or positioned outside the desktop area, the GPU might not render to those portions. Reading from those areas may likewise produce garbage data.

This is because those pixels fail the "pixel ownership test". Only pixels that pass this test have valid data. Those that fail have undefined contents.

If this is a problem for you (note: it's only a problem if you need to read data back from the covered areas), the solution is to render to a Framebuffer Object instead. If you need to display the image, you can blit from the FBO to the Default Framebuffer.
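A minimal sketch of the blit step (fbo, width and height are assumed names; the FBO itself must already be set up and rendered to):

glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);  //0 = default framebuffer
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);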

Selection and Picking and Feedback Mode

A modern OpenGL program should not use the selection buffer or feedback mode. These are not 3D graphics rendering features, yet they have been part of GL since version 1.0. Selection and feedback run in software (CPU side). On some implementations, when used along with VBOs, it has been reported that performance is lousy.

A modern OpenGL program should do color picking (render each object with some unique color and glReadPixels to find out what object your mouse was on) or do the picking with some 3rd party mathematics library.
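A minimal sketch of the color picking idea (mouseX, mouseY and windowHeight are assumed names; render one flat, unique color per object with lighting, blending, dithering and multisampling disabled):

unsigned char picked[4];
glReadPixels(mouseX, windowHeight - mouseY - 1, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, picked);
//Look up which object was assigned the color picked[0], picked[1], picked[2].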

Point and line smoothing

Users notice that on some implementations points or lines are rendered a little differently than on others. This is because the GL spec allows some flexibility. Consider this:

glPointSize(5.0);
glHint(GL_POINT_SMOOTH_HINT, GL_NICEST);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
glEnable(GL_POINT_SMOOTH);
RenderMyPoints();

On some hardware, the points will look nice and round; on others, they will look like squares.

On some implementations, when you call glEnable(GL_POINT_SMOOTH) or glEnable(GL_LINE_SMOOTH) and you use shaders at the same time, your rendering speed goes down to 0.1 FPS. This is because the driver does software rendering. This would happen on AMD/ATI GPUs/drivers.

glEnable(GL_POLYGON_SMOOTH)

This is not a recommended method for anti-aliasing. Use Multisampling instead.

Color Index, The imaging subset

Section 3.6.2 of the GL specification talks about the imaging subset. glColorTable and related operations are part of this subset. They are typically not supported by common GPUs and are software emulated. It is recommended that you avoid it.

If you find that your texture memory consumption is too high, use texture compression. If you really want to use paletted color indexed textures, you can implement this yourself using a texture and a shader.

Bitfield enumerators

Some OpenGL enumerators represent bits in a particular bitfield. All of these end in _BIT (before any extension suffix). Take a look at this example:

glEnable(GL_BLEND | GL_DRAW_BUFFER); // invalid
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT); // valid

The first line is wrong. Because neither of these enumerators ends in _BIT, they are not bitfields and thus should not be OR'd together.

By contrast, the second line is perfectly fine. All of these end in _BIT, so this makes sense.

Triple Buffering

You cannot control whether a driver does triple buffering. You could try to implement it yourself using a FBO. But if the driver is already doing triple buffering, your code will only turn it into quadruple buffering. Which is usually overkill.

Paletted textures

Support for the EXT_paletted_texture extension has been dropped by the major GL vendors. If you really need paletted textures on new hardware, you may use shaders to achieve that effect.

Shader example:

//Fragment shader
#version 110
uniform sampler2D ColorTable;     //256 x 1 pixels
uniform sampler2D MyIndexTexture;
varying vec2 TexCoord0;

void main()
{
  //What color do we want to index?
  vec4 myindex = texture2D(MyIndexTexture, TexCoord0);
  //Do a dependency texture read
  vec4 texel = texture2D(ColorTable, myindex.xy);
  gl_FragColor = texel;   //Output the color
}

ColorTable might be in a format of your choice such as GL_RGBA8. ColorTable could be a texture of 256 x 1 pixels in size.

MyIndexTexture can be in any format, though GL_R8 is quite appropriate (GL_R8 is available in GL 3.0). MyIndexTexture could be of any dimension such as 64 x 32.

We read MyIndexTexture and use the result as a texcoord to read ColorTable. If you wish to perform palette animation, or simply update the colors in the color table, you can submit new values to ColorTable with glTexSubImage2D. Assuming that the color table has a GL_RGBA8 internal format and is uploaded as GL_BGRA / GL_UNSIGNED_BYTE:

glBindTexture(GL_TEXTURE_2D, myColorTableID);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 256, 1, GL_BGRA, GL_UNSIGNED_BYTE, mypixels);

== Texture Unit ==

When multitexturing was introduced, a query for the number of texture units was introduced as well:

int MaxTextureUnits;
glGetIntegerv(GL_MAX_TEXTURE_UNITS, &MaxTextureUnits);

You should not use the above query: it reports only the number of fixed-function texture units, which is a small number even on modern GPUs.

In old OpenGL, each texture unit has its own texture environment state (glTexEnv), texture matrix, texture coordinate generation (glTexGen), texcoords (glTexCoord), clamp mode, mipmap mode, texture LOD, anisotropy.

Then came the programmable GPU. There aren't fixed-function texture units anymore; today you have texture image units (TIUs), whose count you can query with:

int MaxTextureImageUnits;
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &MaxTextureImageUnits);

A texture image unit is simply a binding point for a texture object, which carries its own state (clamping, mipmaps, and so on). Image units are independent of texture coordinates: a shader can sample any image unit with any texture coordinate.

Note that each shader stage has its own maximum texture image unit count; GL_MAX_TEXTURE_IMAGE_UNITS returns the count for fragment shaders only. The total number of image units across all shader stages is queried with GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS; this is the limit on the number of textures that can be bound at any one time, and also the limit on the image unit index passed to functions like glActiveTexture and glBindSampler.

For most modern hardware, the image unit count will be at least 8 for most stages. Vertex shaders used to be limited to 4 textures on older hardware. All 3.x-capable hardware will return at least 16 for each stage.
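
For example, the per-stage and combined limits can be queried like this (GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS and GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS require GL 2.0):

GLint maxFragmentUnits, maxVertexUnits, maxCombinedUnits;
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &maxFragmentUnits);          // fragment stage
glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &maxVertexUnits);     // vertex stage
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &maxCombinedUnits); // all stages combined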

In summary, shader-based GL 2.0 and above programs should use GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS and the per-stage *_TEXTURE_IMAGE_UNITS limits, not GL_MAX_TEXTURE_UNITS. The fixed-function number of texture coordinates should likewise be ignored; use generic vertex attributes instead.

== Disable depth test and allow depth writes ==

In some cases, you might want to disable depth testing and still have the depth buffer updated while you render your objects. It turns out that if you disable depth testing with glDisable(GL_DEPTH_TEST), GL also disables writes to the depth buffer. The correct solution is to keep the test enabled but tell GL to always pass it with glDepthFunc(GL_ALWAYS). Be careful: in this state, if you render a far-away object last, the depth buffer will contain the values of that far object.
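
A minimal sketch of that state:

glEnable(GL_DEPTH_TEST);   // the test stays enabled...
glDepthFunc(GL_ALWAYS);    // ...but always passes, so no fragment is rejected
glDepthMask(GL_TRUE);      // depth writes remain on (this is the default)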

== glGetFloatv glGetBooleanv glGetDoublev glGetIntegerv ==

You find that these functions are slow.

That's normal. Any function of the glGet* form is likely to be slow. NVIDIA and AMD/ATI recommend that you avoid them; the GL driver (and the GPU) prefer to receive information rather than be asked for it. You can avoid almost all glGet calls by tracking the information yourself.
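
A sketch of application-side state tracking; the cached variable and wrapper function are inventions of this example:

static GLboolean blendEnabled = GL_FALSE;   // cached copy of the GL state

void SetBlendEnabled(GLboolean enable)
{
    if (enable == blendEnabled)
        return;                             // already set; no GL call at all
    if (enable)
        glEnable(GL_BLEND);
    else
        glDisable(GL_BLEND);
    blendEnabled = enable;                  // keep the cache in sync
}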

== y-axis ==

Almost everything in OpenGL uses a coordinate system in which X goes right and Y goes up. This includes the pixel transfer functions and texture coordinates.

For example, glReadPixels takes an x and y position where y is measured from the bottom of the window (0) upward. This may seem counter-intuitive to those used to window systems where the y-axis is inverted (window and mouse coordinates usually run from top to bottom). The fix for the mouse is simple: windowHeight - mouseY.

For textures, GL considers the y-axis (the t coordinate) to run from bottom (0.0) to top (1.0). Some people load their bitmap into a GL texture and wonder why it appears upside down on their model. The solution is simple: flip the bitmap vertically before uploading it, or invert your model's texcoords with 1.0 - v.
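
A minimal sketch of flipping an image vertically before upload, assuming width, height, and pixels (tightly packed, 4 bytes per texel) describe the loaded bitmap:

// Swap row y with its mirror row from the other end of the image.
for (int y = 0; y < height / 2; ++y)
{
    unsigned char *rowTop    = pixels + y * width * 4;
    unsigned char *rowBottom = pixels + (height - 1 - y) * width * 4;
    for (int x = 0; x < width * 4; ++x)
    {
        unsigned char tmp = rowTop[x];
        rowTop[x]    = rowBottom[x];
        rowBottom[x] = tmp;
    }
}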

== glGenTextures in render function ==

It seems that some people create a texture in their render function. Don't create resources in your render function; that goes for all the other glGen* calls as well. Don't read model files and build VBOs from them in your render function either. Allocate resources once, at the beginning of your program, and release them when your program terminates.

Worse yet, some create textures (or any other GL object) in their render function and never call glDeleteTextures. Every time their render function gets called, a new texture is created without releasing the old one!
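
A sketch of the intended resource lifetime; Init, Render, and Shutdown are hypothetical application functions:

GLuint myTexture = 0;

void Init()
{
    glGenTextures(1, &myTexture);
    glBindTexture(GL_TEXTURE_2D, myTexture);
    // ... upload the texel data once with glTexImage2D ...
}

void Render()
{
    glBindTexture(GL_TEXTURE_2D, myTexture);   // just bind, don't recreate
    // ... draw ...
}

void Shutdown()
{
    glDeleteTextures(1, &myTexture);
}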

== Bad znear value ==

Some users use gluPerspective or glFrustum and pass it a znear value of 0.0. They quickly find that z-buffering doesn't work.

You can't use a znear value of 0.0 or less. If you use 0.0, the element in the 3rd row, 4th column of the projection matrix ends up being 0.0, which breaks the depth computation entirely. If you use a negative value, you get wrong rendering results on screen.

Both znear and zfar need to be above 0.0. gluPerspective will not raise a GL error if you violate this, but glFrustum will generate GL_INVALID_VALUE.

As for glOrtho, yes you can use negative values for znear and zfar.
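For example (the window dimensions and the 0.1 / 100.0 distances are placeholders):

gluPerspective(45.0, (double)windowWidth / (double)windowHeight, 0.1, 100.0);  // znear must be > 0
glOrtho(0.0, windowWidth, 0.0, windowHeight, -1.0, 1.0);                       // negative znear/zfar is fine here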

The vertex transformation pipeline explains how vertices are transformed.

== Bad Array Size ==

We are going to give this example with GL 1.1, but the same principle applies if you are using VBOs or any other feature from later versions of OpenGL.

What's wrong with this code?

GLfloat vertex[] = {0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0};
GLfloat normal[] = {0.0, 0.0, 1.0};
GLfloat color[] = {1.0, 0.7, 1.0, 1.0};
GLushort index[] = {0, 1, 2, 3};
glVertexPointer(3, GL_FLOAT, sizeof(GLfloat)*3, vertex);
glNormalPointer(GL_FLOAT, sizeof(GLfloat)*3, normal);
glColorPointer(4, GL_FLOAT, sizeof(GLfloat)*4, color);
glDrawElements(GL_QUADS, 4, GL_UNSIGNED_SHORT, index);

The intent is to render a single quad, but the array sizes don't match up: there is only 1 normal for the quad while GL wants 1 normal per vertex, and only one RGBA color while GL wants one color per vertex. You risk crashing your system because the GL driver will read past the end of the normal and color arrays you supplied.
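
A corrected sketch, with one normal and one color per vertex (glEnableClientState for each array is assumed to have been called earlier):

GLfloat vertex[] = {0.0, 0.0, 0.0,  1.0, 0.0, 0.0,  1.0, 1.0, 0.0,  0.0, 1.0, 0.0};
GLfloat normal[] = {0.0, 0.0, 1.0,  0.0, 0.0, 1.0,  0.0, 0.0, 1.0,  0.0, 0.0, 1.0};
GLfloat color[]  = {1.0, 0.7, 1.0, 1.0,  1.0, 0.7, 1.0, 1.0,  1.0, 0.7, 1.0, 1.0,  1.0, 0.7, 1.0, 1.0};
GLushort index[] = {0, 1, 2, 3};
glVertexPointer(3, GL_FLOAT, sizeof(GLfloat)*3, vertex);
glNormalPointer(GL_FLOAT, sizeof(GLfloat)*3, normal);
glColorPointer(4, GL_FLOAT, sizeof(GLfloat)*4, color);
glDrawElements(GL_QUADS, 4, GL_UNSIGNED_SHORT, index);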

This issue is also explained in the FAQ.