
Welcome to the FAQ

What is OpenGL?

OpenGL stands for Open Graphics Library. It is an API for rendering 3D graphics.

The OpenGL API is used to set up GPU state, transfer data to and from the GPU, and configure programs for the programmable stages of the rendering pipeline. Creating every kind of OpenGL object is also part of the API. Programs executed on the GPU, called shaders, are written in the OpenGL Shading Language; in a shader you state the commands that will be executed on the data passed to the GPU. Together, the OpenGL API and the OpenGL Shading Language allow for efficient use of the GPU hardware.

The OpenGL Shading Language has its own specification, which is improved in parallel with the OpenGL specification.

What is NOT OpenGL?

The OpenGL API is not meant to handle anything outside of rendering. OpenGL requires third-party libraries for creating an OpenGL context and initializing an OS-specific window. Because OpenGL focuses on low-level rendering, it does not provide functionality for animation, timers, I/O handling, windowing, etc.

Who maintains OpenGL?

The OpenGL Architecture Review Board, or ARB.

Is OpenGL Open Source?

No. OpenGL doesn't have any source code; it is a specification, which can be found on this website. The specification describes the interface the programmer uses and the expected behavior. OpenGL is an open specification: anyone can download it for free. This is as opposed to ISO standards and specifications, which cost money to access.

There is an open-source implementation of GL called Mesa3D (http://www.mesa3d.org). It announces itself as OpenGL 2.1 compliant.

Where can I download?

Just like the "Open Source?" section explains, OpenGL is not a software product; it is a specification.

On Mac OS X, Apple's OpenGL implementation is included.

On Windows, companies like nVidia and AMD/ATI use the spec to write their own implementations, so OpenGL is included in the drivers they supply. Laptop owners, however, will need to download the drivers from the manufacturer of their laptop.

Where can I download? #2

Updating your video driver is good enough for people who want to play games or run applications. For programmers, however, installing drivers will not give you a gl.h file or an opengl32.lib; those files come with your compiler (on Windows, your compiler might need opengl32.lib or perhaps opengl32.a). There are no updated gl.h and opengl32.lib files, either: these are stuck at GL 1.1 and will be forever. Read the Getting Started section to learn what you must do: http://www.opengl.org/wiki/Getting_started

Also, installing a video driver will not replace opengl32.dll. It is a system file that belongs to Windows; only Microsoft may update it. When you install a video driver, another file is copied to your system (nvoglv32.dll in the case of nVidia) and the registry is modified. opengl32.dll then calls into the real GL driver (nvoglv32.dll).

Is there an OpenGL SDK?

There is no actual OpenGL SDK. There is a collection of websites, some (outdated) documentation, and links to tutorials, all found here. But it is not an SDK of the kind you are thinking about.

NVIDIA and ATI have their own SDKs, both of which have various example code for OpenGL.

What platforms have GL?

  • Windows: 95 and above
  • Mac OSX: all versions
  • Linux: this depends on the distribution. Distros meant for desktop usage come with Gnome, KDE or some window manager, and OpenGL is either supplied as Mesa (a software rasterizer) or through proper vendor drivers.
  • FreeBSD: unknown

OpenGL ES is often supported on embedded systems, but OpenGL ES is a different API from regular OpenGL.

What is an OpenGL context?

And why do you need a window to do GL rendering?

The GL context comprises resources: driver resources in RAM, assigned texture IDs, assigned VBO IDs, enabled states (GL_BLEND, GL_DEPTH_TEST), and many other things. Think of the GL context as memory allocated by the driver to store information about the state of your GL program.

You must create a GL context in order for your GL function calls to make sense. You can't just write a minimal program such as this

 int main(int argc, char **argv)
 {
   char *GL_version = (char *)glGetString(GL_VERSION);
   char *GL_vendor = (char *)glGetString(GL_VENDOR);
   char *GL_renderer = (char *)glGetString(GL_RENDERER);
   return 0;
 }

In the above, the programmer simply wants to get information about the system (he doesn't want to render anything), but it won't work, because no communication has been established with the GL driver. The GL driver also needs to allocate resources for the window, such as a backbuffer. Based on the pixelformat you have chosen, there can be a color buffer with some format such as BGRA8. There may or may not be a depth buffer; the depth might contain 24 bits. There might be an 8-bit stencil buffer. There might be an accumulation buffer. Perhaps the pixelformat you have chosen can do multisampling. Up until now, no one has introduced a windowless context.

You must create a window. You must select a pixelformat. You must create a GL context. You must make the GL context current (wglMakeCurrent for Windows and glXMakeCurrent for *nix).

Some people want to do offscreen rendering without showing a window to the user. The only solution is to create a window and make it invisible, select a pixelformat, create a GL context, and make the context current. Now you can make GL function calls. You should make an FBO and render to that; if you choose not to create an FBO and prefer to use the backbuffer, there is a risk that it won't work.

How Does It Work On Windows?

All Windows versions support OpenGL.

When you compile an application, you link with opengl32.lib (even on Win64).

When you run your program, opengl32.dll gets loaded, and it checks the Windows registry for a true GL driver. If there is one, it loads it. For example, ATI's GL driver name starts with atioglxx.dll and nVidia's GL driver is nvoglv32.dll. The actual file names change between driver releases.

opengl32.dll is limited to GL 1.1. For GL 1.2 and above, you get a function pointer with wglGetProcAddress. Examples are glActiveTexture, glBindBuffer, and glVertexAttribPointer. In these cases, wglGetProcAddress returns an address from the real driver.

The only important thing to know is that opengl32.dll belongs to Microsoft. No one can modify it. You must not replace it. You must not ship your application with this file. You must not ship nvoglv32.dll or any other system file either.

It is the responsibility of the user to install the driver made available by Dell, HP, nVidia, ATI/AMD, Intel, SiS, and so on. Feel free to remind them to do so.

How do I tell what version of OpenGL I'm using?

Use the function glGetString, with GL_VERSION passed as argument. This will return a null-terminated string. Be careful when copying this string into a fixed-length buffer, as it can be fairly long. Alternatively, you can use glGetIntegerv(GL_MAJOR_VERSION, *) and glGetIntegerv(GL_MINOR_VERSION, *). These require GL 3.0 or greater.

Why is my GL version only 1.4 or lower?

There are two reasons you may get an unexpectedly low OpenGL version.

On Windows, you may get a low GL version if, during context creation, you use an unaccelerated pixel format. This means you get the default implementation of OpenGL. Depending on whether you are using Windows Vista or an earlier version of Windows, this may mean you get a software GL 1.1 implementation or a hardware GL 1.5 implementation.

The solution to this is to be more careful in your pixel format selection.

The other reason is that the makers of your video card (and therefore the makers of your video drivers) do not provide an up-to-date OpenGL implementation. There are a number of defunct graphics card vendors out there. However, of the non-defunct ones, this is most likely to happen with Intel's integrated GPUs.

Intel does not provide a proper, up-to-date OpenGL implementation for their integrated GPUs. There is nothing that can be done about this. NVIDIA and ATI provide good support for their integrated GPUs.

Are glTranslate/glRotate/glScale hardware accelerated?

No; there are no known GPUs that execute this. The driver computes the matrix on the CPU and uploads it to the GPU. All the other matrix operations are done on the CPU as well: glPushMatrix, glPopMatrix, glLoadIdentity, glFrustum, glOrtho. This is the reason these functions are considered deprecated in GL 3.0. You should have your own math library, build your own matrices, and upload them to the shader yourself. There are libraries you can use for this.

Fixed function and modern GPUs

Modern GPUs no longer support fixed function: everything is done with shaders. In order to preserve compatibility, the GL driver generates a shader that simulates the fixed function. It is recommended that all new programs use shaders. New users need not learn the fixed-function operations of GL such as glLight, glMaterial, glTexEnv and many others.

How to render in pixel space

Set up the projection matrix:

 glOrtho(0.0, WindowWidth, 0.0, WindowHeight, -1.0, 1.0);
 //Setup modelview to identity if you don't need GL to move around objects for you

Notice that the y axis goes from bottom to top because of the glOrtho call. You can swap the bottom and top parameters if you want y to go from top to bottom instead. Make sure you render your polygons in the right order so that GL doesn't cull them, or just call glDisable(GL_CULL_FACE).

Multi indexed rendering

What this means is that each vertex attribute (position, normal, etc.) has its own index array. OpenGL (and Direct3D, for that matter) does not support this.

It is up to you the user to adjust your data format so that there is only one index array, which samples from multiple attribute arrays. To do this, you will need to duplicate some attribute data so that all of the attribute lists are the same size.

Quite often, this question is asked by those wanting to use the OBJ file format:

 v 1.52284 39.3701 1.01523
 v 36.7365 17.6068 1.01523
 v 12.4045 17.6068 -32.475
 and so on ...
 vn 0.137265 0.985501 -0.0997287
 vn 0.894427 0.447214 -8.16501e-08
 vn 0.276393 0.447214 -0.850651
 and so on ...
 vt 0.6 1
 vt 0.5 0.647584
 vt 0.7 0.647584
 and so on ...
 f 102/102/102 84/84/84 158/158/158
 f 158/158/158 84/84/84 83/83/83
 f 158/158/158 83/83/83 159/159/159
 and so on ...

The lines that start with f are the faces. As you can see, each face corner has 3 indices: one for the position, one for the texcoord, and one for the normal. In the example above, the three indices of each corner happen to be identical, but you will also encounter cases where they are not; you have to expand such cases. Example:

 f 1/1/1 2/2/2 3/2/2
 f 5/5/5 6/6/6 3/4/5

So the corners 3/2/2 and 3/4/5 are considered entirely different vertices, even though they both reference position 3.

You will need to do post-processing on OBJ files before you can use them.

glClear and glScissor

glScissor is one of the few functions that affect how glClear operates. If you want to clear only a region of the back buffer, call glScissor and also glEnable(GL_SCISSOR_TEST).

Conversely, if you have used the scissor test and forgot to call glDisable(GL_SCISSOR_TEST), you might wonder why glClear isn't working the way you want it to.


Pay attention to glColorMask, glStencilMask and glDepthMask as well. For example, if you disable depth writes by calling glDepthMask(GL_FALSE), then glClear will not clear the depth buffer.

glGetError (or "How do I check for GL errors?")

OpenGL keeps a set of error flags, and each call to glGetError() tests and clears one of those flags. When there are no more error flags set, then glGetError() returns GL_NO_ERROR. So use a little helper function like this to check for GL errors:

  #include <stdio.h>
  #include <GL/gl.h>
  #include <GL/glu.h>

  int checkForGLErrors( const char *s )
  {
    int errors = 0 ;
    int counter = 0 ;

    while ( counter < 1000 )
    {
      GLenum x = glGetError() ;

      if ( x == GL_NO_ERROR )
        return errors ;

      fprintf( stderr, "%s: OpenGL error: %s [%08x]\n", s ? s : "", gluErrorString( x ), x ) ;
      errors++ ;
      counter++ ;
    }

    return errors ;
  }
If there is no GL context, glGetError() would return an error code each time it is called, since it is an error to call glGetError when there is no GL context; the loop would never terminate. That is the reason for the counter < 1000 limit.

What 3D file format should I use?

Newcomers often wonder what 3D file format to use for their project's indices, vertices, texcoords, and texture names.

GL doesn't offer any 3D file format because GL is just a low-level rendering library. You either have to use someone else's library or write your own code, and you have to decide whether to use an existing file format or create your own. Newcomers don't want to reinvent the wheel, but the fact is that in the games industry it is very common to reinvent the wheel when it comes to 3D file formats.

If you want to use an existing format, the OBJ format is very popular because it is plain ASCII text, but it is very old and very limited.

The 3ds format is also popular, and there is even an open-source library for it called lib3ds. The format is old and limited, and there is no official documentation from the company that created it.

DirectX has the x file format. It supports simple meshes and keyframes and multiple vertex attributes.

Some people use md2 (from Quake 2), md3 (from Quake 3), BSP, POD, RAW, LWO, Milkshape, or ASE. Some of these belong to the company that invented them, and you are not supposed to use them.

There is COLLADA, which uses an XML-based format and has become popular with content creators. It can be read and exported by several 3D editors (for example, Blender).

There are many other formats not mentioned here. They are described at http://www.wotsit.org

Memory Usage

It seems to be common to think that there is a memory leak in the OpenGL driver. Some users write a simple test program and observe that their memory usage goes up each time their Display function is called. That is normal: the driver might allocate some memory space, and since the driver is basically a black box, we don't know what it is doing. It might be doing some optimization work in a secondary thread or preparing some buffering area. We don't know exactly what it is doing, but there is no memory leak.

Some users call glDeleteTextures or glDeleteLists or one of the other delete functions and they notice that memory usage doesn't go down. You can't do anything about it. The driver does its own memory management and it might choose not to deallocate for the time being. Therefore, this is not a memory leak either.

Memory Management

Who manages memory? How does OpenGL manage memory?

Graphics cards have limited memory. If you exceed it by allocating many buffer objects, textures, and other GL resources, the driver can store some of them in system RAM. As you use those resources, the driver swaps them in and out of VRAM as needed; of course, this slows down rendering. The amount of system RAM available to the driver is also limited, and it might report GL_OUT_OF_MEMORY when you call glGetError(). It might even report GL_OUT_OF_MEMORY when you have plenty of VRAM and RAM available but try to allocate a really large buffer object that the driver doesn't like.

The purpose of this section is to answer those who want to know what happens when they allocate resources and the video card runs out of VRAM. This behavior is not documented in the GL specification because it doesn't concern itself with system resources and system design. System design can differ and GL tries to remain system neutral. Some systems don't have a video card. Some systems have an integrated CPU/GPU with shared RAM.

Display List or VA or VBO

Display lists and VAs (vertex arrays) have been with GL since the beginning; VBOs were introduced with GL 1.5. Newcomers would like to know which to use, since GL is a complicated API with multiple ways to do the same thing.

Display lists are great for static data. The driver probably stores them in video memory, and they certainly perform well; the driver optimizes whatever vertex/normal/texcoord format you throw at it. On the other hand, VBOs were introduced in GL 1.5 for a reason, and display lists are marked as deprecated in the GL 3.0 specification. In the end, it is up to you whether to use them in a modern program.

Yes, you can store additional function calls such as glBindTexture in a display list, but it is unknown how and where such commands are stored; that is implementation specific, and you won't find any documents from nVidia or ATI/AMD on how their drivers operate. Geometry data is probably stored in a VBO these days. The original intention of display lists was to be a macro: the driver just piles up commands in RAM, and when you call glCallList, it sends the whole block of commands, playing it back somewhat like a cassette tape.

Vertex arrays are stored in system RAM, so the driver has to send the data to video memory before processing it. The good news is that vertex arrays can hold dynamic data. On the other hand, why would you use them when you have VBOs?

GL_ARB_vertex_buffer_object was first introduced during the GL 1.4 era. http://www.opengl.org/registry/specs/ARB/vertex_buffer_object.txt

The GL community calls it VBO. It went into the core in GL 1.5. With the GL_ARRAY_BUFFER target you store your vertex attributes, and with the GL_ELEMENT_ARRAY_BUFFER target you store your indices. Using VBOs is probably the best choice. You are of course free to benchmark and choose whatever is optimal for the GPU in question.

Feel free to check out http://www.opengl.org/wiki/General_OpenGL and read the many articles on VBOs. Some parts of this Wiki refer to them as VBO/IBO (vertex buffer object and index buffer object).

So what if you were to put a VA into a display list? Yes, you can do

 glNewList(DisplayListID, GL_COMPILE);
 //glDrawRangeElements or glDrawElements calls go here
 glEndList();

and the driver will probably generate a VBO for your display list. Note that once you call glEndList, changing your vertices will not change the vertices already stored on GL's side.

So what if you were to put a VBO/IBO into a display list? Benchmark it and see if your GPU likes it; maybe you'll get a performance boost, maybe you won't. Again, once you call glEndList, changing your vertices will not change the vertices already stored on GL's side.

In summary, use VBO/IBO. Read the GL specification. Read some tutorials. Read the fine documents here on the Wiki.