FAQ

From OpenGL Wiki
Revision as of 17:18, 11 March 2011 by V-man (talk | contribs) (Triangles or Quads)

Welcome to the FAQ

What is OpenGL?

OpenGL stands for Open Graphics Library. It is an API for doing 3D graphics.

In more specific terms, it is an API that is used to "draw triangles on your screen". In this age of GPUs, it is about talking to the GPU so that it does the job of drawing. It does not deal with file formats: it does not open bmp, png, or any other image format, and it does not open 3D object formats like obj, max, or maya. It does not do animation. It does not handle the keyboard, mouse, or any other input device. It does not create a window, and so on.

All that stuff should be handled by an external library (GLUT is one example that is used for creating and destroying a window and handling mouse and keyboard).

GL has gone through a number of versions.

Who maintains it?

The OpenGL Architecture Review Board, or ARB (now part of the Khronos Group).

Open Source?

No, OpenGL doesn't have any source code. GL is a specification, which can be found on this website. It describes the interface the programmer uses and the behavior an implementation is expected to have. OpenGL is an open specification: anyone can download the spec for free. This is as opposed to ISO standards and specifications, which cost money to access.

There is an Open Source implementation of GL called Mesa3D: http://www.mesa3d.org

It is not licensed to call itself OpenGL, but it follows the spec very well.

Where can I download?

Just as the "Open Source?" section explains, OpenGL is not a software product; it is a specification.

On Mac OS X, Apple's OpenGL implementation is included.

On Windows, companies like nVidia and AMD/ATI use the spec to write their own implementation, so OpenGL is included in the drivers that they supply. For laptop owners, however, you'll need to visit the manufacturer of your laptop and download the drivers from them.

Where can I download? #2

Updating your video driver is good enough for people who want to play games or run applications. For programmers, however, installing drivers will not give you a gl.h file, and it will not give you opengl32.lib. Those are files that come with your compiler (on Windows, your compiler might need opengl32.lib or perhaps opengl32.a). There are no updated gl.h and opengl32.lib files; these are stuck at GL 1.1 and will be forever. Read the Getting Started section to learn what you must do: http://www.opengl.org/wiki/Getting_started

Also, installing a video driver will not replace opengl32.dll. It is a system file and belongs to Windows; only Microsoft may update it. When you install a video driver, another file is copied to your system (nvoglv32.dll in the case of nVidia) and the registry is modified. opengl32.dll then calls into the real GL driver (nvoglv32.dll).

SDK

There is no actual OpenGL SDK. There is a collection of websites, some (outdated) documentation, and links to tutorials, all collected on this site. But it is not an SDK of the kind you are thinking of.

NVIDIA and ATI have their own SDKs, both of which have various example code for OpenGL.

What platforms have GL?

  • Windows: 95 and above
  • Mac OSX: all versions
  • Linux: this depends on the distribution. Distros meant for desktop usage come with Gnome, KDE, or some other window manager, and OpenGL is either supplied as Mesa (a software rasterizer) or proper drivers are provided.
  • FreeBSD: unknown

OpenGL ES is often supported on embedded systems, but OpenGL ES is a different API from regular OpenGL.

How Does It Work On Windows?

All Windows versions support OpenGL.

When you compile an application, you link with opengl32.dll (even on Win64).

When you run your program, opengl32.dll gets loaded, and it checks the Windows registry to see if there is a true GL driver. If there is, it loads it. For example, ATI's GL driver name starts with atioglxx.dll, and nVidia's GL driver is nvoglv32.dll. The actual names may change between driver releases.

opengl32.dll is limited to 1.1. For GL >=1.2 functions, you get a function pointer with wglGetProcAddress. Examples are glActiveTexture, glBindBuffer, glVertexAttribPointer. wglGetProcAddress returns an address from the real driver in these cases.

The only important thing to know is that opengl32.dll belongs to Microsoft. No one can modify it. You must not replace it. You must not ship your application with this file. You must not ship nvoglv32.dll or any other system file either.

It is the responsibility of the user to install the driver made available by Dell, HP, nVidia, ATI/AMD, Intel, SiS, or whoever makes their hardware. Though feel free to remind them to do so.

How do I tell what version of OpenGL I'm using?

Use the function glGetString(GL_VERSION). This will return a null-terminated string. Do not copy this string into a fixed-length buffer, as it can be fairly long.

The string has a specific format:

 <major version>.<minor version>

Following the minor version can be another '.' with a vendor-specific build number. Following that is entirely vendor-specific information. The format of this information is up to the driver writers.

Alternatively, you can use glGetIntegerv(GL_MAJOR_VERSION, *) and glGetIntegerv(GL_MINOR_VERSION, *). These require GL 3.0 or greater, however.

Why is my GL version only 1.4 or lower?

There are two reasons you may get an unexpectedly low OpenGL version.

On Windows, you may get a low GL version if, during context creation, you use an unaccelerated pixel format. This means you get the default implementation of OpenGL. Depending on whether you are using Windows Vista or an earlier version of Windows, this may mean you get a software GL 1.1 implementation or a hardware GL 1.5 implementation.

The solution to this is to be more careful in your pixel format selection.

The other reason is that the makers of your video card (and therefore the makers of your video drivers) do not provide an up-to-date OpenGL implementation. There are a number of defunct graphics card vendors out there. However, of the non-defunct ones, this is most likely to happen with Intel's integrated GPUs.

Intel does not provide a proper, up-to-date OpenGL implementation for their integrated GPUs. There is nothing that can be done about this. NVIDIA and ATI provide good support for their integrated GPUs.

glTranslate, glRotate, glScale

Are these hardware accelerated?

No, there are no known GPUs that execute this. The driver computes the matrix on the CPU and uploads it to the GPU.

All the other matrix operations are done on the CPU as well: glPushMatrix, glPopMatrix, glLoadIdentity, glFrustum, glOrtho.

This is the reason why these functions were deprecated in GL 3.0. You should have your own math library, build your own matrices, and upload them to the shader.
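As a minimal sketch of the "build your own matrix" approach (the function name make_translation is ours), this constructs the column-major 4x4 translation matrix that glTranslate would otherwise compute on the CPU:

```c
#include <string.h>

/* Hypothetical helper: build a column-major 4x4 translation matrix,
   the memory layout OpenGL expects. This is equivalent to what the
   driver computes on the CPU for glTranslatef(x, y, z). */
void make_translation(float m[16], float x, float y, float z)
{
    memset(m, 0, 16 * sizeof(float));
    m[0] = m[5] = m[10] = m[15] = 1.0f; /* identity diagonal */
    m[12] = x;  /* column 3 holds the translation */
    m[13] = y;
    m[14] = z;
}
```

In a GL 3.0+ program you would then upload it with glUniformMatrix4fv(location, 1, GL_FALSE, m) and multiply it in your vertex shader.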

A list of various libraries is at Alternative Game Libraries.

Fixed function and modern GPUs

Modern GPUs no longer support fixed function; everything is done with shaders. In order to preserve compatibility, the GL driver generates a shader that simulates the fixed function. It is recommended that all new programs use shaders. New users need not learn fixed-function operations of GL such as glLight, glMaterial, glTexEnv, and many others.

How to render in pixel space

Set up a suitable projection matrix:

 glMatrixMode(GL_PROJECTION);
 glLoadIdentity();
 glOrtho(0.0, WindowWidth, 0.0, WindowHeight, -1.0, 1.0);
 //Setup modelview to identity if you don't need GL to move around objects for you
 glMatrixMode(GL_MODELVIEW);
 glLoadIdentity();

Notice that the y axis goes from bottom to top because of the glOrtho call. You can swap the bottom and top parameters if you want y to go from top to bottom, but make sure you render your polygons in the right order so that GL doesn't cull them, or just call glDisable(GL_CULL_FACE).

Multi indexed rendering

What this means is that each vertex attribute (position, normal, etc.) has its own index array. OpenGL (and Direct3D, for that matter) does not support this.

It is up to you, the user, to adjust your data format so that there is only one index array, which samples from multiple attribute arrays. To do this, you will need to duplicate some attribute data so that all of the attribute lists are the same size.

Quite often, this question is asked by those wanting to use the OBJ file format:

 v 1.52284 39.3701 1.01523
 v 36.7365 17.6068 1.01523
 v 12.4045 17.6068 -32.475
 and so on ...
 vn 0.137265 0.985501 -0.0997287
 vn 0.894427 0.447214 -8.16501e-08
 vn 0.276393 0.447214 -0.850651
 and so on ...
 vt 0.6 1
 vt 0.5 0.647584
 vt 0.7 0.647584
 and so on ...
 f 102/102/102 84/84/84 158/158/158
 f 158/158/158 84/84/84 83/83/83
 f 158/158/158 83/83/83 159/159/159
 and so on ...

The lines that start with an f are the faces. As you can see, each face vertex has 3 indices: one for the position, one for the texcoord, and one for the normal.

You will need to do post-processing on OBJ files before you can use them.
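The post-processing step can be sketched like this (a hedged example with our own names; real code would also deduplicate identical index triplets instead of blindly expanding everything):

```c
#include <stddef.h>

/* Hypothetical de-indexing pass: given per-face-vertex index triplets
   (position, texcoord, normal), build flat attribute arrays that all
   share one implicit index. Attribute data is duplicated as needed. */
typedef struct { int pos, tex, nrm; } FaceVertex;

void deindex(const FaceVertex *face_verts, size_t count,
             const float *positions,  /* 3 floats per entry */
             const float *texcoords,  /* 2 floats per entry */
             const float *normals,    /* 3 floats per entry */
             float *out_pos, float *out_tex, float *out_nrm)
{
    for (size_t i = 0; i < count; ++i) {
        const FaceVertex *fv = &face_verts[i];
        for (int c = 0; c < 3; ++c) {
            out_pos[3 * i + c] = positions[3 * fv->pos + c];
            out_nrm[3 * i + c] = normals[3 * fv->nrm + c];
        }
        for (int c = 0; c < 2; ++c)
            out_tex[2 * i + c] = texcoords[2 * fv->tex + c];
    }
}
```

After a pass like this you can render with a single glDrawArrays call, or build one shared index array over the expanded data (merging repeated triplets) and use glDrawElements.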


glClear and glScissor

glScissor is one of the few pieces of state that affect how glClear operates. If you want to clear only a region of the back buffer, call glScissor and also glEnable(GL_SCISSOR_TEST).

Alternatively, if you have used the scissor test and forgotten to glDisable(GL_SCISSOR_TEST), you might wonder why glClear isn't working the way you want it to.

Masking

Pay attention to glColorMask, glStencilMask, and glDepthMask. For example, if you disable depth writes by calling glDepthMask(GL_FALSE), then glClear will not clear the depth buffer.

glGetError (or "How do I check for GL errors?")

OpenGL keeps a set of error flags, and each call to glGetError() tests and clears one of those flags. When there are no more error flags set, then glGetError() returns GL_NO_ERROR. So use a little helper function like this to check for GL errors:


  #include <stdio.h>
  #include <GL/gl.h>
  #include <GL/glu.h>

  int checkForGLErrors( const char *s )
  {
    int errors = 0 ;
    int counter = 0 ;

    while ( counter < 1000 )
    {
      GLenum x = glGetError() ;

      if ( x == GL_NO_ERROR )
        return errors ;

      fprintf( stderr, "%s: OpenGL error: %s [0x%04x]\n", s ? s : "", gluErrorString( x ), x ) ;
      errors++ ;
      counter++ ;
    }

    return errors ;
  }

If there is no GL context, glGetError() may return an error code each time it is called, since it is an error to call glGetError when there is no GL context. That is the reason we added the counter < 1000 limit.

Framerate and FPS

It is common for people to measure FPS, which stands for frames per second. Most gamers and new programmers treat this as rendering speed. New programmers render a cube and their FPS shows 2000. They add a few more complexities and suddenly the FPS drops to 200, and they can't understand what went wrong.

First, FPS is not a great way to measure performance because it is not linear. It is better to measure frame time. Frame time is 1/FPS, therefore:

1/2000 FPS = 0.0005 seconds of frame time

1/200 FPS = 0.0050 seconds of frame time

You are in fact already measuring frame time, but you are not paying attention to it. You then compute 1/(frame time) = FPS and turn your attention to that value instead.

Let's take another example: say the FPS is 180. You add a few models to your scene and you end up with 160. How bad is that? Yes, you lost 20 FPS, but how many seconds longer is each frame taking?

1/180 = 0.00556 seconds

1/160 = 0.00625 seconds

0.00625 - 0.00556 = 0.00069 seconds (the difference)

Let's continue the example. Assume your FPS is 60 and you add some models and your FPS drops to 40. How bad is that?

1/60 = 0.01667 seconds

1/40 = 0.02500 seconds

0.02500 - 0.01667 = 0.00833 seconds (the difference)

Notice how much longer each frame is taking? The same 20 FPS drop costs far more time at 60 FPS than at 180 FPS.

Framerate and FPS #2

So you measured your FPS and you got 2000. You are just rendering a cube, or perhaps a single triangle, or perhaps even nothing at all. Then you test your program on another machine with a faster CPU, faster GPU, and lots more RAM. You get 1000 FPS and you wonder what you did wrong.

The OpenGL driver is written to optimize communication with the GPU. Rendering nothing, or rendering a single triangle, tells you almost nothing. You need to throw a lot more triangles at it and use shaders that do lighting or some special effect. Your FPS should be in the range of 60 (0.01667 seconds) to 100 (0.01 seconds). Anything over 100 FPS is considered meaningless as a benchmark.

There is also the refresh rate of the monitor to consider. If the refresh rate is 60 Hz, the monitor can paint 60 different images per second and no more; if your FPS is 100, you won't be seeing some of those frames. It is recommended to enable vsync, or at least give the user the option to activate/deactivate it. You can read about it at http://www.opengl.org/wiki/SwapInterval_aka_vsync

If you are making a game, you would want to know your monitor's refresh rate. You need to render, do your physics, and do your game logic fast enough that a frame takes less time than one refresh interval. For example, if your monitor's refresh rate is 60 Hz, you have 1/60 = 0.01667 seconds to render, do your physics, and do your game logic. If you have some extra time left over, that is not a problem.

3D File Format

Newcomers often wonder what 3D file format to use for their project's indices, vertices, texcoords, and texture names.

GL doesn't offer any 3D file format because GL is just a low-level library. You either have to use someone else's library or write your own code, and you have to decide whether to use an already existing file format or create your own. Newcomers don't want to reinvent the wheel, but the fact is that in the games industry it is very common to reinvent the wheel when it comes to 3D file formats.

In case you want to use an already existing format, the obj format is very popular because it is ASCII text. It is, however, very old and very limited.

The 3ds format is also popular. There is even an open source library for it called lib3ds. The format is old and limited, and there is no official documentation from the company that created it.

DirectX has the x file format. It supports simple meshes and keyframes and multiple vertex attributes.

Some people use md2 (from Quake 2), md3 (from Quake 3), BSP, POD, RAW, LWO, Milkshape, or ASE. Some of these belong to the company that invented them, and you are not supposed to use them.

There is COLLADA, which uses an XML style, and it has become popular with content creators. This format can be read and exported by several 3D editors (for example, Blender).

There are many other formats not mentioned here. They are described at http://www.wotsit.org

Triangles or Quads

GL has supported GL_QUADS and GL_QUAD_STRIP since version 1.0, but as you might already know, this doesn't necessarily reflect what the hardware is capable of. Pretty much all GPUs only support GL_POINTS (probably with pixel size 1 only), GL_LINES (probably with width 1 only), GL_TRIANGLE_STRIP and GL_TRIANGLES. GL_TRIANGLE_FAN has also been a candidate to have its support dropped.

GL_POINTS with a size bigger than 1 are emulated, probably by rendering triangles.

GL_LINES with a width bigger than 1 are emulated, probably by rendering triangles.

GL_QUADS is emulated by breaking each quad into 2 GL_TRIANGLES or 1 GL_TRIANGLE_STRIP. The same applies to GL_QUAD_STRIP. This is because the primitive assembler doesn't understand quads; the rest of the GPU pipeline (the interpolators) isn't designed to deal with quads either.
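That split can be sketched as follows (our own helper, assuming the common convention of splitting quad (a, b, c, d) into triangles (a, b, c) and (a, c, d)):

```c
#include <stddef.h>

/* Hypothetical helper: expand quad indices into triangle indices the
   way a driver might when emulating GL_QUADS. Each quad (a, b, c, d)
   becomes the two triangles (a, b, c) and (a, c, d).
   out must have room for quad_count * 6 indices. */
void quads_to_triangles(const unsigned *quads, size_t quad_count, unsigned *out)
{
    for (size_t q = 0; q < quad_count; ++q) {
        const unsigned *v = &quads[4 * q];
        unsigned *t = &out[6 * q];
        t[0] = v[0]; t[1] = v[1]; t[2] = v[2]; /* first triangle  */
        t[3] = v[0]; t[4] = v[2]; t[5] = v[3]; /* second triangle */
    }
}
```

You could run a pass like this over your own quad data once at load time, then render the result as GL_TRIANGLES.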

Triangles are preferred because there are well known and very fast hardware algorithms to render them.

That is the reason why quads were deprecated in GL 3.0.

It is highly recommended that you render your triangles as GL_TRIANGLES, and that you use glDrawElements or glDrawRangeElements. This way, you can render a large mesh made up of thousands of triangles with a single call (if all those triangles use the same shader and the same textures).

For the case of GL_TRIANGLE_STRIP, GL 3.1 makes a new feature available called primitive restart (glEnable(GL_PRIMITIVE_RESTART) and glPrimitiveRestartIndex). With that, you can render an entire mesh made of many triangle strips with just a single call to glDrawElements or glDrawRangeElements.
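As a sketch of what such an index buffer might look like (0xFFFF is our chosen restart value here; any index you won't otherwise use works, and you tell GL about it with glPrimitiveRestartIndex):

```c
/* Hypothetical index buffer holding two triangle strips for a single
   glDrawElements call, separated by a primitive restart index.
   Before drawing, a GL 3.1+ program would call:
     glEnable(GL_PRIMITIVE_RESTART);
     glPrimitiveRestartIndex(0xFFFF);  */
static const unsigned short strip_indices[] = {
    0, 1, 2, 3,   /* first strip: two triangles */
    0xFFFF,       /* restart: the next index begins a new strip */
    4, 5, 6, 7    /* second strip: two triangles */
};
```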

For GL_TRIANGLE_STRIP, you would need a triangle stripper such as NvTriStrip. There is also Forsyth's algorithm. We recommend that you search the forums here at www.opengl.org or do some reading, since it is a complex topic.

For GL_TRIANGLES, you would also have to optimize your index and vertex arrays.