Texture Mapping

From OpenGL Wiki
Revision as of 12:22, 24 July 2009

First, let's talk about the texture types available.

GL has GL_TEXTURE_1D. You can usually ignore it and use GL_TEXTURE_2D instead.
It has been part of GL since 1.0.
Texture coordinates are normalized: if the texture is 256 texels wide, texcoords still run from 0.0 to 1.0 across it.
If you go beyond that range, such as -1.0 to 5.0, the texture will repeat over your polygon (with the default GL_REPEAT wrap mode).

GL_TEXTURE_2D has a width and a height, and the GPU usually stores it in memory in a format that is quick to access.
For example, small blocks of the texture are stored in sequence so that the texture cache works better. It has been part of GL since 1.0.
Texture coordinates are normalized: for a 256x256 texture, texcoords still run from 0.0 to 1.0.
If you go beyond that range, such as -1.0 to 5.0, the texture will repeat over your polygon.

GL_TEXTURE_3D has a width, a height, and a depth, and the GPU usually stores it in memory in a format that is quick to access.
Just like 2D, small blocks of the texture are stored in sequence so that the texture cache works better, though other techniques exist as well.
It has been part of GL since 1.2.
Texture coordinates are normalized: for a 256x256x256 texture, texcoords run from 0.0 to 1.0.
If you go beyond that range, such as -1.0 to 5.0, the texture will repeat over your polygon.

GL_TEXTURE_CUBE_MAP has a width, a height, and 6 faces. It is like 2D except that it has 6 faces and texcoords work in a special way.
It has been part of GL since 1.3.
You supply (s, t, r) coordinates, which are treated as a direction vector: the component with the largest magnitude (together with its sign) selects the face, and the remaining two components select the texel within that face.
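
The face-selection step can be sketched in a few lines. This is a simplified illustration of the major-axis rule (the function name and string labels are ours); the full algorithm in the GL spec also derives the per-face (s, t) from the two remaining components:

```c
#include <assert.h>
#include <string.h>
#include <math.h>

/* Sketch of cube map face selection: the (s, t, r) coordinate is treated
 * as a direction vector, and the component with the largest magnitude
 * picks the face. */
static const char *cube_face(float s, float t, float r)
{
    float as = fabsf(s), at = fabsf(t), ar = fabsf(r);
    if (as >= at && as >= ar) return s >= 0.0f ? "+X" : "-X";
    if (at >= as && at >= ar) return t >= 0.0f ? "+Y" : "-Y";
    return r >= 0.0f ? "+Z" : "-Z";
}
```

For example, the direction (0.1, 0.2, -5.0) is dominated by its r component, so the -Z face is sampled.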

GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_RECTANGLE_NV, and GL_TEXTURE_RECTANGLE_ARB are supported as extensions, for 2D textures with non-power-of-2 dimensions. Texcoords work in an unusual way: S runs from 0 to the width and T runs from 0 to the height.
There are certain limitations: anisotropy might not work, mipmaps are not allowed, and only clamping wrap modes such as GL_CLAMP_TO_EDGE are supported (GL_REPEAT is not allowed).
On certain GPUs, the driver will pad your texture with black texels in order to make its dimensions power-of-2, for better performance.
You won't ever see those black texels.
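
The padding described above rounds each dimension up to the next power of two. A hypothetical sketch of that computation (the helper name is ours; the driver does this internally, if at all):

```c
#include <assert.h>

/* Round a texture dimension up to the next power of two, as a driver
 * might do when padding a rectangle texture for performance. */
static int next_pow2(int x)
{
    int p = 1;
    while (p < x)
        p *= 2;
    return p;
}
```

So a 640x480 image would occupy a 1024x512 footprint after padding, which is one reason non-power-of-2 textures can quietly cost more memory than expected.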

With GL 2.0, GL_TEXTURE_RECTANGLE becomes obsolete: you can make GL_TEXTURE_2D textures with any dimensions, mipmap them, use any supported anisotropy level, and use any texture wrap mode.


How to create a texture


Create a single texture ID

 GLuint textureID;
 glGenTextures(1, &textureID);


You have to bind the texture before doing anything to it.

 glBindTexture(GL_TEXTURE_2D, textureID);


Now you should define your texture properties. Any call sequence will work since GL is a state machine.

 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, TextureWrapS);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, TextureWrapT);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, MagFilter);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, MinFilter);


Some people make the mistake of not calling glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, MinFilter). The default minification filter is GL_NEAREST_MIPMAP_LINEAR, so if they never define the mipmaps, the texture is considered incomplete and they just get a white texture.

If you will use mipmapping, you can either define the mipmap levels yourself with one glTexImage2D call per level, or let the GPU generate them.
Current GPUs can generate them automatically with a box filter.
Mipmapping is usually good and increases performance.

 //Use this if GL 1.4 is supported
 glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
 //Since the above is considered deprecated in GL 3.0, it is recommended that you use glGenerateMipmap(GL_TEXTURE_2D)
 //or if GL_EXT_framebuffer_object is supported, use glGenerateMipmapEXT(GL_TEXTURE_2D)
 //but call it after your call to glTexImage2D
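
To illustrate what automatic generation does, here is a minimal CPU-side sketch of one box-filter mip step for a single-channel square texture, plus the level count for a full chain. This is our own illustration of the technique, not GL API code:

```c
#include <assert.h>

/* One box-filter mip step: each destination texel is the average of a
 * 2x2 block of source texels. Single-channel, square, tightly packed. */
static void downsample_box(const unsigned char *src, int srcSize,
                           unsigned char *dst)
{
    int dstSize = srcSize / 2;
    for (int y = 0; y < dstSize; ++y)
        for (int x = 0; x < dstSize; ++x) {
            int sum = src[(2*y)   * srcSize + 2*x]
                    + src[(2*y)   * srcSize + 2*x + 1]
                    + src[(2*y+1) * srcSize + 2*x]
                    + src[(2*y+1) * srcSize + 2*x + 1];
            dst[y * dstSize + x] = (unsigned char)(sum / 4);
        }
}

/* A full mip chain for a square power-of-2 texture has log2(size)+1 levels. */
static int mip_levels(int size)
{
    int levels = 1;
    while (size > 1) {
        size /= 2;
        ++levels;
    }
    return levels;
}
```

A 256x256 texture therefore needs 9 levels (256 down to 1), which is also how many glTexImage2D calls you would make if you defined the chain yourself.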


It has been reported that on some ATI drivers, glGenerateMipmapEXT(GL_TEXTURE_2D) has no effect unless you precede it with a call to glEnable(GL_TEXTURE_2D). To be clear: call glTexImage2D, then glEnable, then glGenerateMipmapEXT.
We recommend glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE) for standard textures, since this works on both ATI and nVidia (for GL 2.1 and earlier only; in GL 3.0 it is deprecated and you should use glGenerateMipmap).

If you need anisotropy, call

 glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, Anisotropy);


The minimum value is 1.0 (isotropic filtering) and the maximum is whatever glGetIntegerv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &MaxAnisotropy) returns.
Anisotropic filtering is an extension. It makes your results look better but can drag down performance greatly, so use the smallest value that gives acceptable results.
You need to check that GL_EXT_texture_filter_anisotropic is present.
The spec is here http://www.opengl.org/registry/specs/EXT/texture_filter_anisotropic.txt
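
Since values outside the legal range generate GL errors, it is worth clamping the requested amount before calling glTexParameterf. A small sketch (the helper name is ours):

```c
#include <assert.h>

/* Clamp a requested anisotropy value to the legal range: the extension
 * requires values >= 1.0 (1.0 means ordinary isotropic filtering), and
 * the upper bound comes from GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT. */
static float clamp_anisotropy(float requested, float maxSupported)
{
    if (requested < 1.0f)
        return 1.0f;
    if (requested > maxSupported)
        return maxSupported;
    return requested;
}
```

On a GPU reporting a maximum of 16.0, a request of 32.0 would be clamped to 16.0 and a request of 0.0 to 1.0.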

Define the texture with

 glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, width, height, border, format, type, ptexels);



You need to make sure that your width and height are supported by the GPU.

 GLint Max2DTextureWidth, Max2DTextureHeight;
 glGetIntegerv(GL_MAX_TEXTURE_SIZE, &Max2DTextureWidth);
 Max2DTextureHeight=Max2DTextureWidth;
 GLint MaxTexture3DWidth, MaxTexture3DHeight, MaxTexture3DDepth;
 glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE, &MaxTexture3DWidth);
 MaxTexture3DHeight=MaxTexture3DDepth=MaxTexture3DWidth;
 GLint MaxTextureCubemapWidth, MaxTextureCubemapHeight;
 glGetIntegerv(GL_MAX_CUBE_MAP_TEXTURE_SIZE, &MaxTextureCubemapWidth);
 MaxTextureCubemapHeight=MaxTextureCubemapWidth;
 GLint MaxTextureRECTWidth, MaxTextureRECTHeight;
 glGetIntegerv(GL_MAX_RECTANGLE_TEXTURE_SIZE_ARB, &MaxTextureRECTWidth);
 MaxTextureRECTHeight=MaxTextureRECTWidth;
 GLint MaxRenderbufferWidth, MaxRenderbufferHeight;
 glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE_EXT, &MaxRenderbufferWidth);
 MaxRenderbufferHeight=MaxRenderbufferWidth;



Very old GPUs don't support border texels.
Make sure the format and type are supported by the GPU; otherwise the driver will convert the data into a format it does support.
Make sure the internal format (for example GL_RGBA8) is supported by the GPU; otherwise the driver will convert the texture for you.
There is no way to query which formats the GPU supports, but IHVs (nVidia, AMD/ATI) publish documents on what is supported.
For example, it is very common for GL_RGBA8 to be supported natively while GL_RGB8 is not.
You should also call glGetError to make sure you didn't get an error such as running out of memory (GL_OUT_OF_MEMORY).

The only thing left is calling glTexEnv, but this isn't part of the texture state. It is part of the texture environment, in other words the texture unit.

To use the texture, bind it to a texture unit with glBindTexture, and don't forget to enable texturing with glEnable(GL_TEXTURE_2D) and disable it with glDisable(GL_TEXTURE_2D).
Those are the basics of creating a texture and using it.

Just allocate memory for a texture

If you want to allocate memory for the texture without initializing the texels, just pass a NULL pointer to glTexImage1D/2D/3D. The GL specification doesn't say what values the texels will have.

 GLuint textureID;
 glGenTextures(1, &textureID);
 glBindTexture(GL_TEXTURE_2D, textureID);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
 //GL_BGRA and GL_UNSIGNED_BYTE describe the layout of the source data.
 //The driver might store it in that layout or swap red and blue to create GL_RGBA8.
 //Most GPUs natively support the Microsoft-style BGRA layout.
 //The following call is typically very fast on Windows with nVidia and ATI/AMD drivers.
 glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);

Update some pixels

If you want to update some texels and your source pixels are in RAM, use glTexSubImage2D.
Some people use glTexImage2D, which deletes and reallocates the previous texture storage and causes a slowdown.

 //Never forget to bind!
 glBindTexture(GL_TEXTURE_2D, textureID);
 //GL_BGRA and GL_UNSIGNED_BYTE describe the layout of the source data.
 //Most GPUs natively support the Microsoft-style BGRA layout.
 //The following call is typically very fast on Windows with nVidia and ATI/AMD drivers.
 glTexSubImage2D(GL_TEXTURE_2D, level, xoffset, yoffset, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
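
If the sub-rectangle you are uploading lives inside a larger CPU-side image, you can't just pass the start of that image as `pixels`; you need the byte offset of the rectangle's first texel, and you must tell GL the real row width with glPixelStorei(GL_UNPACK_ROW_LENGTH, rowWidth). The offset math for a tightly packed 4-byte-per-texel (e.g. BGRA) image can be sketched as follows (the helper name is ours):

```c
#include <assert.h>
#include <stddef.h>

/* Byte offset of texel (x, y) in a tightly packed 4-byte-per-texel
 * image that is rowWidth texels wide. Pass the image base pointer plus
 * this offset to glTexSubImage2D, with GL_UNPACK_ROW_LENGTH set to
 * rowWidth so GL skips to the right place on each row. */
static size_t bgra_offset(int x, int y, int rowWidth)
{
    return ((size_t)y * (size_t)rowWidth + (size_t)x) * 4u;
}
```

For a 512-texel-wide source image, the texel at (10, 2) starts at byte (2*512 + 10)*4 = 4136.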

Copy the frame buffer to the texture

Don't use glCopyTexImage2D. This function deletes the previous texture storage and reallocates it, so it is slow.
Use glCopyTexSubImage2D instead, which just updates the texels.
So: render the scene to the back buffer, bind the texture, and call glCopyTexSubImage2D before calling SwapBuffers.

 RenderScene();
 glBindTexture(GL_TEXTURE_2D, textureID);
 glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 512, 512);
 SwapBuffers();

glGenTextures

Always use glGenTextures to generate texture IDs.
glGenTextures does not allocate any texture storage; it just reserves an ID, so it doesn't consume much RAM.
You must call glTexImage2D to actually allocate texture memory.
Furthermore, the driver might not upload your texture to VRAM when you call glTexImage2D; it may wait until you first use the texture for rendering.
glGenTextures generates unsigned int (32-bit) values, so you can create up to 2^32-1 texture IDs.
Returned texture IDs start from 1 and go upwards. If you delete a texture with glDeleteTextures and then call glGenTextures again, the driver may or may not return the same ID. This isn't something you need to worry about.
When you shut down your program, it is recommended to call

 glDeleteTextures(1, &textureID)

but you are not obligated to; the driver can clean up for you.

glBindTexture

Sometimes, you might run into code that calls glBindTexture(GL_TEXTURE_2D, 0). It might look something like this:

 glEnable(GL_TEXTURE_2D);   //Enable
 glBindTexture(GL_TEXTURE_2D, textureID);
 DrawTheThing();
 glBindTexture(GL_TEXTURE_2D, 0);

glBindTexture(GL_TEXTURE_2D, 0) binds the default texture object, which effectively disables texturing.
The advantages of doing that are not clear.
You are better off actually disabling the texture unit with glDisable(GL_TEXTURE_2D).
You could also bind some unused ID such as 1,000,000; in compatibility GL this creates a new, empty texture object, which is incomplete and therefore has the same visible effect as binding 0.

Texture Storage

Once you upload your texture to GL, you don't need to keep a copy of it on your side. You can delete your copy right after calling glTexImage2D or glTexSubImage2D.

Windows and other OSes

Once you upload your texture to GL, where is it stored exactly?
The driver will most likely keep it in RAM. Keep in mind that the driver has its own memory manager.
When you want to render something with this texture, the driver will then upload the texture to VRAM (if you have a system with dedicated video memory). It will likely upload 100% of the texture with all the mipmaps.
If there isn't enough VRAM, the driver will delete another texture or another VBO. That is the driver's choice and there is nothing you can do about it.
The driver will always keep a copy in RAM, even when a copy exists in VRAM. RAM is considered permanent storage; VRAM is considered volatile. Windows can destroy a texture and take over a part of VRAM if it wants to. That's why drivers always keep a copy in RAM.
It is possible this will change in the future or perhaps it has already changed with Windows Vista.
Functions such as glPrioritizeTextures and glAreTexturesResident are useless, as has been explained on another page:
http://www.opengl.org/wiki/Common_Mistakes#glAreTexturesResident_and_Video_Memory

Games should not use these functions and should not rely on them. The drivers for gaming video cards such as the nVidia Geforce 9800 and ATI/AMD Radeon HD ignore them. The driver is always the boss, not the programmer.

For other OSes (Linux and Mac and FreeBSD), no comment.
For other video cards such as the workstation video card nVidia Quadro FX, no comment.