First, let's talk about the texture types available.
GL has GL_TEXTURE_1D, which has only a width. You can usually ignore it and use GL_TEXTURE_2D instead. It has been part of GL since 1.0.
GL_TEXTURE_2D has a width and a height, and the GPU usually stores it in memory in a layout that is quick to access. For example, small blocks of the texture are stored in sequence so that the cache works better. It has been part of GL since 1.0.
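To make the idea of block storage concrete, here is a minimal sketch comparing a plain row-major layout with a hypothetical 4x4-tiled one. Real GPU layouts are proprietary and vary between vendors, so the tile size and ordering here are assumptions for illustration only; the point is that texels that are close in 2D end up close in memory.

```c
#include <assert.h>

/* Hypothetical 4x4 tiling; real GPU swizzle patterns differ,
   but the principle is the same. */
enum { TILE = 4 };

/* Index of texel (x, y) in a plain row-major (linear) layout. */
static unsigned linear_index(unsigned x, unsigned y, unsigned width)
{
    return y * width + x;
}

/* Index of texel (x, y) when the texture is stored as 4x4 tiles laid
   out row-major, with the texels inside each tile also row-major. */
static unsigned tiled_index(unsigned x, unsigned y, unsigned width)
{
    unsigned tiles_per_row = width / TILE;          /* assumes width % TILE == 0 */
    unsigned tile_id = (y / TILE) * tiles_per_row + (x / TILE);
    unsigned in_tile = (y % TILE) * TILE + (x % TILE);
    return tile_id * (TILE * TILE) + in_tile;
}
```

Note how texel (3, 3) lands at index 15 in the tiled layout (inside the first tile, next to its neighbours) instead of index 27 in the linear one: a bilinear fetch touching a 2x2 neighbourhood stays within one cache line far more often.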
GL_TEXTURE_3D has a width, a height, and a depth, and again the GPU usually stores it in a layout that is quick to access. Just like 2D, small blocks of the texture are stored in sequence so that the cache works better, though other techniques exist as well. It has been part of GL since 1.2.
GL_TEXTURE_CUBE_MAP has a width, a height, and 6 faces. It is like a 2D texture, except that it has 6 faces and the texture coordinates are interpreted as a 3D direction vector: the component with the largest magnitude selects the face, and the other two components are used to look up a texel on that face. It has been part of GL since 1.3.
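The face-selection part of that lookup is simple enough to sketch. The following is a minimal illustration of the major-axis rule; the subsequent computation of the s and t coordinates on the chosen face is omitted here for brevity.

```c
#include <math.h>
#include <assert.h>

/* Which cube map face a direction vector selects, following the
   major-axis rule: 0..5 = +X, -X, +Y, -Y, +Z, -Z
   (matching GL_TEXTURE_CUBE_MAP_POSITIVE_X + face). */
static int cube_face(float x, float y, float z)
{
    float ax = fabsf(x), ay = fabsf(y), az = fabsf(z);
    if (ax >= ay && ax >= az)       /* X dominates */
        return x >= 0.0f ? 0 : 1;
    if (ay >= az)                   /* Y dominates */
        return y >= 0.0f ? 2 : 3;
    return z >= 0.0f ? 4 : 5;       /* Z dominates */
}
```

For example, the direction (-2, 1, 1) hits the -X face, which is why cube maps need no wrap-mode decisions from you for the lookup itself: any nonzero direction maps to exactly one face.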
GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_RECTANGLE_NV, and GL_TEXTURE_RECTANGLE_ARB are supported as extensions. They exist for 2D textures with non-power-of-2 dimensions. Texture coordinates work in an unusual way: S runs from 0 to width and T from 0 to height, instead of 0 to 1.
With GL 2.0, textures of any dimension are allowed, which makes GL_TEXTURE_RECTANGLE largely obsolete: you can create a texture with any dimensions, mipmap it, use any supported anisotropy, and use any texture wrap mode.
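If you port code between the two addressing conventions, the conversion is just a scale by the texture's dimensions. A small sketch:

```c
#include <assert.h>

/* GL_TEXTURE_RECTANGLE addresses texels with coordinates in
   [0, width] x [0, height]; ordinary 2D textures use [0, 1].
   Converting between the conventions is a simple scale. */
static float rect_to_normalized(float coord_rect, int size)
{
    return coord_rect / (float)size;
}

static float normalized_to_rect(float coord_norm, int size)
{
    return coord_norm * (float)size;
}
```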
How to create a texture
Create a single texture ID
GLuint textureID;
glGenTextures(1, &textureID);
You have to bind the texture before doing anything else to it.
glBindTexture(GL_TEXTURE_2D, textureID);
Now define your texture's properties. Since GL is a state machine, any call order will work.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, TextureWrapS);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, TextureWrapT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, MagFilter);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, MinFilter);
Some people make the mistake of not setting GL_TEXTURE_MIN_FILTER at all. The default minification filter is GL_NEAREST_MIPMAP_LINEAR, which requires mipmaps; if the mipmaps are never defined, the texture is considered incomplete and you just get a white texture.
Mipmapping is usually a good idea: it improves quality at a distance and increases performance. If you use mipmapping, you can either define every level yourself with repeated calls to glTexImage2D, or let the driver generate the levels for you. Since current GPUs can generate them automatically with a box filter technique, you can simply call
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
before uploading the base level.
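To see what the driver does for you here, this sketch computes how many levels a full mip chain needs and performs one box-filter downsampling step on a single-channel image. It illustrates the technique, not any particular driver's implementation.

```c
#include <assert.h>

/* Number of levels in a full mip chain: halve each dimension
   (independently, never below 1) until reaching 1x1.
   e.g. 512x512 has 10 levels: 512, 256, ..., 2, 1. */
static int mip_level_count(int width, int height)
{
    int levels = 1;
    while (width > 1 || height > 1) {
        width  = width  > 1 ? width  / 2 : 1;
        height = height > 1 ? height / 2 : 1;
        ++levels;
    }
    return levels;
}

/* One box-filter step for a single-channel image: each output texel
   is the average of the 2x2 block of input texels above it.
   dst must hold (w/2) * (h/2) texels; w and h are assumed even. */
static void box_downsample(const unsigned char *src, int w, int h,
                           unsigned char *dst)
{
    for (int y = 0; y < h / 2; ++y)
        for (int x = 0; x < w / 2; ++x) {
            int sum = src[(2 * y)     * w + 2 * x]
                    + src[(2 * y)     * w + 2 * x + 1]
                    + src[(2 * y + 1) * w + 2 * x]
                    + src[(2 * y + 1) * w + 2 * x + 1];
            dst[y * (w / 2) + x] = (unsigned char)(sum / 4);
        }
}
```

If you define the levels manually instead, you would call glTexImage2D once per level, from level 0 down to the 1x1 level, with the level sizes given by the halving rule above.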
If you need anisotropic filtering, call
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, Anisotropy);
Anisotropy is an extension (EXT_texture_filter_anisotropic). It makes your results look better but can drag down performance considerably, so use the lowest value that looks acceptable.
Define the texture with
glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, width, height, border, format, type, ptexels);
You need to make sure that your width and height are supported by the GPU.
int MaxTextureSize;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &MaxTextureSize);                     // max width and height for 2D textures
int MaxTexture3DSize;
glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE, &MaxTexture3DSize);                // max width, height and depth for 3D textures
int MaxTextureCubemapSize;
glGetIntegerv(GL_MAX_CUBE_MAP_TEXTURE_SIZE, &MaxTextureCubemapSize);     // max width and height of a cube map face
int MaxTextureRECTSize;
glGetIntegerv(GL_MAX_RECTANGLE_TEXTURE_SIZE_ARB, &MaxTextureRECTSize);   // max width and height for rectangle textures
int MaxRenderbufferSize;
glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE_EXT, &MaxRenderbufferSize);       // max width and height for renderbuffers
Very old GPUs don't support border texels, so border should normally be 0.
Make sure the format you pass is one the GPU handles natively, otherwise the driver will convert the data into a proper format for you, which is slow. The same goes for the internal format (for example GL_RGBA8): if it isn't supported natively, the driver converts the texture. There is no way to query which formats the GPU supports natively, but IHVs (NVIDIA, AMD/ATI) publish documents listing what is supported.
For example, it is very common for GL_RGBA8 to be supported while GL_RGB8 is not.
You should also call glGetError to make sure you didn't get an error such as running out of memory (GL_OUT_OF_MEMORY).
The only thing left is glTexEnv, but that isn't part of the texture object's state; it is part of the texture environment, in other words the texture unit.
To use the texture, bind it with glBindTexture, and don't forget to enable texturing with glEnable(GL_TEXTURE_2D) and to disable it again with glDisable(GL_TEXTURE_2D).
That's the basics of creating a texture and using it.
Just allocate memory for a texture
If you want to just allocate memory for the texture without initializing the texels, simply pass a NULL pointer to glTexImageXD. The GL specification leaves the texel values undefined in that case.
GLuint textureID;
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
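Even with a NULL pointer the full storage is allocated, so it helps to know roughly how much memory you are asking for before glGetError reports GL_OUT_OF_MEMORY. This is a rough lower-bound estimate; the driver may pad rows or add metadata, so the real allocation can be larger.

```c
#include <assert.h>
#include <stddef.h>

/* Rough size in bytes of one mip level: width * height * bytes per texel.
   GL_RGBA8 is 4 bytes per texel. Treat this as a lower bound: drivers
   may pad or align the actual allocation. */
static size_t texture_level_bytes(int width, int height, int bytes_per_texel)
{
    return (size_t)width * (size_t)height * (size_t)bytes_per_texel;
}
```

The 512x512 GL_RGBA8 texture above thus costs at least 1 MiB for the base level alone, and a full mip chain adds roughly another third on top of that.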
Copy the frame buffer to the texture
Don't use glCopyTexImage2D. This function deletes the previous texture storage and reallocates it, so it is slow.
Use glCopyTexSubImage2D instead, which just updates the texels.
So render the scene to the back buffer, and before calling SwapBuffers, bind the texture and call glCopyTexSubImage2D.
RenderScene();
glBindTexture(GL_TEXTURE_2D, textureID);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 512, 512);
SwapBuffers();