Tutorial: OpenGL 3.1 The First Triangle (C++/Win)
Overview
This is a short tutorial about drawing primitives in OpenGL 3.x without using deprecated functionality. The code uses Visual Studio, and a link to download a freeGLUT version is provided.
Adding GLEW Support
Dealing with OpenGL 3.1 is hard enough, so I'll skip the gymnastics of manual extension loading and use the OpenGL Extension Wrangler Library (GLEW). GLEW is a cross-platform open-source C/C++ extension loading library, and can be freely downloaded from the following site: http://glew.sourceforge.net. The following snippet of code includes support for GLEW, and should be placed somewhere in your code. If you are building a Visual Studio MFC application, which I recommend, the best place for it is somewhere at the end of the stdafx.h file. A cross-platform version of this code (which uses GLUT for windowing) is available on GitHub.
//--- OpenGL ---
#include "glew.h"
#include "wglew.h"
#pragma comment(lib, "glew32.lib")
//--------------
GLRenderer Class
We will start with the creation of the class CGLRenderer. This class should gather together all OpenGL-related code. My students will recognize the functions I insisted on during the lectures. The header file is the same as in good old OpenGL 2.1, but the implementation changes significantly.
class CGLRenderer
{
public:
    CGLRenderer(void);
    virtual ~CGLRenderer(void);

    bool CreateGLContext(CDC* pDC);        // Creates OpenGL rendering context
    void PrepareScene(CDC* pDC);           // Scene preparation stuff
    void Reshape(CDC* pDC, int w, int h);  // Changing viewport
    void DrawScene(CDC* pDC);              // Draws the scene
    void DestroyScene(CDC* pDC);           // Cleanup

protected:
    void SetData();                        // Creates VAOs and VBOs and fills them with data

protected:
    HGLRC       m_hrc;                     // OpenGL rendering context
    CGLProgram* m_pProgram;                // Program
    CGLShader*  m_pVertSh;                 // Vertex shader
    CGLShader*  m_pFragSh;                 // Fragment shader
    GLuint      m_vaoID[2];                // two vertex array objects, one for each drawn object
    GLuint      m_vboID[3];                // three VBOs
};
Rendering Context Creation
First we have to create an OpenGL rendering context. This is the task of the CreateGLContext() function.
bool CGLRenderer::CreateGLContext(CDC* pDC)
{
    PIXELFORMATDESCRIPTOR pfd;
    memset(&pfd, 0, sizeof(PIXELFORMATDESCRIPTOR));
    pfd.nSize      = sizeof(PIXELFORMATDESCRIPTOR);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DOUBLEBUFFER | PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 32;
    pfd.iLayerType = PFD_MAIN_PLANE;

    int nPixelFormat = ChoosePixelFormat(pDC->m_hDC, &pfd);
    if (nPixelFormat == 0) return false;

    BOOL bResult = SetPixelFormat(pDC->m_hDC, nPixelFormat, &pfd);
    if (!bResult) return false;

    HGLRC tempContext = wglCreateContext(pDC->m_hDC);
    wglMakeCurrent(pDC->m_hDC, tempContext);

    GLenum err = glewInit();
    if (GLEW_OK != err)
    {
        AfxMessageBox(_T("GLEW is not initialized!"));
    }

    int attribs[] =
    {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 1,
        WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
        0
    };

    if (wglewIsSupported("WGL_ARB_create_context") == 1)
    {
        m_hrc = wglCreateContextAttribsARB(pDC->m_hDC, 0, attribs);
        wglMakeCurrent(NULL, NULL);
        wglDeleteContext(tempContext);
        wglMakeCurrent(pDC->m_hDC, m_hrc);
    }
    else
    {   // It's not possible to make a GL 3.x context. Use the old-style context (GL 2.1 and before).
        m_hrc = tempContext;
    }

    // Checking the GL version (glGetString returns const GLubyte*, hence the cast)
    const char* GLVersionString = (const char*)glGetString(GL_VERSION);

    // Or better yet, use the GL3 way to get the version number
    int OpenGLVersion[2];
    glGetIntegerv(GL_MAJOR_VERSION, &OpenGLVersion[0]);
    glGetIntegerv(GL_MINOR_VERSION, &OpenGLVersion[1]);

    if (!m_hrc) return false;
    return true;
}
Choosing and setting the pixel format are the same as in previous versions of OpenGL. The new steps that have to be done are:
- Create a standard OpenGL (2.1) rendering context which will be used only temporarily (tempContext), and make it current
HGLRC tempContext = wglCreateContext(pDC->m_hDC);
wglMakeCurrent(pDC->m_hDC,tempContext);
- Initialize GLEW
GLenum err = glewInit();
- Set up attributes for a brand new OpenGL 3.1 rendering context
int attribs[] =
{
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 1,
    WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
    0
};
- Create the new rendering context
m_hrc = wglCreateContextAttribsARB(pDC->m_hDC,0, attribs);
- Delete tempContext
wglMakeCurrent(NULL,NULL);
wglDeleteContext(tempContext);
Have you noticed something odd in this initialization? In order to create a new OpenGL rendering context you have to call the function wglCreateContextAttribsARB(), which is itself an OpenGL extension function and requires an active OpenGL context when it is called. How can we fulfill this when we are at the very beginning of OpenGL rendering context creation? The only way is to create an old-style context, activate it, and while it is active create the new one. Very inconsistent, but we have to live with it!
In this example, we’ve created an OpenGL 3.1 rendering context (the major version is set to 3, and the minor to 1). Currently, it requires NVIDIA’s ForceWare 182.52 or 190.38 drivers (oops, since ver. 190.38 the new name of the NVIDIA display drivers is GeForce/ION). If you don’t have the specified drivers, or creation fails for any other reason, try changing the minor version to 0. An OpenGL 3.0 rendering context can be created with NVIDIA’s ForceWare 181.00 drivers or newer, or ATI Catalyst 9.1 drivers or newer.
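As a rough sketch of that fallback (reusing the attribs array from CreateGLContext() above; this retry logic is not part of the original function), the downgrade could look like this:

// Sketch only: if 3.1 context creation fails, retry with minor version 0.
// attribs[3] holds the value that follows WGL_CONTEXT_MINOR_VERSION_ARB.
m_hrc = wglCreateContextAttribsARB(pDC->m_hDC, 0, attribs);
if (!m_hrc)
{
    attribs[3] = 0; // downgrade the requested context to OpenGL 3.0
    m_hrc = wglCreateContextAttribsARB(pDC->m_hDC, 0, attribs);
}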
Scene Preparation
After we have created the rendering context, the next step is to prepare the scene. In the function PrepareScene() we do whatever has to be done just once, before the scene is drawn for the first time.
void CGLRenderer::PrepareScene(CDC *pDC)
{
    glClearColor(1.0, 1.0, 1.0, 0.0);

    m_pProgram = new CGLProgram();
    m_pVertSh  = new CGLShader(GL_VERTEX_SHADER);
    m_pFragSh  = new CGLShader(GL_FRAGMENT_SHADER);
    m_pVertSh->Load(_T("minimal.vert"));
    m_pFragSh->Load(_T("minimal.frag"));
    m_pVertSh->Compile();
    m_pFragSh->Compile();
    m_pProgram->AttachShader(m_pVertSh);
    m_pProgram->AttachShader(m_pFragSh);
    m_pProgram->BindAttribLocation(0, "in_Position");
    m_pProgram->BindAttribLocation(1, "in_Color");
    m_pProgram->Link();
    m_pProgram->Use();

    SetData();
}
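The CGLProgram and CGLShader wrapper classes are not listed in this tutorial. As a minimal sketch of what CGLShader::Compile() might do (the m_id and m_source members are assumed names for this illustration, not taken from the original classes):

// Hypothetical sketch of CGLShader::Compile(); m_id (shader handle) and
// m_source (std::string with the GLSL source) are assumed member names.
bool CGLShader::Compile()
{
    const char* src = m_source.c_str();
    glShaderSource(m_id, 1, &src, NULL);
    glCompileShader(m_id);

    GLint status = GL_FALSE;
    glGetShaderiv(m_id, GL_COMPILE_STATUS, &status);
    if (status != GL_TRUE)
    {
        char log[1024];
        glGetShaderInfoLog(m_id, sizeof(log), NULL, log);
        // report the log as needed, e.g. via AfxMessageBox()
        return false;
    }
    return true;
}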
Shaders
The vertex shader is very simple. It just passes input values through to the output, converting vec3 to vec4. Constructors are the same as in previous versions of GLSL. The main difference with regard to GLSL 1.2 is that there are no more attribute and varying qualifiers for variables inside shaders. Attribute variables are now in(put) variables, and varying variables are out(put) variables of the vertex shader. Uniforms stay the same.
// Vertex Shader - file "minimal.vert"
#version 140

in  vec3 in_Position;
in  vec3 in_Color;
out vec3 ex_Color;

void main(void)
{
    gl_Position = vec4(in_Position, 1.0);
    ex_Color    = in_Color;
}
The fragment shader is even simpler. Varying variables in fragment shaders are now declared as in variables. Take care that the name of the in(put) variable in the fragment shader must be the same as that of the out(put) variable in the vertex shader.
// Fragment Shader - file "minimal.frag"
#version 140

precision highp float; // needed only for version 1.30

in  vec3 ex_Color;
out vec4 out_Color;

void main(void)
{
    out_Color = vec4(ex_Color, 1.0);
}
If you have problems compiling the shader code (because OpenGL 3.1 is not supported), just change the version number: instead of 140, put 130. These shaders are so simple that the code is the same in GLSL version 1.3 and version 1.4.
Setting Data
The function SetData() creates the VAOs and VBOs and fills them with data.
void CGLRenderer::SetData()
{
    // First simple object
    float* vert = new float[9];  // vertex array
    float* col  = new float[9];  // color array

    vert[0] = -0.3; vert[1] =  0.5; vert[2] = -1.0;
    vert[3] = -0.8; vert[4] = -0.5; vert[5] = -1.0;
    vert[6] =  0.2; vert[7] = -0.5; vert[8] = -1.0;

    col[0] = 1.0; col[1] = 0.0; col[2] = 0.0;
    col[3] = 0.0; col[4] = 1.0; col[5] = 0.0;
    col[6] = 0.0; col[7] = 0.0; col[8] = 1.0;

    // Second simple object
    float* vert2 = new float[9]; // vertex array

    vert2[0] = -0.2; vert2[1] =  0.5; vert2[2] = -1.0;
    vert2[3] =  0.3; vert2[4] = -0.5; vert2[5] = -1.0;
    vert2[6] =  0.8; vert2[7] =  0.5; vert2[8] = -1.0;

    // Two VAOs allocation
    glGenVertexArrays(2, &m_vaoID[0]);

    // First VAO setup
    glBindVertexArray(m_vaoID[0]);

    glGenBuffers(2, m_vboID);

    glBindBuffer(GL_ARRAY_BUFFER, m_vboID[0]);
    glBufferData(GL_ARRAY_BUFFER, 9*sizeof(GLfloat), vert, GL_STATIC_DRAW);
    glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(0);

    glBindBuffer(GL_ARRAY_BUFFER, m_vboID[1]);
    glBufferData(GL_ARRAY_BUFFER, 9*sizeof(GLfloat), col, GL_STATIC_DRAW);
    glVertexAttribPointer((GLuint)1, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(1);

    // Second VAO setup
    glBindVertexArray(m_vaoID[1]);

    glGenBuffers(1, &m_vboID[2]);

    glBindBuffer(GL_ARRAY_BUFFER, m_vboID[2]);
    glBufferData(GL_ARRAY_BUFFER, 9*sizeof(GLfloat), vert2, GL_STATIC_DRAW);
    glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(0);

    glBindVertexArray(0);

    delete [] vert;
    delete [] vert2;
    delete [] col;
}
Vertex buffer objects (VBOs) have been a familiar item since OpenGL version 1.5, but vertex array objects require more explanation. Vertex array objects (VAOs) encapsulate vertex array state on the client side. These objects allow applications to rapidly switch between large sets of array state.
A VAO saves the state of all vertex attribute arrays. The maximum number supported by your video card can be obtained by calling glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &MaxVertexAttribs).
A VAO stores the state of each vertex attribute array (whether it is enabled; its size, stride, and type; whether its values are normalized; whether they contain unconverted integers; the vertex attribute array pointers; the element array buffer binding; and the attribute array buffer bindings). In order to test how this works, we will create two separate (simple) objects with different VAOs.
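For example, querying that limit is just:

GLint maxVertexAttribs = 0;
glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &maxVertexAttribs);
// OpenGL 3.x guarantees at least 16; the actual limit is implementation-dependent.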
Setting Viewport
The Reshape() function just sets the viewport.
void CGLRenderer::Reshape(CDC *pDC, int w, int h)
{
    glViewport(0, 0, w, h);
}
Drawing
DrawScene(), as its name implies, draws the scene.
void CGLRenderer::DrawScene(CDC *pDC)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glBindVertexArray(m_vaoID[0]);               // select first VAO
    glDrawArrays(GL_TRIANGLES, 0, 3);            // draw first object

    glBindVertexArray(m_vaoID[1]);               // select second VAO
    glVertexAttrib3f((GLuint)1, 1.0, 0.0, 0.0);  // set constant color attribute
    glDrawArrays(GL_TRIANGLES, 0, 3);            // draw second object

    glBindVertexArray(0);

    SwapBuffers(pDC->m_hDC);
}
As we can see, binding a VAO changes all vertex attribute array settings at once. Since the second object has no color array enabled, attribute 1 takes the constant value set by glVertexAttrib3f(), so the whole second triangle is drawn in red. But be very careful! If any vertex attribute array is disabled, the VAO loses its binding to the corresponding VBO. In that case, we have to call the glBindBuffer() and glVertexAttribPointer() functions again. The specification says nothing about this behavior, but it is what we have to do with the current version of NVIDIA's drivers.
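A sketch of that workaround, reusing the first VAO and the color VBO from SetData() above (this snippet is illustrative, not part of the original tutorial):

// If attribute array 1 was disabled while m_vaoID[0] was bound, restore it:
glBindVertexArray(m_vaoID[0]);
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, m_vboID[1]);                     // rebind the color VBO
glVertexAttribPointer((GLuint)1, 3, GL_FLOAT, GL_FALSE, 0, 0); // respecify the pointer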
Cleaning up
And at the end, we have to clean up the whole mess...
void CGLRenderer::DestroyScene(CDC *pDC)
{
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    glDeleteBuffers(3, m_vboID);

    glBindVertexArray(0);
    glDeleteVertexArrays(2, m_vaoID);

    m_pProgram->DetachShader(m_pVertSh);
    m_pProgram->DetachShader(m_pFragSh);

    delete m_pProgram;
    m_pProgram = NULL;
    delete m_pVertSh;
    m_pVertSh = NULL;
    delete m_pFragSh;
    m_pFragSh = NULL;

    wglMakeCurrent(NULL, NULL);
    if (m_hrc)
    {
        wglDeleteContext(m_hrc);
        m_hrc = NULL;
    }
}