User:Alfonse/Vertex Specification

From OpenGL Wiki
Revision as of 16:32, 24 May 2018 by Alfonse (talk | contribs) (→‎Buffer bindings: Added a section on index buffer binding.)

Vertex Specification is the process of setting up the necessary objects for rendering with a particular shader program, as well as the process of using those objects to render.

Theory

Submitting vertex data for rendering requires creating a stream of vertices, and then telling OpenGL how to interpret that stream.

Vertex Stream

In order to render at all, you must be using a shader program or program pipeline which includes a Vertex Shader. The VS's user-defined input variables define the list of expected Vertex Attributes for that shader. This set of attributes defines what values the vertex stream must provide to properly render with this shader.

For each attribute in the shader, you must provide an array of data for that attribute. All of these arrays must have the same number of elements. Note that these arrays are a bit more flexible than C arrays, but overall work the same way.

The order of vertices in the stream is very important; this order defines how OpenGL will process and render the Primitives the stream generates. There are two ways of rendering with arrays of vertices. You can generate a stream in the array's order, or you can use a list of indices to define the order. The indices control what order the vertices are received in, and indices can specify the same array element more than once.

Let's say you have the following as your array of 3d position data:

 { {1, 1, 1}, {0, 0, 0}, {0, 0, 1} }

If you simply use this as a stream as is, OpenGL will receive and process these three vertices in order (left-to-right). However, you can also specify a list of indices that will select which vertices to use and in which order.

Let's say we have the following index list:

 {2, 1, 0, 2, 1, 2}

If we render with the above attribute array, but selected by the index list, OpenGL will receive the following stream of vertex attribute data:

 { {0, 0, 1}, {0, 0, 0}, {1, 1, 1}, {0, 0, 1}, {0, 0, 0}, {0, 0, 1} }

The index list is a way of reordering the vertex attribute array data without having to actually change it. This is mostly useful as a means of data compression; in most tight meshes, vertices are used multiple times. Being able to store the vertex attributes for that vertex only once is very economical, as a vertex's attribute data is generally around 32 bytes, while indices are usually 2-4 bytes in size.

A vertex stream can of course have multiple attributes. You can take the above position array and augment it with, for example, a texture coordinate array:

 { {0, 0}, {0.5, 0}, {0, 1} }

The vertex stream you get will be as follows:

 { [{0, 0, 1}, {0, 1}], [{0, 0, 0}, {0.5, 0}], [{1, 1, 1}, {0, 0}], [{0, 0, 1}, {0, 1}], [{0, 0, 0}, {0.5, 0}], [{0, 0, 1}, {0, 1}] }
Note: Oftentimes, authoring tools will have attribute arrays, but each attribute array will have its own separate index array. This is done to make each attribute's array smaller. OpenGL (and Direct3D, if you're wondering) does not allow this. Only one index array can be used, and each attribute array is indexed with the same index. If your mesh data has multiple index arrays, you must convert the format exported by your authoring tool into the format described above.

Primitives

The above stream is not enough to actually draw anything; you must also tell OpenGL how to interpret this stream. And this means telling OpenGL what kind of primitive to interpret the stream as.

There are many ways for OpenGL to interpret a stream of, for example, 12 vertices. It can interpret the vertices as a sequence of triangles, points, or lines. It can even interpret these differently; it can interpret 12 vertices as 4 independent triangles (take every 3 verts as a triangle), as 10 dependent triangles (every group of 3 sequential vertices in the stream is a triangle), and so on.

The main article on Primitives has the details.

Vertex Array Object

Vertex Array Object
Core in version 4.6
Core since version 3.0
Core ARB extension ARB_vertex_array_object

A Vertex Array Object (VAO) is an OpenGL Object that stores all of the state needed to supply vertex data in arrays to a Vertex Rendering command. As a typical OpenGL object, one must bind the object before using it.

However, unlike most OpenGL objects, the functions that manipulate it aren't always named consistently. So it is not always easy to tell which functions manipulate a VAO and which do not.

Note: Every function mentioned in this page, unless explicitly stated, manipulates a VAO. And therefore, you must bind the VAO they are to affect before you can call these functions.

Anatomy

Anatomy of a Vertex Array Object
Diagram of the contents of a vertex array object

When rendering, OpenGL takes a number of vertex streams defined in arrays and reads the data from them, passing that data as attributes to the current Vertex Shader. To do this, OpenGL needs to understand the way data is stored in these arrays.

In C or C++, this would be easy. The type used to declare an array tells the compiler what a single element of the array is. So you would declare a variable like vec4 attrib[20], and the compiler would figure out how to convert attrib[10] into a specific address.

OpenGL requires that you explicitly spell everything out, rather than using a simple C or C++ typedef. So, let us dissect the C++ array declaration, vec4 attrib[20].

This declaration has a name, attrib, which identifies that specific array. Not only does this array object have a name, by declaring it, we are also giving it storage. The array has 20 elements in it, and each element is of the type vec4. Given this information, the compiler can determine the size of an element (based on the sizeof(vec4)). With that information, the compiler knows that attrib[X] is X * sizeof(vec4) bytes from the start of the array.

OpenGL needs a way for us to communicate the same idea, and VAOs are how we do that.

VAO data is split into two distinct kinds of data: a sequence of attribute formats and a sequence of buffer binding points. The attribute format describes what a single element of the array's data looks like; it is the equivalent of describing what a type like vec4 means to the compiler.

The buffer binding point tells OpenGL about the array itself: where the storage for the array is, how many bytes are between elements in the array, and so forth.

Structurally speaking, think of a VAO like a set of C++ structs defined as follows:

struct AttributeFormat
{
  bool      arrayEnabled;
  BaseType  baseType;  //An enumeration of Float, Integer, and Double.
  GLuint    componentCount;
  GLenum    componentType; //An enumeration of OpenGL types.
  bool      normalization;
  GLuint    bufferBindingIndex;
  GLuint    relativeOffset;
};

struct BufferBinding
{
  GLuint    bufferObject;
  GLintptr  baseOffset;
  GLintptr  stride;
  GLuint    instanceDivisor;
};

struct VertexArrayObject
{
  AttributeFormat attribs[GL_MAX_VERTEX_ATTRIBS];
  BufferBinding bindings[GL_MAX_VERTEX_ATTRIB_BINDINGS];
  GLuint indexBuffer;
};

Attribute format

For every Vertex Attribute, a VAO contains a corresponding format, indexed by the attribute index which it describes. Vertex attributes are indexed on the range [0, GL_MAX_VERTEX_ATTRIBS - 1]. All commands that manipulate the format for an attribute take a parameter named index​ or attribindex​ that specifies the attribute being modified.

Attribute formats tell OpenGL whether or not that attribute gets its data from an array at all. This is governed by the array enabled/disabled state for the attribute. A newly created VAO sets all of its attribute formats to be disabled. Enabling an attribute for array access uses this function:

void glEnableVertexAttribArray(GLuint index​);

There is a similar glDisableVertexAttribArray function to disable an attribute array.

If an attribute's array is disabled, then the rest of the state for the attribute format is irrelevant.

Each enabled attribute array gets its data from a source buffer. Because multiple attributes can read data from the same buffer, this information is specified as an index into the array of buffer binding points. Each attribute format specifies which buffer binding point provides the storage for the array.

By default, the buffer binding index for an attribute is the same as the attribute index. With OpenGL 4.3 or ARB_vertex_attrib_binding, the buffer binding index that provides storage for an attribute is specified with:

void glVertexAttribBinding(GLuint attribindex​, GLuint bindingindex​);

bindingindex​ is the index of the buffer binding point that provides the storage for the attribute format array specified by attribindex​.

While each attribute format can only source data from a single buffer binding point, a single buffer binding point can provide data for multiple attributes. This allows multiple arrays to live in the same memory, so as to provide the ability to have arrays of larger data structures, as described in the section on interleaved attributes below.

Without OpenGL 4.3 or ARB_vertex_attrib_binding, the buffer binding index is always the same as the attribute index.

Vertex format

The previous format information is equivalent to telling C++ that there is a variable named attrib and that it is an array with storage. The rest of the information in an attribute format describes the format of a single element's data.

The format parameters describe how to interpret a single vertex of information from the array. Vertex Shader input variables can be declared as a single-precision floating-point GLSL type (such as float or vec4), an integral type (such as uint or ivec3), or a double-precision type (such as double or dvec4). Double-precision attributes are only available in OpenGL 4.1 or ARB_vertex_attrib_64bit.

These three general types correspond to the three functions used to define the rest of the vertex format's data:

void glVertexAttribFormat(GLuint attribindex​, GLint size​, GLenum type​, GLboolean normalized​, GLuint relativeoffset​);

void glVertexAttribIFormat(GLuint attribindex​, GLint size​, GLenum type​, GLuint relativeoffset​);

void glVertexAttribLFormat(GLuint attribindex​, GLint size​, GLenum type​, GLuint relativeoffset​);

These three functions, in addition to setting the other format parameters, set the basic type of the attribute: float, integer, or double, respectively. If the vertex shader input variable with the attribindex​ attribute location does not match the basic type set by glVertexAttrib*Format, then the value of the attribute will be undefined when you render.

The meaning of the relativeoffset​ parameter will be discussed in the section on buffer bindings. While it is a format parameter, it means a great deal to how the attribute gets its data from the buffer it is associated with.

Each individual attribute index provides a vector of some type, from 1 to 4 components in length. The size​ parameter of the glVertexAttrib*Format functions defines the number of components in the vector provided by the attribute array. It can be any number 1-4.

Note that size​ does not have to exactly match the size used by the vertex shader. If the vertex shader has fewer components than the attribute provides, then the extras are ignored. If the vertex shader has more components than the array provides, the extras are given values from the vector (0, 0, 0, 1) for the missing XYZW components.

The latter is not true for double-precision inputs (OpenGL 4.1 or ARB_vertex_attrib_64bit). If the shader attribute has more components than the provided value, the extra components will have undefined values.

Component type

The type of the vector component in the buffer object is given by the type​ and normalized​ parameters, where applicable. This type will be converted into the actual type used by the vertex shader. The different glVertexAttrib*Format functions take different type​s. Here is a list of the types and their meanings for each function:

glVertexAttribFormat:

  • Floating-point types. normalized​ must be GL_FALSE
    • GL_HALF_FLOAT​: A 16-bit half-precision floating-point value. Equivalent to GLhalf.
    • GL_FLOAT​: A 32-bit single-precision floating-point value. Equivalent to GLfloat.
    • GL_DOUBLE​: A 64-bit double-precision floating-point value. Never use this. It's technically legal, but almost certainly a performance trap. Equivalent to GLdouble.
    • GL_FIXED​: A 16.16-bit fixed-point two's complement value. Equivalent to GLfixed.
  • Integer types; these are converted to floats automatically. If normalized​ is GL_TRUE, then the value will be converted to a float via integer normalization (an unsigned byte value of 255 becomes 1.0f). If normalized​ is GL_FALSE, it will be converted directly to a float as if by C-style casting (255 becomes 255.0f, regardless of the size of the integer).
    • GL_BYTE​: A signed 8-bit two's complement value. Equivalent to GLbyte.
    • GL_UNSIGNED_BYTE​: An unsigned 8-bit value. Equivalent to GLubyte.
    • GL_SHORT​: A signed 16-bit two's complement value. Equivalent to GLshort.
    • GL_UNSIGNED_SHORT​: An unsigned 16-bit value. Equivalent to GLushort.
    • GL_INT​: A signed 32-bit two's complement value. Equivalent to GLint.
    • GL_UNSIGNED_INT​: An unsigned 32-bit value. Equivalent to GLuint.
    • GL_INT_2_10_10_10_REV​: A series of four values packed in a 32-bit unsigned integer. Each individual packed value is a two's complement signed integer, but the overall bitfield is unsigned. The bitdepths for the packed fields are 2, 10, 10, and 10, but in reverse order. So the least significant 10 bits are the first component, the next 10 bits are the second component, and so on. If you use this, the size​ must be 4 (or GL_BGRA, as shown below).
    • GL_UNSIGNED_INT_2_10_10_10_REV: A series of four values packed in a 32-bit unsigned integer. The packed values are unsigned. The bitdepths for the packed fields are 2, 10, 10, and 10, but in reverse order. So the least significant 10 bits are the first component, the next 10 bits are the second component, and so on. If you use this, the size​ must be 4 (or GL_BGRA, as shown below).
    • GL_UNSIGNED_INT_10F_11F_11F_REV: Requires OpenGL 4.4 or ARB_vertex_type_10f_11f_11f_rev. This represents a 3-element vector of floats, packed into a 32-bit unsigned integer. The bitdepth for the packed fields is 10, 11, 11, but in reverse order. So the lowest 11 bits are the first component, the next 11 are the second, and the last 10 are the third. These floats are the low bitdepth floats, packed exactly like the image format GL_R11F_G11F_B10F. If you use this, the size​ must be 3.

glVertexAttribIFormat: This function only feeds attributes declared in GLSL as signed or unsigned integers, or vectors of the same.

  • GL_BYTE​: A signed 8-bit two's complement value. Equivalent to GLbyte.
  • GL_UNSIGNED_BYTE​: An unsigned 8-bit value. Equivalent to GLubyte.
  • GL_SHORT​: A signed 16-bit two's complement value. Equivalent to GLshort.
  • GL_UNSIGNED_SHORT​: An unsigned 16-bit value. Equivalent to GLushort.
  • GL_INT​: A signed 32-bit two's complement value. Equivalent to GLint.
  • GL_UNSIGNED_INT​: An unsigned 32-bit value. Equivalent to GLuint.

glVertexAttribLFormat: This function only feeds attributes declared in GLSL as double or vectors of the same.

  • GL_DOUBLE: A 64-bit double-precision float value. Equivalent to GLdouble.

Here is a visual demonstration of the ordering of the 2_10_10_10_REV types:

31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
|  W |              Z              |              Y              |               X            |
-----------------------------------------------------------------------------------------------

Here is a visual demonstration of the ordering of the 10F_11F_11F_REV type:

31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
|             Z              |              Y                 |               X               |
-----------------------------------------------------------------------------------------------

D3D compatibility

D3D Compatible Format
Core in version 4.6
Core since version 3.2
Core ARB extension ARB_vertex_array_bgra

When using glVertexAttribFormat (or the older glVertexAttribPointer), and only these functions (not the I or L forms), the size​ field can be a number 1-4, but it can also be GL_BGRA.

This is somewhat equivalent to a size of 4, in that 4 components are transferred. However, as the name suggests, this "size" reverses the order of the first 3 components.

This special mode is intended specifically for compatibility with certain Direct3D vertex formats. Because of that, this special size​ can only be used in conjunction with:

  • type​ must be GL_UNSIGNED_BYTE, GL_INT_2_10_10_10_REV​ or GL_UNSIGNED_INT_2_10_10_10_REV​
  • normalized​ must be GL_TRUE

So you cannot pass non-normalized values with this special size​.

Note: This special mode should only be used if you have data that is formatted in D3D's style and you need to use it in your GL application. Don't bother otherwise; you will gain no performance from it.

Here is a visual description:

31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
|  W |              X              |              Y              |               Z            |
-----------------------------------------------------------------------------------------------

Notice how X comes second and Z last. X is equivalent to R and Z is equivalent to B, so it comes in the reverse of BGRA order: ARGB.

Buffer bindings

The attribute format information describes to OpenGL the meaning of a conceptual C++ type declaration. The purpose of a buffer binding is to provide storage for an array of such conceptual declarations.

OpenGL provides a number of buffer bindings, indexed on the range [0, GL_MAX_VERTEX_ATTRIB_BINDINGS - 1]. Each binding acts as an array, with all of the information needed to convert an index into that array into an address in memory. Attribute formats define what a particular element in the array looks like, but the buffer binding explains how to get from one index in memory to the next.

Given a C++ declaration vec3 arr[20];, the expression arr[3] performs an array indexing operation. The compiler must perform the following steps:

  1. get the address of the array named by arr.
  2. calculate the size of a single array element. This is the size of the vec3 data type.
  3. multiply the element size by the array index 3.
  4. Add the previous value to the address of the array.

These steps produce the memory address of arr[3] in C++. OpenGL vertex arrays work in an analogous fashion.

The "address" of an array in OpenGL is defined by a Buffer Object and a byte offset into that buffer object. The offset allows multiple arrays to come from the same buffer object.

Because multiple attributes can come from the same buffer binding, a binding cannot compute the element size from the type information of its attribute(s). So instead, this size must be specified directly by the user.

These pieces of information for a buffer binding are specified by the following function:

void glBindVertexBuffer(GLuint bindingindex​, GLuint buffer​, GLintptr offset​, GLintptr stride​);

The bindingindex​ specifies which binding is being defined by this function. buffer​ is the buffer object to be used by this binding; providing a value of 0 means that no buffer is used (rendering with an array-enabled attribute whose buffer binding has a buffer object of 0 is an error). offset​ is the byte offset into buffer​ where the buffer binding's data starts. stride​ specifies the number of bytes between indices in the array.

As such, the formula for computing the address of any index is as simple as follows:

   addressof(buffer) + offset + (stride * index)

Vertex attribute offset

The above analogy works fine when we have a single attribute pulling data from a buffer binding. But consider this C++ array definition:

struct Vertex
{
	vec2   alpha;
	ivec4  beta;
};

Vertex arr[20];

So, how does C++ compute the address for something like arr[3].beta? It works almost exactly like above, but with one extra step:

  1. get the address of the array named by arr.
  2. calculate the size of a single array element. This is the size of the Vertex data type.
  3. multiply the element size by the array index 3.
  4. Add the previous value to the address of the array.
  5. Add to the previous value the byte offset to beta.

Each sub-member of the data structure has a particular byte offset, relative to the definition of Vertex. In C++, alpha would have a byte offset of 0, while beta would have an offset of 8 (assuming vec2 contains 2 32-bit floats and no padding).

In OpenGL, the array arr is a buffer binding, and Vertex is its type. So the stride of the binding would be the size of Vertex.

The equivalent of alpha and beta in OpenGL would be different attributes, which each have their own attribute data formats. Therefore, to complete the analogy, each attribute must have a byte offset which is used in conjunction with the buffer binding to find the address of that particular attribute's data.

This offset is provided by the relativeoffset​ parameter of the glVertexAttrib*Format function family.

Note that relativeoffset​ has much more strict limits than the buffer binding's offset​. The limit on relativeoffset​ is queried through GL_MAX_VERTEX_ATTRIB_RELATIVE_OFFSET, and is only guaranteed to be at least 2047 bytes. Also, note that relativeoffset​ is a GLuint (32-bits), while offset​ is a GLintptr, which is the size of the pointer (so 64-bits in a 64-bit build). So obviously the relativeoffset​ is a much more limited quantity.

2047 bytes is usually sufficient for dealing with struct-like vertex attributes.

Interleaved attributes

Doing what we did with putting individual attributes in a struct that is used as an array is called "interleaving" the attributes. Let us look at the code difference between the case of individual arrays and a single, interleaved array.

Here are the C++ definitions:

//Individual arrays
vec3 positions[VERTEX_COUNT];
vec3 normals[VERTEX_COUNT];
uvec4_byte colors[VERTEX_COUNT];

//Interleaved arrays
struct Vertex
{
  vec3 positions;
  vec3 normals;
  uvec4_byte colors;
};

Vertex vertices[VERTEX_COUNT];

The equivalent OpenGL definition for these would be as follows:

//Individual arrays
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0);
glVertexAttribBinding(0, 0);
glVertexAttribFormat(1, 3, GL_FLOAT, GL_FALSE, 0);
glVertexAttribBinding(1, 1);
glVertexAttribFormat(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, 0);
glVertexAttribBinding(2, 2);

glBindVertexBuffer(0, position_buff, 0, sizeof(positions[0]));
glBindVertexBuffer(1, normal_buff, 0, sizeof(normals[0]));
glBindVertexBuffer(2, color_buff, 0, sizeof(colors[0]));

//Interleaved arrays
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, offsetof(Vertex, positions));
glVertexAttribBinding(0, 0);
glVertexAttribFormat(1, 3, GL_FLOAT, GL_FALSE, offsetof(Vertex, normals));
glVertexAttribBinding(1, 0);
glVertexAttribFormat(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, offsetof(Vertex, colors));
glVertexAttribBinding(2, 0);

glBindVertexBuffer(0, buff, 0, sizeof(Vertex));

The macro offsetof computes the byte offset of the given field in the given struct. This is used as the relativeoffset​ of each attribute.

As a general rule, you should use interleaved attributes wherever possible. Obviously if you need to change certain attributes and not others, then interleaving the ones that change with those that don't is not a good idea. But you should interleave the constant attributes with each other, and the changing attributes with those that change at the same time.

Index buffers

Indexed rendering, as defined above, requires an array of indices; all vertex attributes will use the same index from this index array. The index array is provided by a Buffer Object bound to the GL_ELEMENT_ARRAY_BUFFER binding point.

Note: This binding is unlike other buffer bindings, because it is part of the VAO itself. That is, it isn't really "binding" the buffer to the context; you're attaching the buffer to the VAO. As such, if you call glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buff) and no VAO is bound, then an error results.

When a buffer is bound to GL_ELEMENT_ARRAY_BUFFER, all drawing commands of the form gl*Draw*Elements* will use indexes from that buffer. Indices can be unsigned bytes, unsigned shorts, or unsigned ints. If no index buffer is bound, and you use an "Elements" rendering function, you get an error.

Instanced arrays

Instanced arrays
Core in version 4.6
Core since version 3.3
ARB extension ARB_instanced_arrays

Normally, vertex attribute arrays are indexed based on the index buffer, or when doing array rendering, once per vertex from the start point to the end. However, when doing instanced rendering, it is often useful to have a means of getting per-instance data other than accessing it directly in the shader via a Uniform Buffer Object, a Buffer Texture, or some other such mechanism.

It is possible to have one or more attribute arrays indexed, not by the index buffer or direct array access, but by the instance count. This is done via this function:

void glVertexAttribDivisor(GLuint index​, GLuint divisor​);

The index​ is the attribute index to set. If divisor​ is zero, then the attribute acts like normal, being indexed by the array or index buffer. If divisor​ is non-zero, then the current instance is divided by this divisor, and the result of that is used to access the attribute array.

The "current instance" mentioned above starts at the base instance for instanced rendering, increasing by 1 for each instance in the draw call. Note that this is not how the gl_InstanceID is computed for Vertex Shaders; that is not affected by the base instance. If no base instance is specified, then the current instance starts with 0.

This is generally considered the most efficient way of getting per-instance data to the vertex shader. However, it is also the most resource-constrained method in some respects. OpenGL implementations usually offer a fairly restricted number of vertex attributes (16 or so), and you will need some of these for the actual per-vertex data. So that leaves less room for your per-instance data. While the number of instances can be arbitrarily large (unlike UBO arrays), the amount of per-instance data is much smaller.

However, that should be plenty for a quaternion orientation and a position, for a simple transformation. That would even leave one float (the position only needs to be 3D) to provide a fragment shader an index to access an Array Texture.

Multibind and separation

Object Multi-bind
Core in version 4.6
Core since version 4.4
Core ARB extension ARB_multi_bind

It is often useful as a developer to maintain the same format while quickly switching between multiple buffers to pull data from. To achieve this, the following function is available:

void glBindVertexBuffers(GLuint first​, GLsizei count​, const GLuint *buffers​, const GLintptr *offsets​, const GLsizei *strides​);

This function is mostly equivalent to calling glBindVertexBuffer (note the lack of the "s" at the end) on all count​ elements of the buffers​, offsets​, and strides​ arrays. Each time, the buffer binding index is incremented, starting at first​.

The differences from such a loop are as follows. buffers​ can be NULL; if it is, then the function will completely ignore offsets​ and strides​ as well. Instead, it will simply bind 0 to every buffer binding index specified by first​ and count​.

Matrix attributes

Attributes in GLSL can be of matrix types. However, our attribute binding functions only handle 1D vectors with up to 4 components. OpenGL solves this problem by converting matrix GLSL attributes into multiple sub-vectors, with each sub-vector having its own attribute index.

If you directly assign an attribute index to a matrix type, it implicitly takes up more than one attribute index. The number of attributes a matrix takes up is the number of columns of the matrix: a mat2 matrix will take 2, a mat2x4 matrix will take 2, while a mat4x2 will take 4. The number of components for each attribute is the number of rows of the matrix. So a mat4x2 will take 4 attribute locations and use 2 components in each attribute.

Each bound attribute in the VAO therefore fills in a single column, starting with the left-most column and progressing right. Thus, if you have a 3x3 matrix, and you assign it to attribute index 3, it will take attribute indices 3, 4, and 5. Each of these indices will be 3 elements in size. Attribute 3 is the matrix column 0, attribute 4 is column 1, and 5 is column 2.

OpenGL will allocate locations for matrix attributes contiguously as above. So if you query the location of a 3x3 matrix attribute, you will get one value, but the next two locations are also valid, active attributes.

Double-precision matrices (where available) will take up twice as much space per-component. So a dmat3x3 will take up 6 attribute indices, two indices for each column. Whereas a dmat3x2 will take up only 3 attribute indices, with one index per column.

Non-array attribute values

A vertex shader can read an attribute that is not currently enabled (via glEnableVertexAttribArray). The value that it gets is defined by special context state, which is *not* part of the VAO.

Because the attribute is defined by context state, it is constant over the course of a single draw call. Each attribute index has a separate value.

Warning: Every time you issue a drawing command with an array enabled, the corresponding context attribute values become undefined. So if you want to, for example, use the non-array attribute index 3 after previously using an array in index 3, you need to reset it to a known value.

The initial value for these is a floating-point (0.0, 0.0, 0.0, 1.0). Just as with array attribute values, non-array values are typed to float, integral, or double-precision (where available).

To change the value, you use a function of this form:

void glVertexAttrib*(GLuint index​, Type values​);

void glVertexAttribN*(GLuint index​, Type values​);

void glVertexAttribP*(GLuint index​, GLenum type​, GLboolean normalized​, Type values​);

void glVertexAttribI*(GLuint index​, Type values​);

void glVertexAttribL*(GLuint index​, Type values​);

The * is the type descriptor, using the traditional OpenGL syntax. The index​ is the attribute index to set. The Type is whatever type is appropriate for the * type specifier. If you set fewer than 4 of the values in the attribute, the rest will be filled in from (0, 0, 0, 1), just as with array attributes. And just as for attributes provided by arrays, double-precision inputs (GL 4.1 or ARB_vertex_attrib_64bit) that have more components than provided leave the extra components with undefined values.

The N version of these functions provide values that are normalized, either signed or unsigned as per the function's type. The unadorned versions always assume integer values are not normalized. The P versions are for packed integer types, and they can be normalized or not. All three of these variants provide float attribute data, so they convert integers to floats.

To provide non-array integral values for integral attributes, use the I versions. For double-precision attributes (using the same rules for attribute index counts as double-precision arrays), use L.

Note that these non-array attribute values are not part of the VAO state; they are context state. Changes to them do not affect the VAO.

Note: It is not recommended that you use these. The performance characteristics of using fixed attribute data are unknown, and it is not a high-priority case that OpenGL driver developers optimize for. They might be faster than uniforms, or they might not.

Drawing

Once the VAO has been properly set up, the arrays of vertex data can be rendered as a Primitive. OpenGL provides innumerable different options for rendering vertex data.
