User:Alfonse/Vertex Specification


Vertex Specification is the process of setting up the necessary objects for rendering with a particular shader program, as well as the process of using those objects to render.

Theory

Submitting vertex data for rendering requires creating a stream of vertices, and then telling OpenGL how to interpret that stream.

Vertex Stream

In order to render at all, you must be using a shader program or program pipeline which includes a Vertex Shader. The VS's user-defined input variables define the list of expected Vertex Attributes for that shader. This set of attributes defines what values the vertex stream must provide to properly render with this shader.

For each attribute in the shader, you must provide an array of data for that attribute. All of these arrays must have the same number of elements. Note that these arrays are a bit more flexible than C arrays, but overall work the same way.

The order of vertices in the stream is very important; this order defines how OpenGL will process and render the Primitives the stream generates. There are two ways of rendering with arrays of vertices. You can generate a stream in the array's order, or you can use a list of indices to define the order. The indices control what order the vertices are received in, and indices can specify the same array element more than once.

Let's say you have the following as your array of 3d position data:

 { {1, 1, 1}, {0, 0, 0}, {0, 0, 1} }

If you simply use this as a stream as is, OpenGL will receive and process these three vertices in order (left-to-right). However, you can also specify a list of indices that will select which vertices to use and in which order.

Let's say we have the following index list:

 {2, 1, 0, 2, 1, 2}

If we render with the above attribute array, but selected by the index list, OpenGL will receive the following stream of vertex attribute data:

 { {0, 0, 1}, {0, 0, 0}, {1, 1, 1}, {0, 0, 1}, {0, 0, 0}, {0, 0, 1} }

The index list is a way of reordering the vertex attribute array data without having to actually change it. This is mostly useful as a means of data compression; in most tight meshes, vertices are used multiple times. Being able to store the vertex attributes for that vertex only once is very economical, as a vertex's attribute data is generally around 32 bytes, while indices are usually 2-4 bytes in size.

A vertex stream can of course have multiple attributes. You can take the above position array and augment it with, for example, a texture coordinate array:

 { {0, 0}, {0.5, 0}, {0, 1} }

The vertex stream you get will be as follows:

 { [{0, 0, 1}, {0, 1}], [{0, 0, 0}, {0.5, 0}], [{1, 1, 1}, {0, 0}], [{0, 0, 1}, {0, 1}], [{0, 0, 0}, {0.5, 0}], [{0, 0, 1}, {0, 1}] }
Note: Oftentimes, authoring tools will have attribute arrays, but each attribute array will have its own separate index array. This is done to make each attribute's array smaller. OpenGL (and Direct3D, if you're wondering) does not allow this. Only one index array can be used, and each attribute array is indexed with the same index. If your mesh data has multiple index arrays, you must convert the format exported by your authoring tool into the format described above.
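Below is a minimal sketch of that conversion, assuming the source data has one index array per attribute (positions and texture coordinates here). The names (SingleIndexMesh, Deindex, Vec3, Vec2) are illustrative, not part of OpenGL or any exporter's API; the idea is simply to emit a new combined vertex for each unique combination of per-attribute indices.

#include <array>
#include <cstddef>
#include <map>
#include <vector>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

struct SingleIndexMesh
{
  std::vector<Vec3>     positions;
  std::vector<Vec2>     texCoords;
  std::vector<unsigned> indices;  //one index per vertex, shared by all attributes
};

SingleIndexMesh Deindex(
  const std::vector<Vec3> &srcPositions, const std::vector<unsigned> &posIndices,
  const std::vector<Vec2> &srcTexCoords, const std::vector<unsigned> &texIndices)
{
  SingleIndexMesh mesh;
  std::map<std::array<unsigned, 2>, unsigned> cache; //(posIdx, texIdx) -> new index

  for(std::size_t i = 0; i < posIndices.size(); ++i)
  {
    std::array<unsigned, 2> key = {posIndices[i], texIndices[i]};
    auto found = cache.find(key);
    if(found == cache.end())
    {
      //First time this combination appears: emit a new, combined vertex.
      unsigned newIndex = (unsigned)mesh.positions.size();
      mesh.positions.push_back(srcPositions[key[0]]);
      mesh.texCoords.push_back(srcTexCoords[key[1]]);
      cache.emplace(key, newIndex);
      mesh.indices.push_back(newIndex);
    }
    else
    {
      mesh.indices.push_back(found->second);
    }
  }
  return mesh;
}

Keying the map on the tuple of source indices keeps the output vertex array as small as the data allows while producing the single shared index list OpenGL expects.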

Primitives

The above stream is not enough to actually draw anything; you must also tell OpenGL how to interpret this stream. And this means telling OpenGL what kind of primitive to interpret the stream as.

There are many ways for OpenGL to interpret a stream of, for example, 12 vertices. It can interpret the vertices as a sequence of triangles, points, or lines. It can even interpret these in different groupings; it can interpret 12 vertices as 4 independent triangles (taking every 3 vertices as a triangle), as 10 connected triangles (every group of 3 sequential vertices in the stream forms a triangle), and so on.

The main article on Primitives has the details.

Vertex Array Object

Vertex Array Object
Core in version 4.6
Core since version 3.0
Core ARB extension ARB_vertex_array_object

A Vertex Array Object (VAO) is an OpenGL Object that stores all of the state needed to supply vertex data in arrays to a Vertex Rendering command. As with most OpenGL objects, you must bind the VAO before you can use it.

However, unlike with most OpenGL objects, the functions that manipulate a VAO aren't named consistently, so it is not always easy to tell which functions affect the VAO and which do not.

Note: Every function mentioned in this page, unless explicitly stated, manipulates a VAO. And therefore, you must bind the VAO they are to affect before you can call these functions.
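For example, a minimal, hedged setup sequence might look like this (the variable name vao is a placeholder):

GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
//...calls such as glEnableVertexAttribArray, glVertexAttribFormat, etc. now modify vao...
glBindVertexArray(0); //unbind once the VAO's state has been specified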

Anatomy

[Figure: Anatomy of a Vertex Array Object (diagram of the contents of a vertex array object)]

When rendering, OpenGL takes a number of vertex streams defined in arrays and reads the data from them, passing that data as attributes to the current Vertex Shader. To do this, OpenGL needs to understand the way data is stored in these arrays.

In C or C++, this would be easy. You have a variable declaration like this: vec4 attrib[20]. The compiler will use that to figure out how to convert code like attrib[10] into a specific address. So let us dissect the C++ array declaration, vec4 attrib[20].

This declaration has a name, attrib, which identifies that specific array. Not only does this array have a name; by declaring it, we are also giving it storage. The array has 20 elements in it, and each element is of the type vec4. Given this information, the compiler can determine the size of an element (based on sizeof(vec4)). And because arrays in C/C++ are tightly packed, the compiler knows that attrib[X] is X * sizeof(vec4) bytes from the start of the array.

A vertex array object is like an array variable declaration, though obviously it is a lot more verbose.

VAO data is split into two distinct kinds of data: a sequence of attribute formats and a sequence of buffer binding points. The attribute format describes what a single element of the array's data looks like; it is the equivalent of describing what a type like vec4 means to the compiler.

The buffer binding point tells OpenGL about the array itself: where the storage for the array is, how many bytes are between elements in the array, and so forth.

Structurally speaking, think of a VAO like a set of C++ structs defined as follows:

struct AttributeFormat
{
  bool      arrayEnabled;
  BaseType  baseType;  //An enumeration of Float, Integer, and Double.
  GLuint    componentCount;
  GLenum    componentType; //An enumeration of OpenGL types.
  bool      normalized;
  GLuint    relativeOffset;
  GLuint    bufferBindingIndex;
};

struct BufferBinding
{
  GLuint    bufferObject;
  GLintptr  baseOffset;
  GLintptr  stride;
  GLuint    instanceDivisor;
};

struct VertexArrayObject
{
  AttributeFormat attribs[GL_MAX_VERTEX_ATTRIBS];
  BufferBinding bindings[GL_MAX_VERTEX_ATTRIB_BINDINGS];
  GLuint indexBuffer;
};

Attribute format

For every Vertex Attribute, a VAO contains a corresponding format, indexed by the attribute index which it describes. Vertex attributes are indexed on the range [0, GL_MAX_VERTEX_ATTRIBS - 1]. All commands that manipulate the format for an attribute take a parameter named index​ or attribindex​ that specifies the attribute being modified.

Attribute formats tell OpenGL whether or not that attribute gets its data from an array at all. This is governed by the array enabled/disabled state for the attribute. A newly created VAO has all of its attribute arrays disabled. Enabling an attribute for array access uses this function:

void glEnableVertexAttribArray(GLuint index​);

There is a similar glDisableVertexAttribArray function to disable array access for an attribute.

If an attribute's array is disabled, then the rest of the state for the attribute format is irrelevant.
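As a hedged example, with a VAO bound, the following enables array access for attributes 0 and 1 and leaves attribute 2 disabled (which is already the default for a fresh VAO):

glBindVertexArray(vao);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glDisableVertexAttribArray(2); //redundant for a newly created VAO, shown for completeness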

Each enabled attribute array gets its data from a source buffer. Because multiple attributes can read data from the same buffer, this information is specified as an index into the array of buffer binding points. Each attribute format specifies which buffer binding point provides the storage for the array.

By default, the buffer binding index for an attribute is the same as the attribute index. With OpenGL 4.3 or ARB_vertex_attrib_binding, the buffer binding index that provides storage for an attribute is specified with:

void glVertexAttribBinding(GLuint attribindex​, GLuint bindingindex​);

bindingindex​ is the index of the buffer binding point that provides the storage for the attribute array specified by attribindex​.

While each attribute format can only source data from a single buffer binding point, multiple buffer binding points can be used by different attributes. This allows multiple arrays to live in the same memory, so as to provide the ability to have arrays of larger data structures.

Without OpenGL 4.3 or ARB_vertex_attrib_binding, the buffer binding index is always the same as the attribute index.
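For instance, a hedged sketch of two attributes sharing one buffer binding, with a third attribute reading from another binding (the indices here are arbitrary):

glVertexAttribBinding(0, 0); //attribute 0 reads from buffer binding 0
glVertexAttribBinding(1, 0); //attribute 1 also reads from buffer binding 0
glVertexAttribBinding(2, 1); //attribute 2 reads from buffer binding 1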

Vertex format

The format information above is equivalent to telling C++ that there is a variable named attrib, that it is an array, and that it has storage. The rest of an attribute's format describes what a single element of that array's data looks like.

The format parameters describe how to interpret a single vertex of information from the array. Vertex Shader input variables can be declared as a single-precision floating-point GLSL type (such as float or vec4), an integral type (such as uint or ivec3), or a double-precision type (such as double or dvec4). Double-precision attributes are only available in OpenGL 4.1 or ARB_vertex_attrib_64bit.

These three basic types correspond to the three functions used to define the rest of the vertex format's data:

void glVertexAttribFormat(GLuint attribindex​, GLint size​, GLenum type​, GLboolean normalized​, GLuint relativeoffset​);

void glVertexAttribIFormat(GLuint attribindex​, GLint size​, GLenum type​, GLuint relativeoffset​);

void glVertexAttribLFormat(GLuint attribindex​, GLint size​, GLenum type​, GLuint relativeoffset​);

In addition to their other parameters, these three functions each set the basic type of the attribute: float, integer, or double, respectively. So if you call glVertexAttribIFormat, then you have set the basic type for attribindex to integer. If the vertex shader input variable with the attribindex attribute location does not match the basic type set by glVertexAttrib*Format, then the value of the attribute in the VS will be undefined when you render.

The meaning of the relativeoffset​ parameter will be discussed in the section on buffer bindings. While it is a format parameter, it means a great deal to how the attribute gets its data from the buffer it is associated with.

Each individual attribute index represents a single vector, which has 1 to 4 components. The size​ parameter of the glVertexAttrib*Format functions defines the number of components in the vector provided by the attribute array. It can be any number 1-4.

Note that size​ does not have to exactly match the size used by the vertex shader. If the vertex shader reads fewer components than the attribute provides, then the extras are simply ignored (they will likely be read and processed from memory, but those values won't be used in the VS). If the vertex shader has more components than the array provides, the extras are given values from the vector (0, 0, 0, 1) for the missing XYZW components.

The latter is not true for double-precision inputs (OpenGL 4.1 or ARB_vertex_attrib_64bit). If the shader attribute has more components than the provided value, the extra components will have undefined values.

Also, VS inputs can use the component layout qualifier, to allow multiple VS input variables to read from the same attribute index. In this case, all input variables that use the same attribute index must also share the same basic type (float, integer, or double).
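As a hedged illustration, suppose a vertex shader declares a vec4 input at location 0 (floating-point basic type) and an ivec2 input at location 1 (integral basic type); the formats might then be specified as follows:

glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0); //3 components from the array; W is filled in as 1.0
glVertexAttribIFormat(1, 2, GL_INT, 0);            //integral inputs must use the I variant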

Component type

The type of the vector component in the buffer object is given by the type​ and normalized​ parameters, where applicable. This type will be converted into the actual type used by the vertex shader. The different glVertexAttrib*Format functions take different type​s. Here is a list of the types and their meanings for each function:

glVertexAttribFormat:

  • Floating-point types. normalized​ must be GL_FALSE
    • GL_HALF_FLOAT​: A 16-bit half-precision floating-point value. Equivalent to GLhalf.
    • GL_FLOAT​: A 32-bit single-precision floating-point value. Equivalent to GLfloat.
    • GL_DOUBLE​: A 64-bit double-precision floating-point value. Never use this. It's technically legal, but almost certainly a performance trap. Equivalent to GLdouble.
    • GL_FIXED​: A 16.16-bit fixed-point two's complement value. Equivalent to GLfixed.
  • Integer types; these are converted to floats automatically. If normalized​ is GL_TRUE, then the value will be converted to a float via integer normalization (an unsigned byte value of 255 becomes 1.0f). If normalized​ is GL_FALSE, it will be converted directly to a float as if by C-style casting (255 becomes 255.0f, regardless of the size of the integer).
    • GL_BYTE​: A signed 8-bit two's complement value. Equivalent to GLbyte.
    • GL_UNSIGNED_BYTE​: An unsigned 8-bit value. Equivalent to GLubyte.
    • GL_SHORT​: A signed 16-bit two's complement value. Equivalent to GLshort.
    • GL_UNSIGNED_SHORT​: An unsigned 16-bit value. Equivalent to GLushort.
    • GL_INT​: A signed 32-bit two's complement value. Equivalent to GLint.
    • GL_UNSIGNED_INT​: An unsigned 32-bit value. Equivalent to GLuint.
    • GL_INT_2_10_10_10_REV​: A series of four values packed in a 32-bit unsigned integer. Each individual packed value is a two's complement signed integer, but the overall bitfield is unsigned. The bitdepths for the packed fields are 2, 10, 10, and 10, but in reverse order. So the least significant 10 bits are the first component, the next 10 bits are the second component, and so on. If you use this, the size must be 4 (or GL_BGRA, as shown below).
    • GL_UNSIGNED_INT_2_10_10_10_REV: A series of four values packed in a 32-bit unsigned integer. The packed values are unsigned. The bitdepths for the packed fields are 2, 10, 10, and 10, but in reverse order. So the least significant 10 bits are the first component, the next 10 bits are the second component, and so on. If you use this, the size must be 4 (or GL_BGRA, as shown below).
    • GL_UNSIGNED_INT_10F_11F_11F_REV: Requires OpenGL 4.4 or ARB_vertex_type_10f_11f_11f_rev. This represents a 3-element vector of floats, packed into a 32-bit unsigned integer. The bitdepths for the packed fields are 10, 11, and 11, but in reverse order. So the lowest 11 bits are the first component, the next 11 are the second, and the last 10 are the third. These are low-bitdepth floats, packed exactly like the image format GL_R11F_G11F_B10F. If you use this, the size must be 3.

glVertexAttribIFormat: This function only feeds attributes declared in GLSL as signed or unsigned integers, or vectors of the same.

  • GL_BYTE​: A signed 8-bit two's complement value. Equivalent to GLbyte.
  • GL_UNSIGNED_BYTE​: An unsigned 8-bit value. Equivalent to GLubyte.
  • GL_SHORT​: A signed 16-bit two's complement value. Equivalent to GLshort.
  • GL_UNSIGNED_SHORT​: An unsigned 16-bit value. Equivalent to GLushort.
  • GL_INT​: A signed 32-bit two's complement value. Equivalent to GLint.
  • GL_UNSIGNED_INT​: An unsigned 32-bit value. Equivalent to GLuint.

glVertexAttribLFormat: This function only feeds attributes declared in GLSL as double or vectors of the same.

  • GL_DOUBLE: A 64-bit double-precision float value. Equivalent to GLdouble.

Here is a visual demonstration of the ordering of the 2_10_10_10_REV types:

31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
|  W |              Z              |              Y              |               X            |
-----------------------------------------------------------------------------------------------

Here is a visual demonstration of the ordering of the 10F_11F_11F_REV type:

31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
|             Z              |              Y                 |               X               |
-----------------------------------------------------------------------------------------------
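For concreteness, here is a hedged C++ helper that packs a signed-normalized 4-component value into the GL_INT_2_10_10_10_REV layout shown in the first diagram (X in the lowest 10 bits, W in the top 2). The function name PackSnorm2_10_10_10 is ours, not an OpenGL entry point, and it assumes the usual signed-normalized scale of 511 for the 10-bit fields:

#include <algorithm>
#include <cmath>
#include <cstdint>

static uint32_t PackSnorm2_10_10_10(float x, float y, float z, float w = 0.0f)
{
  auto pack10 = [](float v) -> uint32_t {
    v = std::max(-1.0f, std::min(1.0f, v));       //clamp to [-1, 1]
    int32_t i = (int32_t)std::lround(v * 511.0f); //scale to the signed 10-bit range
    return (uint32_t)i & 0x3FFu;
  };
  auto pack2 = [](float v) -> uint32_t {
    v = std::max(-1.0f, std::min(1.0f, v));
    int32_t i = (int32_t)std::lround(v);          //signed 2-bit range
    return (uint32_t)i & 0x3u;
  };
  return pack10(x) | (pack10(y) << 10) | (pack10(z) << 20) | (pack2(w) << 30);
}

Such a packed value would be used with glVertexAttribFormat(index, 4, GL_INT_2_10_10_10_REV, GL_TRUE, ...), one 32-bit integer per vertex.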

D3D compatibility

D3D Compatible Format
Core in version 4.6
Core since version 3.2
Core ARB extension ARB_vertex_array_bgra

When using vertex formats that have a single-precision floating-point basic type (ie: not integer or double) the size​ field can be the enumerator GL_BGRA, in addition to the numbers 1-4.

This is somewhat equivalent to a size of 4, in that 4 components are transferred. However, as the name suggests, this "size" reverses the order of the first 3 components.

This special mode is intended specifically for compatibility with certain Direct3D vertex formats. Because of that, this special size​ can only be used in conjunction with:

  • type​ must be GL_UNSIGNED_BYTE, GL_INT_2_10_10_10_REV​ or GL_UNSIGNED_INT_2_10_10_10_REV​
  • normalized​ must be GL_TRUE

So you cannot pass non-normalized values with this special size​.

Note: This special mode should only be used if you have data that is formatted in D3D's style and you need to use it in your GL application. Don't bother otherwise; you will gain no performance from it.

Here is a visual description:

31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
|  W |              X              |              Y              |               Z            |
-----------------------------------------------------------------------------------------------

Notice how X comes second and Z comes last. X is equivalent to R and Z is equivalent to B, so reading from the most significant bits down, the components come in the reverse of BGRA order: ARGB.
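A hedged example of using this mode, for an attribute (index 3 here is arbitrary) stored as D3D-style packed color data:

glVertexAttribFormat(3, GL_BGRA, GL_UNSIGNED_BYTE, GL_TRUE, 0); //4 normalized components, first three swapped back into RGBA order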

Buffer bindings

The attribute format information describes to OpenGL the meaning of a conceptual C++ type declaration. The purpose of a buffer binding is to provide storage for an array of such conceptual declarations.

OpenGL provides a number of buffer bindings, indexed on the range [0, GL_MAX_VERTEX_ATTRIB_BINDINGS - 1]. Each binding acts as an array, with all of the information needed to convert an index into that array into an address in memory. Attribute formats define what a particular element in the array looks like, but the buffer binding explains how to get from one index in memory to the next.

Given a C++ declaration vec3 arr[20];, the expression arr[3] performs an array indexing operation. This produces the address of a specific vec3 at the given array index. To evaluate this expression, the compiler must perform the following steps:

  1. get the address of the array named by arr.
  2. calculate the size of a single array element. This is the size of the vec3 data type.
  3. multiply the element size by the array index 3.
  4. Add the previous value to the address of the array.

These steps produce the memory address of arr[3] in C++. OpenGL vertex arrays need to do the same kind of work in order for OpenGL to read the data from an array.

The "address" of an array in OpenGL is defined by a Buffer Object and a byte offset into that buffer object. The purpose of the byte offset is to allow multiple arrays to come from the same buffer object, or just to allow the same buffer to store different kinds of information.

A single attribute has enough information to compute the size of an element of its type, much like sizeof(vec3). In OpenGL however, multiple attributes can read from the same buffer binding. As such, a binding cannot compute the element size from the type information of its attribute(s). So instead, this size must be specified directly by the user when establishing the buffer binding.

These pieces of information for a buffer binding are specified by the following function:

void glBindVertexBuffer(GLuint bindingindex​, GLuint buffer​, GLintptr offset​, GLsizei stride​);

The bindingindex​ specifies which binding is being defined by this function. buffer​ is the buffer object to be used by this binding; providing a value of 0 means that no buffer is attached to this binding (it is an error to render while an enabled attribute array uses a binding that has no buffer attached). offset​ is the byte offset into buffer​ where the buffer binding's data starts. stride​ specifies the number of bytes between indices in the array.

As such, the formula for computing the address of any index is as simple as follows:

   addressof(buffer) + offset + (stride * index)

The above address computation works fine when we have a single attribute pulling data from a buffer binding. But consider this C++ array definition:

struct Vertex
{
	vec2   alpha;
	ivec4  beta;
};

Vertex arr[20];

So, how does C++ compute the address for arr[3].beta? It works almost exactly like above, but with one extra step:

  1. get the address of the array named by arr.
  2. calculate the size of a single array element. This is the size of the Vertex data type.
  3. multiply the element size by the array index 3.
  4. Add the previous value to the address of the array.
  5. Add to the previous value the byte offset from the beginning of a Vertex to its beta member.

Each sub-member of the data structure has a particular byte offset, relative to the definition of Vertex. In C++, alpha would have a byte offset of 0, while beta would have an offset of 8 (assuming vec2 contains 2 32-bit floats and no padding).

In OpenGL, the array arr is a buffer binding, and Vertex is its type. So the stride of the array would be the size of Vertex.

The equivalent of the members alpha and beta in OpenGL would be different attributes. To complete the analogy, we need each attribute to have a byte offset which is used in conjunction with the buffer binding to find the address of that particular attribute's data.

This offset is provided by the relativeoffset​ parameter of the glVertexAttrib*Format functions. So the full formula for computing the address of a specific attribute based on the index in an array is as follows:

   addressof(buffer) + offset + (stride * index) + relativeoffset
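As an illustrative, made-up example: with a binding whose offset is 1024 and whose stride is 32, an attribute with a relativeoffset of 12 at array index 5 would be read starting at byte 1024 + (32 * 5) + 12 = 1196 into the buffer object.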

Note that relativeoffset​ has much more strict limits than the buffer binding's offset​. The limit on relativeoffset​ is queried through GL_MAX_VERTEX_ATTRIB_RELATIVE_OFFSET, and is only guaranteed to be at least 2047 bytes. Also, note that relativeoffset​ is a GLuint (32-bits), while offset​ is a GLintptr, which is the size of the pointer (so 64-bits in a 64-bit build). So obviously the relativeoffset​ is a much more limited quantity.

2047 bytes is usually sufficient for dealing with struct-like vertex attributes.

Interleaved attributes

Putting individual attributes into a struct and using an array of that struct, as we did above, is called "interleaving" the attributes. Let us look at the code difference between the case of individual arrays and a single, interleaved array.

Here are the C++ definitions:

//Individual arrays
vec3 positions[VERTEX_COUNT];
vec3 normals[VERTEX_COUNT];
uvec4_byte colors[VERTEX_COUNT];

//Interleaved arrays
struct Vertex
{
  vec3 positions;
  vec3 normals;
  uvec4_byte colors;
};

Vertex vertices[VERTEX_COUNT];

The equivalent OpenGL definition for these would be as follows:

//Individual arrays
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0);
glVertexAttribBinding(0, 0);
glVertexAttribFormat(1, 3, GL_FLOAT, GL_FALSE, 0);
glVertexAttribBinding(1, 1);
glVertexAttribFormat(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, 0);
glVertexAttribBinding(2, 2);

glBindVertexBuffer(0, position_buff, 0, sizeof(positions[0]));
glBindVertexBuffer(1, normal_buff, 0, sizeof(normals[0]));
glBindVertexBuffer(2, color_buff, 0, sizeof(colors[0]));

//Interleaved arrays
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, offsetof(Vertex, positions));
glVertexAttribBinding(0, 0);
glVertexAttribFormat(1, 3, GL_FLOAT, GL_FALSE, offsetof(Vertex, normals));
glVertexAttribBinding(1, 0);
glVertexAttribFormat(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, offsetof(Vertex, colors));
glVertexAttribBinding(2, 0);

glBindVertexBuffer(0, buff, 0, sizeof(Vertex));

The macro offsetof computes the byte offset of the given field in the given struct. This is used as the relativeoffset​ of each attribute.

As a general rule, you should use interleaved attributes wherever possible. Obviously if you need to change certain attributes and not others, then interleaving the ones that change with those that don't is not a good idea. But you should interleave the constant attributes with each other, and the changing attributes with those that change at the same time.

Multibind and separation

Object Multi-bind
Core in version 4.6
Core since version 4.4
Core ARB extension ARB_multi_bind

It is often useful as a developer to maintain the same format while quickly switching between multiple buffers to pull data from. To achieve this, the following function is available:

void glBindVertexBuffers(GLuint first​, GLsizei count​, const GLuint *buffers​, const GLintptr *offsets​, const GLsizei *strides​);

buffers​, offsets​, and strides​ are arrays of count​ buffers, base offsets, and strides. So this function is mostly equivalent to calling glBindVertexBuffer (note the lack of the "s" at the end) on all count​ elements of the buffers​, offsets​, and strides​ arrays. On each call, the buffer binding index is incremented, starting at first​.

There is one difference. If buffers​ is NULL, then the function will completely ignore offsets​ and strides​. It will simply bind 0 to every buffer binding index specified by first​ and count​. This provides an effective means for detaching a number of buffers from the VAO.
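A hedged sketch, reusing the (assumed) buffer names from the interleaving example above, that attaches three buffers to binding indices 0 through 2 in one call and then detaches them again:

const GLuint   buffers[3] = {position_buff, normal_buff, color_buff};
const GLintptr offsets[3] = {0, 0, 0};
const GLsizei  strides[3] = {sizeof(vec3), sizeof(vec3), sizeof(uvec4_byte)};
glBindVertexBuffers(0, 3, buffers, offsets, strides);

//Passing NULL for buffers detaches bindings 0-2; offsets and strides are ignored.
glBindVertexBuffers(0, 3, NULL, NULL, NULL);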

Index buffers

Indexed rendering, as defined above, requires an array of indices; all vertex attributes will use the same index from this index array. The index array is provided by a Buffer Object bound to the GL_ELEMENT_ARRAY_BUFFER binding point.

Note: This binding is unlike other buffer bindings, because it is part of the VAO itself. That is, it isn't really "binding" the buffer to the context; you're attaching the buffer to the VAO. As such, if you call glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buff) and no VAO is bound, then an error results.

When a buffer is bound to GL_ELEMENT_ARRAY_BUFFER, all drawing commands of the form gl*Draw*Elements* will use indexes from that buffer. Indices can be unsigned bytes, unsigned shorts, or unsigned ints. If no index buffer is bound, and you use an "Elements" rendering function, you get an error.
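A hedged sketch of attaching an index buffer and rendering with it (vao, index_buff, and index_count are placeholders, and the buffer is assumed to hold GLushort indices):

glBindVertexArray(vao);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buff); //stored in the VAO, not in context state
glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, 0);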

Instanced arrays

Instanced arrays
Core in version 4.6
Core since version 3.3
ARB extension ARB_instanced_arrays

Instanced rendering is the ability to render multiple objects using the same vertex data. Successful uses of instanced rendering require a mechanism to tell the shader which mesh instance the shader invocation is being executed on. This allows the user to provide things like transformation matrices, so that the individual objects in the instanced rendering command can appear in different places.

The typical way to provide this per-instance information is to use gl_InstanceID, which is a special vertex shader input. This index would be used to fetch data from a Uniform Buffer Object, Buffer Texture, or some other bulk storage mechanism.

An alternative way to get per-instance information to the shader is to change how vertex arrays are indexed. Normally, vertex attribute arrays are indexed based on values from the index buffer or, when doing array rendering, once per vertex from the start point to the end.

Instance arrays are a mechanism to allow a buffer binding to designate that it is being indexed, not by the vertex index, but by the instance index. This means all attributes that use that buffer binding will provide the same input value for each instance, changing values only when a new instance is selected. This is established by this function:

void glVertexBindingDivisor(GLuint bindingindex​, GLuint divisor​);

The divisor​ determines how a particular buffer binding is indexed (and therefore how all attribute arrays that use this binding are indexed). If divisor​ is zero, then the binding is indexed once per vertex. If divisor​ is non-zero, then the current instance is divided by this divisor (dropping any fractions), and the result of that is used to access the binding array.

The "current instance" mentioned above starts at the base instance specified by the instanced rendering command, increasing by 1 for each instance in the draw call. Note that this is not how the gl_InstanceID is computed for Vertex Shaders; that is not affected by the base instance. If no base instance is specified by the rendering command, then the current instance starts with 0.
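For example, a hedged sketch in which buffer binding 1 advances once per instance, so any attribute arrays sourced from it supply per-instance data (vertex_count and instance_count are placeholders):

glVertexBindingDivisor(1, 1); //binding 1: one array element per instance
glDrawArraysInstanced(GL_TRIANGLES, 0, vertex_count, instance_count);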

Instance arrays are generally considered the most efficient way of getting per-instance data to the vertex shader. However, it is also the most resource-constrained method in some respects. OpenGL implementations usually offer a fairly restricted number of vertex attributes (16 or so), and you will need some of these for the actual per-vertex data. So that leaves less room for your per-instance data. While the number of instances can be arbitrarily large (unlike UBO arrays), the amount of per-instance data is much smaller.

However, even 16 attributes leave plenty of space for a quaternion orientation and a position, which is sufficient for a simple transformation. That even leaves one spare float (the position only needs to be 3D), which can be used to provide a fragment shader with an index to access an Array Texture.

Matrix attributes

Vertex Shader inputs in GLSL can be of matrix types. However, attribute binding functions only handle 1D vectors with up to 4 components. OpenGL solves this problem by converting matrix inputs into multiple sub-vectors, with each sub-vector having its own attribute index.

If you directly assign an attribute index to a matrix type, the variable will take up more than one attribute index. The number of attributes a matrix takes up is the number of columns of the matrix: a mat2 matrix will take 2, a mat2x4 matrix will take 2, while a mat4x2 will take 4. The number of components for each attribute is the number of rows of the matrix. So a mat4x2 will take 4 attribute locations and use 2 components in each attribute.

Each bound attribute in the VAO therefore fills in a single column, starting with the left-most column and progressing right. Thus, if you have a 3x3 matrix, and you assign it to attribute index 3, it will take attribute indices 3, 4, and 5. Each of these indices will be 3 elements in size. Attribute 3 is the matrix column 0, attribute 4 is column 1, and 5 is column 2.

OpenGL will allocate attribute locations for matrix inputs contiguously, as above. So if you define a 3x3 matrix input, querying its location returns a single attribute index, but the next two indices are also valid, active attributes.

Double-precision matrices (where available) will take up twice as much space per-component. So a dmat3x3 will take up 6 attribute indices, two indices for each column. Whereas a dmat3x2 will take up only 3 attribute indices, with one index per column.
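As a hedged sketch, feeding a mat4 input assigned to attribute location 4 from buffer binding 1 (both indices are arbitrary here) means specifying each of its four columns as a separate vec4 attribute:

for(GLuint col = 0; col < 4; ++col)
{
  glEnableVertexAttribArray(4 + col);
  glVertexAttribFormat(4 + col, 4, GL_FLOAT, GL_FALSE, col * 4 * sizeof(GLfloat)); //column col of the matrix
  glVertexAttribBinding(4 + col, 1); //all columns share buffer binding 1
}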

Non-array attribute values

A vertex shader can read an attribute that is not currently enabled (via glEnableVertexAttribArray). The value that it gets is defined by special context state, which is *not* part of the VAO.

Because the attribute is defined by context state, it is constant over the course of a single draw call. Each attribute index has a separate value.

Warning: Every time you issue a drawing command with an array enabled, the corresponding context attribute values become undefined. So if you want to, for example, use the non-array attribute index 3 after previously using an array in index 3, you need to reset it to a known value.

The initial value for these is a floating-point (0.0, 0.0, 0.0, 1.0). Just as with array attribute values, non-array values are typed to float, integral, or double-precision (where available).

To change the value, you use a function of this form:

void glVertexAttrib*(GLuint index​, Type values​);

void glVertexAttribN*(GLuint index​, Type values​);

void glVertexAttribP*(GLuint index​, GLenum type​, GLboolean normalized​, Type values​);

void glVertexAttribI*(GLuint index​, Type values​);

void glVertexAttribL*(GLuint index​, Type values​);

The * is the type descriptor, using the traditional OpenGL syntax. The index​ is the attribute index to set. The Type is whatever type is appropriate for the * type specifier. If you set fewer than 4 of the values in the attribute, the rest will be filled in by (0, 0, 0, 1), just as with array attributes. And just as for attributes provided by arrays, double-precision inputs (GL 4.1 or ARB_vertex_attrib_64bit) that have more components than were provided leave the extra components with undefined values.

The N versions of these functions provide values that are normalized, either signed or unsigned as per the function's type. The unadorned versions always assume integer values are not normalized. The P versions are for packed integer types, and they can be normalized or not. All three of these variants provide float attribute data, so they convert integers to floats.

To provide non-array integral values for integral attributes, use the I versions. For double-precision attributes (using the same rules for attribute index counts as double-precision arrays), use L.
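Some hedged examples of setting these values (the attribute indices are arbitrary):

glVertexAttrib4f(3, 1.0f, 0.5f, 0.25f, 1.0f);  //float attribute 3
glVertexAttrib3f(4, 0.0f, 1.0f, 0.0f);         //W defaults to 1.0
glVertexAttrib4Nub(5, 255, 128, 0, 255);       //normalized unsigned bytes, converted to [0, 1] floats
glVertexAttribI4i(6, 10, 20, 30, 40);          //integral attribute 6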

Note that these non-array attribute values are not part of the VAO state; they are context state. Changes to them do not affect the VAO.

Note: It is not recommended that you use these. The performance characteristics of using fixed attribute data are unknown, and it is not a high-priority case that OpenGL driver developers optimize for. They might be faster than uniforms, or they might not.

Drawing

Once the VAO has been properly set up, the arrays of vertex data can be rendered as a Primitive. OpenGL provides innumerable different options for rendering vertex data.
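As a final hedged sketch, a simple array-rendering draw call once a program and VAO are in place (program, vao, and vertex_count are placeholders):

glUseProgram(program);
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, vertex_count);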
