Vertex Specification
Vertex Specification is the process of setting up the necessary objects for rendering with a particular shader program, as well as the process of using those objects to render.
Theory
Submitting vertex data for rendering requires creating a stream of vertices, and then telling OpenGL how to interpret that stream.
Vertex Stream
In order to render at all, you must be using a shader program or program pipeline which includes a Vertex Shader. The VS's user-defined input variables define the list of expected Vertex Attributes for that shader. This set of attributes defines what values the vertex stream must provide to properly render with this shader.
For each attribute in the shader, you must provide an array of data for that attribute. All of these arrays must have the same number of elements. Note that these arrays are a bit more flexible than C arrays, but overall work the same way.
The order of vertices in the stream is very important; this order defines how OpenGL will process and render the Primitives the stream generates. There are two ways of rendering with arrays of vertices. You can generate a stream in the array's order, or you can use a list of indices to define the order. The indices control what order the vertices are received in, and indices can specify the same array element more than once.
Let's say you have the following as your array of 3d position data:
{ {1, 1, 1}, {0, 0, 0}, {0, 0, 1} }
If you simply use this as a stream as is, OpenGL will receive and process these three vertices in order (left-to-right). However, you can also specify a list of indices that will select which vertices to use and in which order.
Let's say we have the following index list:
{2, 1, 0, 2, 1, 2}
If we render with the above attribute array, but selected by the index list, OpenGL will receive the following stream of vertex attribute data:
{ {0, 0, 1}, {0, 0, 0}, {1, 1, 1}, {0, 0, 1}, {0, 0, 0}, {0, 0, 1} }
The index list is a way of reordering the vertex attribute array data without having to actually change it. This is mostly useful as a means of data compression; in most tight meshes, vertices are used multiple times. Being able to store the vertex attributes for that vertex only once is very economical, as a vertex's attribute data is generally around 32 bytes, while indices are usually 2-4 bytes in size.
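For a rough sense of the savings (hypothetical numbers): a mesh with 10,000 unique vertices at 32 bytes each, plus 60,000 2-byte indices, needs roughly 320 KB + 120 KB = 440 KB, whereas storing a separate copy of the attribute data for each of the 60,000 triangle corners would need roughly 1.92 MB.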
A vertex stream can of course have multiple attributes. You can take the above position array and augment it with, for example, a texture coordinate array:
{ {0, 0}, {0.5, 0}, {0, 1} }
The vertex stream you get will be as follows:
{ [{0, 0, 1}, {0, 1}], [{0, 0, 0}, {0.5, 0}], [{1, 1, 1}, {0, 0}], [{0, 0, 1}, {0, 1}], [{0, 0, 0}, {0.5, 0}], [{0, 0, 1}, {0, 1}] }
Primitives
The above stream is not enough to actually draw anything; you must also tell OpenGL how to interpret this stream. And this means telling OpenGL what kind of primitive to interpret the stream as.
There are many ways for OpenGL to interpret a stream of, for example, 12 vertices. It can interpret the vertices as a sequence of triangles, points, or lines. It can even interpret these differently; it can interpret 12 vertices as 4 independent triangles (take every 3 verts as a triangle), as 10 dependent triangles (every group of 3 sequential vertices in the stream is a triangle), and so on.
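As a quick sketch (a hypothetical stream of 12 vertices already set up for rendering), the interpretation is chosen by the primitive type passed at draw time:

glDrawArrays(GL_TRIANGLES, 0, 12);        //4 independent triangles
glDrawArrays(GL_TRIANGLE_STRIP, 0, 12);   //10 triangles sharing edges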
The main article on Primitives has the details.
Vertex Array Object
| Core in version | 4.6 |
| --- | --- |
| Core since version | 3.0 |
| Core ARB extension | ARB_vertex_array_object |
A Vertex Array Object (VAO) is an OpenGL Object that stores all of the state needed to supply vertex data in arrays to a Vertex Rendering command. As with a typical OpenGL object, one must bind the VAO before using it.
However, unlike most OpenGL objects, the functions that manipulate it aren't always named consistently. So it is not always easy to tell which functions manipulate a VAO and which do not.
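A minimal sketch of creating and binding a VAO (the variable name is illustrative):

GLuint vao = 0;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);   //vertex format and buffer binding calls now modify this VAO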
Anatomy
When rendering, OpenGL takes a number of vertex streams defined in arrays and reads the data from them, passing that data as attributes to the current Vertex Shader. To do this, OpenGL needs to understand the way data is stored in these arrays.
In C or C++, this would be easy. You have a variable declaration like this: vec4 attrib[20]. The compiler will use that to figure out how to convert code like attrib[10] into a specific address. So let us dissect the C++ array declaration, vec4 attrib[20].
This declaration has a name, attrib, which identifies that specific array. Not only does this array object have a name, by declaring it we are also giving it storage. The array has 20 elements in it, and each element is of the type vec4. Given this information, the compiler can determine the size of an element (based on the sizeof(vec4)). And because arrays in C/C++ are tightly packed, the compiler knows that attrib[X] is X * sizeof(vec4) bytes from the start of the array.
A vertex array object is like an array variable declaration, though obviously it is a lot more verbose. Also, VAOs contain the specification for potentially multiple arrays of vertex data.
VAO data is split into two distinct kinds of data: a sequence of vertex formats and a sequence of buffer binding points. The vertex format describes what a single element of a particular array's data looks like; it is the equivalent of describing what a type like vec4 means to the compiler.
The buffer binding point tells OpenGL about the array itself: where the storage for the array is, how many bytes are between elements in the array, and so forth.
Structurally speaking, think of a VAO like a set of C++ structs defined as follows:
struct VertexFormat
{
bool arrayEnabled;
BaseType baseType; //An enumeration of Float, Integer, and Double.
GLuint componentCount;
GLenum componentType; //An enumeration of OpenGL types.
bool normalized;
GLuint relativeOffset;
GLuint bufferBindingIndex;
};
struct BufferBinding
{
GLuint bufferObject;
GLintptr baseOffset;
GLintptr stride;
GLuint instanceDivisor;
};
struct VertexArrayObject
{
VertexFormat attribs[GL_MAX_VERTEX_ATTRIBS];
BufferBinding bindings[GL_MAX_VERTEX_ATTRIB_BINDINGS];
GLuint indexBuffer;
};
Attribute arrays
For every Vertex Attribute, a VAO contains a corresponding vertex format, indexed by the attribute index which it describes. Vertex attributes are indexed on the range [0, GL_MAX_VERTEX_ATTRIBS - 1]. All commands that manipulate the format for an attribute take a parameter named index or attribindex that specifies the attribute being modified.
Vertex formats tell OpenGL whether or not that attribute gets its data from an array at all. This is governed by the array enabled/disabled state for the attribute. A newly created VAO sets all of its vertex formats to be disabled. Enabling an attribute for array access uses this function:

void glEnableVertexAttribArray(GLuint index);
There is a similar glDisableVertexAttribArray function to disable an attribute array. Initially, all attributes for a VAO are disabled.
If an attribute's array is disabled, then the rest of the state for the vertex format is irrelevant.
Each enabled attribute array gets its data from a source buffer. Because multiple attributes can read data from the same buffer, this information is specified as an index into the array of buffer binding points. Each vertex format specifies which buffer binding point provides the storage for the array.
By default, the buffer binding index for an attribute is the same as the attribute index. With OpenGL 4.3 or ARB_vertex_attrib_binding, the buffer binding index that provides storage for an attribute is specified with:

void glVertexAttribBinding(GLuint attribindex, GLuint bindingindex);
bindingindex is the index of the buffer binding point that provides the storage for the attribute array specified by attribindex.
While each vertex format can only source data from a single buffer binding point, multiple buffer binding points can be used by different attributes. This allows multiple arrays to live in the same memory, so as to provide the ability to have arrays of larger data structures.
Without OpenGL 4.3 or ARB_vertex_attrib_binding, the buffer binding index is always the same as the attribute index.
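For example (indices chosen arbitrarily), with OpenGL 4.3 or ARB_vertex_attrib_binding available, having attribute 2 read its array data from buffer binding point 0 looks like this:

glEnableVertexAttribArray(2);   //attribute 2 sources its values from an array
glVertexAttribBinding(2, 0);    //...specifically, from whatever buffer binding 0 provides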
Vertex format
The previous format information is equivalent to telling C++ that there is a variable named attrib and that it is an array that has storage. The rest of the information in a vertex format describes the format of each element's data.
The format parameters describe how to interpret a single vertex of information from the array. Vertex Shader input variables can be declared as a single-precision floating-point GLSL type (such as float or vec4), an integral type (such as uint or ivec3), or a double-precision type (such as double or dvec4). Double-precision attributes are only available in OpenGL 4.1 or ARB_vertex_attrib_64bit.
These three basic types correspond to the three functions used to define the rest of the vertex format's data:
void glVertexAttribFormat(GLuint attribindex, GLint size, GLenum type, GLboolean normalized, GLuint relativeoffset);
void glVertexAttribIFormat(GLuint attribindex, GLint size, GLenum type, GLuint relativeoffset);
void glVertexAttribLFormat(GLuint attribindex, GLint size, GLenum type, GLuint relativeoffset);

In addition to setting the parameters listed above, these three functions each set the basic type of the attribute: float, integer, or double, respectively. So if you call glVertexAttribIFormat, then you have set the basic type for attribindex to integer. If the vertex shader input variable at the attribindex attribute location does not match the basic type set by glVertexAttrib*Format, then the value of the attribute in the VS will be undefined when you render.
The meaning of the relativeoffset parameter will be discussed in the section on buffer bindings. While it is a format parameter, it means a great deal to how the attribute gets its data from the buffer it is associated with.
Each individual attribute index represents a single vector, which has 1 to 4 components. The size parameter of the glVertexAttrib*Format functions defines the number of components in the vector provided by the attribute array. It can be any number 1-4.
Note that size does not have to exactly match the size used by the vertex shader. If the vertex shader reads fewer components than the attribute provides, then the extras are simply ignored (they will likely be read and processed from memory, but those values won't be used in the VS). If the vertex shader has more components than the array provides, the extras are given values from the vector (0, 0, 0, 1) for the missing XYZW components.
The latter is not true for double-precision inputs (OpenGL 4.1 or ARB_vertex_attrib_64bit). If the shader attribute has more components than the provided value, the extra components will have undefined values.
Also, VS inputs can use the component layout qualifier, to allow multiple VS input variables to read from the same attribute index. In this case, all input variables that use the same attribute index must also share the same basic type (float, integer, or double).
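A sketch of matching the three format functions to hypothetical vertex shader inputs:

//in vec3 position;   -> float basic type, 3 components
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0);
//in ivec2 cell;      -> integer basic type, 2 components
glVertexAttribIFormat(1, 2, GL_INT, 0);
//in dvec3 offset;    -> double basic type (OpenGL 4.1 or ARB_vertex_attrib_64bit), 3 components
glVertexAttribLFormat(2, 3, GL_DOUBLE, 0);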
Component type
The type of the vector component in the buffer object is given by the type and normalized parameters, where applicable. This type will be converted into the actual type used by the vertex shader. The different glVertexAttrib*Format functions take different types. Here is a list of the types and their meanings for each function:

glVertexAttribFormat:
- Floating-point types. normalized must be GL_FALSE
- GL_HALF_FLOAT: A 16-bit half-precision floating-point value. Equivalent to GLhalf.
- GL_FLOAT: A 32-bit single-precision floating-point value. Equivalent to GLfloat.
- GL_DOUBLE: A 64-bit double-precision floating-point value. Never use this. It's technically legal, but almost certainly a performance trap. Equivalent to GLdouble.
- GL_FIXED: A 16.16-bit fixed-point two's complement value. Equivalent to GLfixed.
- Integer types; these are converted to floats automatically. If normalized is GL_TRUE, then the value will be converted to a float via integer normalization (an unsigned byte value of 255 becomes 1.0f). If normalized is GL_FALSE, it will be converted directly to a float as if by C-style casting (255 becomes 255.0f, regardless of the size of the integer).
- GL_BYTE: A signed 8-bit two's complement value. Equivalent to GLbyte.
- GL_UNSIGNED_BYTE: An unsigned 8-bit value. Equivalent to GLubyte.
- GL_SHORT: A signed 16-bit two's complement value. Equivalent to GLshort.
- GL_UNSIGNED_SHORT: An unsigned 16-bit value. Equivalent to GLushort.
- GL_INT: A signed 32-bit two's complement value. Equivalent to GLint.
- GL_UNSIGNED_INT: An unsigned 32-bit value. Equivalent to GLuint.
- GL_INT_2_10_10_10_REV: A series of four values packed in a 32-bit unsigned integer. Each individual packed value is a two's complement signed integer, but the overall bitfield is unsigned. The bitdepth for the packed fields are 2, 10, 10, and 10, but in reverse order. So the lowest-significant 10-bits are the first component, the next 10 bits are the second component, and so on. If you use this, the size must be 4 (or GL_BGRA, as shown below).
- GL_UNSIGNED_INT_2_10_10_10_REV: A series of four values packed in a 32-bit unsigned integer. The packed values are unsigned. The bitdepth for the packed fields are 2, 10, 10, and 10, but in reverse order. So the lowest-significant 10-bits are the first component, the next 10 bits are the second component, and so on. If you use this, the size must be 4 (or GL_BGRA, as shown below).
- GL_UNSIGNED_INT_10F_11F_11F_REV: Requires OpenGL 4.4 or ARB_vertex_type_10f_11f_11f_rev. This represents a 3-element vector of floats, packed into a 32-bit unsigned integer. The bitdepth for the packed fields is 10, 11, 11, but in reverse order. So the lowest 11 bits are the first component, the next 11 are the second, and the last 10 are the third. These floats are the low bitdepth floats, packed exactly like the image format GL_R11F_G11F_B10F. If you use this, the size must be 3.
glVertexAttribIFormat: This function only feeds attributes declared in GLSL as signed or unsigned integers, or vectors of the same.
- GL_BYTE: A signed 8-bit two's complement value. Equivalent to GLbyte.
- GL_UNSIGNED_BYTE: An unsigned 8-bit value. Equivalent to GLubyte.
- GL_SHORT: A signed 16-bit two's complement value. Equivalent to GLshort.
- GL_UNSIGNED_SHORT: An unsigned 16-bit value. Equivalent to GLushort.
- GL_INT: A signed 32-bit two's complement value. Equivalent to GLint.
- GL_UNSIGNED_INT: An unsigned 32-bit value. Equivalent to GLuint.
glVertexAttribLFormat: This function only feeds attributes declared in GLSL as double or vectors of the same.
- GL_DOUBLE: A 64-bit double-precision float value. Equivalent to GLdouble.
Here is a visual demonstration of the ordering of the 2_10_10_10_REV types:
| Bits | 31-30 | 29-20 | 19-10 | 9-0 |
| --- | --- | --- | --- | --- |
| Component | W | Z | Y | X |
Here is a visual demonstration of the ordering of the 10F_11F_11F_REV type:
| Bits | 31-22 | 21-11 | 10-0 |
| --- | --- | --- | --- |
| Component | Z | Y | X |
D3D compatibility
| Core in version | 4.6 |
| --- | --- |
| Core since version | 3.2 |
| Core ARB extension | ARB_vertex_array_bgra |
When using vertex formats that have a single-precision floating-point basic type (i.e. not integer or double), the size field can be the enumerator GL_BGRA, in addition to the numbers 1-4.
This is somewhat equivalent to a size of 4, in that 4 components are transferred. However, as the name suggests, this "size" reverses the order of the first 3 components.
This special mode is intended specifically for compatibility with certain Direct3D vertex formats. Because of that, this special size can only be used in conjunction with:
- type must be GL_UNSIGNED_BYTE, GL_INT_2_10_10_10_REV or GL_UNSIGNED_INT_2_10_10_10_REV
- normalized must be GL_TRUE
So you cannot pass non-normalized values with this special size.
Here is a visual description:
| Bits | 31-30 | 29-20 | 19-10 | 9-0 |
| --- | --- | --- | --- | --- |
| Component | W | X | Y | Z |
Notice how X comes second and Z comes last. X is equivalent to R and Z is equivalent to B, so the components come in the reverse of BGRA order: ARGB.
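For example (attribute index chosen arbitrarily), an ARGB color stored as four normalized unsigned bytes:

glVertexAttribFormat(2, GL_BGRA, GL_UNSIGNED_BYTE, GL_TRUE, 0);   //4 components, first 3 reversed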
Buffer bindings
The vertex format information describes to OpenGL the meaning of a conceptual C++ type declaration. The purpose of a buffer binding is to provide storage for an array of such conceptual declarations.
OpenGL provides a number of buffer bindings, indexed on the range [0, GL_MAX_VERTEX_ATTRIB_BINDINGS - 1]. Each binding acts as an array, with all of the information needed to convert an index into that array into an address in memory. Vertex formats define what a particular element in the array looks like, but the buffer binding explains how to get from one index in memory to the next.
Given a C++ declaration vec3 arr[20];, the expression arr[3] performs an array indexing operation. This produces the address of a specific vec3 at the given array index. To evaluate this expression, the compiler must perform the following steps:
- Get the address of the array named by arr.
- Calculate the size of a single array element. This is the size of the vec3 data type.
- Multiply the element size by the array index 3.
- Add the previous value to the address of the array.
These steps produce the memory address of arr[3] in C++. OpenGL vertex arrays need to do the same kind of work in order for OpenGL to read the data from an array.
The "address" of an array in OpenGL is defined by Buffer Object and a byte offset into that buffer object. The purpose of the byte offset is to allow multiple arrays to come from the same buffer object, or just to allow the same buffer to store different kinds of information.
A single attribute has enough information to compute the size of an element of its type, much like sizeof(vec3). In OpenGL however, multiple attributes can read from the same buffer binding. As such, a binding cannot compute the element size from the type information of its attribute(s). So instead, this size must be specified directly by the user when establishing the buffer binding.
These pieces of information for a buffer binding are specified by the following function:

void glBindVertexBuffer(GLuint bindingindex, GLuint buffer, GLintptr offset, GLsizei stride);
The bindingindex specifies which binding is being defined by this function. buffer is the buffer object to be used by this binding; providing a value of 0 means that no buffer is used (it is an error to render while an enabled attribute array sources its data from a buffer binding whose buffer object is 0). offset is the byte offset into buffer where the buffer binding's data starts. stride specifies the number of bytes between indices in the array.
As such, the formula for computing the address of any index is as simple as follows:
addressof(buffer) + offset + (stride * index)
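As a concrete sketch (the buffer name and byte values are hypothetical):

glBindVertexBuffer(0, buf, 64, 32);   //binding 0: buffer 'buf', base offset 64, stride 32
//index 3 of this binding therefore starts at byte 64 + (32 * 3) = 160 within 'buf'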
The above address computation works fine when we have a single attribute pulling data from a buffer binding. But consider this C++ array definition:
struct Vertex
{
vec2 alpha;
ivec4 beta;
};
Vertex arr[20];
So, how does C++ compute the address for arr[3].beta? It works almost exactly like above, but with one extra step:
- Get the address of the array named by arr.
- Calculate the size of a single array element. This is the size of the Vertex data type.
- Multiply the element size by the array index 3.
- Add the previous value to the address of the array.
- Add to the previous value the byte offset from the beginning of a Vertex to its beta member.
Each sub-member of the data structure has a particular byte offset, relative to the definition of Vertex. In C++, alpha would have a byte offset of 0, while beta would have an offset of 8 (assuming vec2 contains 2 32-bit floats and no padding).
In OpenGL, the array arr is a buffer binding, and Vertex is its type. So the stride of the array would be the size of Vertex.
The equivalent of the members alpha and beta in OpenGL would be different attributes. To complete the analogy, we need each attribute to have a byte offset which is used in conjunction with the buffer binding to find the address of that particular attribute's data.
This offset is provided by the relativeoffset parameter of the glVertexAttrib*Format functions. So the full formula for computing the address of a specific attribute based on the index in an array is as follows:
addressof(buffer) + offset + (stride * index) + relativeoffset
Note that relativeoffset has much more strict limits than the buffer binding's offset. The limit on relativeoffset is queried through GL_MAX_VERTEX_ATTRIB_RELATIVE_OFFSET, and is only guaranteed to be at least 2047 bytes. Also, note that relativeoffset is a GLuint (32-bits), while offset is a GLintptr, which is the size of the pointer (so 64-bits in a 64-bit build). So obviously the relativeoffset is a much more limited quantity.
2047 bytes is usually sufficient for dealing with struct-like vertex attributes.
Interleaved attributes
Putting individual attributes into a struct and using an array of that struct, as we did above, is called "interleaving" the attributes. Let us look at the code difference between the case of individual arrays and a single, interleaved array.
Here are the C++ definitions:
//Individual arrays
vec3 positions[VERTEX_COUNT];
vec3 normals[VERTEX_COUNT];
uvec4_byte colors[VERTEX_COUNT];
//Interleaved arrays
struct Vertex
{
vec3 positions;
vec3 normals;
uvec4_byte colors;
};
Vertex vertices[VERTEX_COUNT];
The equivalent OpenGL definition for these would be as follows:
//Individual arrays
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0);
glVertexAttribBinding(0, 0);
glVertexAttribFormat(1, 3, GL_FLOAT, GL_FALSE, 0);
glVertexAttribBinding(1, 1);
glVertexAttribFormat(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, 0);
glVertexAttribBinding(2, 2);
glBindVertexBuffer(0, position_buff, 0, sizeof(positions[0]));
glBindVertexBuffer(1, normal_buff, 0, sizeof(normals[0]));
glBindVertexBuffer(2, color_buff, 0, sizeof(colors[0]));
//Interleaved arrays
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, offsetof(Vertex, positions));
glVertexAttribBinding(0, 0);
glVertexAttribFormat(1, 3, GL_FLOAT, GL_FALSE, offsetof(Vertex, normals));
glVertexAttribBinding(1, 0);
glVertexAttribFormat(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, offsetof(Vertex, colors));
glVertexAttribBinding(2, 0);
glBindVertexBuffer(0, buff, 0, sizeof(Vertex));
The macro offsetof computes the byte offset of the given field in the given struct. This is used as the relativeoffset of each attribute.
As a general rule, you should use interleaved attributes wherever possible. Obviously if you need to change certain attributes and not others, then interleaving the ones that change with those that don't is not a good idea. But you should interleave the constant attributes with each other, and the changing attributes with those that change at the same time.
Multibind and separation
| Core in version | 4.6 |
| --- | --- |
| Core since version | 4.4 |
| Core ARB extension | ARB_multi_bind |
It is often useful to keep the same vertex format while quickly switching between multiple buffers to pull data from. To achieve this, the following function is available:

void glBindVertexBuffers(GLuint first, GLsizei count, const GLuint *buffers, const GLintptr *offsets, const GLsizei *strides);
buffers, offsets, and strides are arrays of count buffers, base offsets, and strides. So this function is mostly equivalent to calling glBindVertexBuffer (note the lack of the "s" at the end) on all count elements of the buffers, offsets, and strides arrays. On each call, the buffer binding index is incremented, starting at first.
There is one difference. If buffers is NULL, then the function will completely ignore offsets and strides. It will simply bind 0 to every buffer binding index specified by first and count. This provides an effective means of detaching a number of buffers from the VAO.
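A sketch of the multi-bind call, assuming three hypothetical buffer objects feeding bindings 0 through 2:

GLuint buffers[3] = { position_buf, normal_buf, color_buf };
GLintptr offsets[3] = { 0, 0, 0 };
GLsizei strides[3] = { 12, 12, 4 };   //bytes between elements in each buffer
glBindVertexBuffers(0, 3, buffers, offsets, strides);   //sets bindings 0, 1, and 2 in one call
glBindVertexBuffers(0, 3, NULL, NULL, NULL);            //detaches all three bindings again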
Index buffers
Indexed rendering, as defined above, requires an array of indices; all vertex attributes will use the same index from this index array. The index array is provided by a Buffer Object bound to the GL_ELEMENT_ARRAY_BUFFER binding point.
When a buffer is bound to GL_ELEMENT_ARRAY_BUFFER, all drawing commands of the form gl*Draw*Elements* will use indexes from that buffer. Indices can be unsigned bytes, unsigned shorts, or unsigned ints. If no index buffer is bound, and you use an "Elements" rendering function, you get an error.
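A minimal sketch of indexed rendering, assuming an existing buffer object index_buf filled with GLushort indices:

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buf);   //this binding is stored in the VAO
glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, (void*)0);   //read indices starting at byte 0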
Instanced arrays
| Core in version | 4.6 |
| --- | --- |
| Core since version | 3.3 |
| ARB extension | ARB_instanced_arrays |
Instanced rendering is the ability to render multiple objects using the same vertex data. Successful uses of instanced rendering require a mechanism to tell the shader which mesh instance the shader invocation is being executed on. This allows the user to provide things like transformation matrices, so that the individual objects in the instanced rendering command can appear in different places.
The typical way to provide this per-instance information is to use gl_InstanceID, which is a special vertex shader input. This index would be used to fetch data from a Uniform Buffer Object, Buffer Texture, or some other bulk storage mechanism.
An alternative way to get per-instance information to the shader is to change how vertex arrays are indexed. Normally, vertex attribute arrays are indexed based on values from the index buffer, or when doing array rendering once per vertex from the start point to the end.
Instance arrays are a mechanism that allows a buffer binding to designate that it is indexed, not by the vertex index, but by the instance index. This means all attributes that use that buffer binding will provide the same input value for each instance, changing values only when a new instance is selected. This is established by this function:

void glVertexBindingDivisor(GLuint bindingindex, GLuint divisor);
The divisor determines how a particular buffer binding is indexed (and therefore how all attribute arrays that use this binding are indexed). If divisor is zero, then the binding is indexed once per vertex. If divisor is non-zero, then the current instance is divided by this divisor (dropping any fraction), and the result of that is used to access the binding's array.
The "current instance" mentioned above starts at the base instance specified by the instanced rendering command, increasing by 1 for each instance in the draw call. Note that this is not how the gl_InstanceID is computed for Vertex Shaders; that is not affected by the base instance. If no base instance is specified by the rendering command, then the current instance starts with 0.
Instance arrays are generally considered the most efficient way of getting per-instance data to the vertex shader. However, it is also the most resource-constrained method in some respects. OpenGL implementations usually offer a fairly restricted number of vertex attributes (16 or so), and you will need some of these for the actual per-vertex data. So that leaves less room for your per-instance data. While the number of instances can be arbitrarily large (unlike UBO arrays), the amount of per-instance data is much smaller.
However, even with only 16 attributes, there should be plenty of space for a quaternion orientation and a position, which is sufficient for a simple transformation. That even leaves one float spare (the position only needs to be 3D), which can be used to provide a fragment shader with an index to access an Array Texture.
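As a hedged sketch of that layout using the separate format API (the buffer name instanceBuffer, buffer binding index 1, and attribute indices 4 and 5 are all arbitrary choices):

<source lang="cpp">
// Hypothetical per-instance data: 8 floats per instance, packed as
// {quaternion xyzw, position xyz, array-texture index}.
glBindVertexBuffer(1, instanceBuffer, 0, 8 * sizeof(GLfloat));
glVertexBindingDivisor(1, 1);                        // advance once per instance

glEnableVertexAttribArray(4);                        // quaternion orientation
glVertexAttribFormat(4, 4, GL_FLOAT, GL_FALSE, 0);
glVertexAttribBinding(4, 1);

glEnableVertexAttribArray(5);                        // position + texture index
glVertexAttribFormat(5, 4, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat));
glVertexAttribBinding(5, 1);
</source>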
== Combined format and binding ==
The above sections describe the modern OpenGL APIs. There are older APIs, commonly used by tutorials and the like, for setting up VAOs as well. If you have access to OpenGL 4.3 or ARB_vertex_attrib_binding, then you are advised to use the above functions. The ones defined in this section do essentially the same thing, to the point where the old functions are defined in terms of the new ones. But they're not as easy to use or to think about.
The key to understanding the old mechanism is that it combines the vertex format definition and the buffer binding state into a single call. Or rather, three function calls, which match the glVertexAttrib*Format specifications:
<source lang="cpp">
void glVertexAttribPointer(GLuint index, GLint size, GLenum type, GLboolean normalized, GLsizei stride, const void *offset);
void glVertexAttribIPointer(GLuint index, GLint size, GLenum type, GLsizei stride, const void *offset);
void glVertexAttribLPointer(GLuint index, GLint size, GLenum type, GLsizei stride, const void *offset);
</source>
index is the attribute index and the buffer binding index; values for both vertex formats and buffer binding points are set by these functions. These functions also perform the equivalent of glVertexAttribBinding(index, index), so that the attribute uses its corresponding buffer binding index.
size, type and normalized have the same meaning as in the separate vertex format versions.
The stride sets the stride for the buffer binding at the given index. However, in the combined API, each attribute index has its own buffer binding index. As such, the stride can be computed from the size of the attribute's format data. So if you provide a stride of 0, OpenGL will compute the size of an attribute based on its vertex format and use that as the stride.
The offset is the really confusing part.
See, the API says that it is a pointer, but this is only for historical reasons. In core profile OpenGL, this parameter is never a pointer to CPU-allocated memory. Instead, it is the base offset for the buffer binding. You pass that integer value to OpenGL by casting the integer to a pointer. In C, you would do something like (void*)byte_offset; C++ can do the same or you can use reinterpret_cast<void*>(byte_offset).
These functions also set the buffer object to be used by that binding. This buffer object is provided in an unusual way. Instead of being given as a parameter, you must first bind the buffer you wish to use for that binding to the GL_ARRAY_BUFFER binding point. These functions will take whatever buffer is bound to GL_ARRAY_BUFFER at the time they are called and use it as that binding point's buffer object.
So after you call glVertexAttrib*Pointer, the buffer bound to GL_ARRAY_BUFFER can be changed without affecting the VAO's buffer bindings. The GL_ARRAY_BUFFER context binding point is just a global variable used to pass a parameter to these functions.
glEnableVertexAttribArray works as described above, as does the GL_ELEMENT_ARRAY_BUFFER binding point for index buffers.
=== Combined interleaving ===
Interleaved attribute arrays still work with the combined API, but they are a bit more difficult to specify.
Because each attribute uses a separate buffer binding, to use interleaved arrays you have to have each attribute use the same buffer. The byte offsets you provide must also be incremented by the relative offset you would have used in the separate API.
Consider the following struct:
<source lang="cpp">
struct Vertex
{
    vec3 position;       // 3 floats
    vec3 normal;         // 3 floats
    uvec4_byte color;    // 4 unsigned bytes
};
</source>
The code using the combined API to build an array of Vertex structures is as follows:
<source lang="cpp">
glBindBuffer(GL_ARRAY_BUFFER, buff);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, position)));
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, normal)));
glVertexAttribPointer(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, color)));
</source>
Where baseOffset is the byte offset within buff to the first Vertex in the array. Notice that the byte offset is combined with the relative offset for each attribute. Also note that GL_ARRAY_BUFFER is unchanged for the three calls.
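The sketch above only sets formats, bindings, and offsets; as noted earlier, each attribute also needs its array access enabled:

<source lang="cpp">
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
</source>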
=== Combined instancing ===
Core in version: 4.6. Core since version: 3.3. ARB extension: ARB_instanced_arrays.
Setting the instance divisor with the combined format/binding API works by calling the following function:
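That function appears to be glVertexAttribDivisor:

<source lang="cpp">
void glVertexAttribDivisor(GLuint index, GLuint divisor);
</source>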
index is the buffer binding index to set the divisor for, as if by a call to glVertexBindingDivisor(index, divisor). However, this function also performs the equivalent of glVertexAttribBinding(index, index).
The meaning of the divisor is the same as in the separate API.
== Matrix attributes ==
Vertex Shader inputs in GLSL can be of matrix types. However, attribute binding functions only handle 1D vectors with up to 4 components. OpenGL solves this problem by converting matrix inputs into multiple sub-vectors, with each sub-vector having its own attribute index.
If you directly assign an attribute index to a matrix type, the variable will take up more than one attribute index. The number of attributes a matrix takes up is the number of columns of the matrix: a mat2 matrix will take 2, a mat2x4 matrix will take 2, while a mat4x2 will take 4. The number of components for each attribute is the number of rows of the matrix. So a mat4x2 will take 4 attribute locations and use 2 components in each attribute.
Each bound attribute in the VAO therefore fills in a single column, starting with the left-most column and progressing right. Thus, if you have a 3x3 matrix, and you assign it to attribute index 3, it will take attribute indices 3, 4, and 5. Each of these indices will be 3 elements in size. Attribute 3 is the matrix column 0, attribute 4 is column 1, and 5 is column 2.
OpenGL allocates attribute locations for matrix inputs contiguously, as described above. So if you query the location of a 3x3 matrix input, you will get a single value back, but the next two locations are also valid, active attributes.
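As a minimal sketch of feeding such a 3x3 matrix attribute (assuming the separate format API, buffer binding 0, a tightly packed column-major array of 3x3 float matrices, and attribute index 3 as above):

<source lang="cpp">
// Each column of the mat3 gets its own attribute index (3, 4, 5), 3 floats wide,
// offset by one column's worth of floats from the previous one.
for (int column = 0; column < 3; ++column)
{
    GLuint attribIndex = 3 + column;
    glEnableVertexAttribArray(attribIndex);
    glVertexAttribFormat(attribIndex, 3, GL_FLOAT, GL_FALSE, column * 3 * sizeof(GLfloat));
    glVertexAttribBinding(attribIndex, 0);
}
</source>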
Double-precision matrices (where available) will take up twice as much space per-component. So a dmat3x3 will take up 6 attribute indices, two indices for each column. Whereas a dmat3x2 will take up only 3 attribute indices, with one index per column.
== Non-array attribute values ==
A vertex shader can read an attribute that is not currently arrayed. This means that the VAO's vertex format for that attribute has its array-enabled state disabled (which is the default value).
Non-arrayed attributes get their values from special context state, which is not part of the VAO.
Because the attribute is defined by context state, it is constant over the course of a single draw call. Each attribute index has a separate value.
The initial value for each of these attributes is the floating-point vector (0.0, 0.0, 0.0, 1.0). Just as with array attribute values, non-array values are typed to float, integral, or double-precision (where available).
To change the value, you use a function of this form:
<source lang="cpp">
void glVertexAttrib*(GLuint index, Type values);
void glVertexAttribN*(GLuint index, Type values);
void glVertexAttribP*(GLuint index, GLenum type, GLboolean normalized, Type values);
void glVertexAttribI*(GLuint index, Type values);
void glVertexAttribL*(GLuint index, Type values);
</source>
The * is the type descriptor, using the traditional OpenGL syntax. The index is the attribute index to set. Type is whatever type is appropriate for the * type specifier. If you set fewer than 4 of the values in the attribute, the rest will be filled in with (0, 0, 0, 1), just as with array attributes. And just as for attributes provided by arrays, a double-precision input (GL 4.1 or ARB_vertex_attrib_64bit) that has more components than you provide leaves the extra components with undefined values.
The N versions of these functions provide values that are normalized, either signed or unsigned as per the function's type. The unadorned versions always assume integer values are not normalized. The P versions are for packed integer types, and they can be normalized or not. All three of these variants provide float attribute data, so they convert integers to floats.
To provide non-array integral values for integral attributes, use the I versions. For double-precision attributes (using the same rules for attribute index counts as double-precision arrays), use L.
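A minimal sketch of setting such constant values (the attribute indices 2 and 3 are arbitrary; attribute 2 is assumed to be a float vec4 color and attribute 3 an integer input):

<source lang="cpp">
// Attribute 2 is not sourced from an array, so every vertex in the next draw
// call sees the same constant value: an opaque red color.
glDisableVertexAttribArray(2);
glVertexAttrib4f(2, 1.0f, 0.0f, 0.0f, 1.0f);

// For an integer input, the I variant keeps the values integral.
glVertexAttribI4i(3, 7, 0, 0, 1);
</source>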
Note that these non-array attribute values are not part of the VAO state; they are context state. Changes to them do not affect the VAO.
== Drawing ==
Once the VAO has been properly set up, the arrays of vertex data can be rendered as a Primitive. OpenGL provides innumerable different options for rendering vertex data.
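As a minimal sketch of issuing a draw once everything is set up (vao and vertexCount are placeholders, and a program is assumed to be in use):

<source lang="cpp">
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);   // non-indexed array rendering
</source>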
== See Also ==
* Primitive
* Vertex Rendering
* Conditional Rendering
* Vertex Attribute
* Vertex Specification Best Practices
== Reference ==
* Category:Core API Ref Vertex Arrays: Reference documentation for vertex array setup functions.
* Category:Core API Ref Vertex Specification: Reference documentation for functions that affect certain state used to render.