With Uniform Buffer Objects, Shader Storage Buffer Objects, and various other means, it is possible to communicate state information to a Shader without having to modify any OpenGL context state. You simply set data into the appropriate buffer objects, then make sure that those buffers are bound when it comes time to render with the shader.
There are lines of communication for which this cannot work. Specifically, the Opaque Types in GLSL: samplers, images, and atomic counters. These all derive their data based on objects bound to locations in the OpenGL context at the time of the rendering call.
This has two performance-bearing consequences. The immediate cost is that you must bind textures and images to the context before rendering, and that binding process has an intrinsic cost.
A second consequence is that you must issue more rendering calls. The reason being that you need to switch textures for different objects. If the objects could fetch which textures to use solely from memory (UBOs, SSBOs, etc), then one could render a number of objects with the same multi-draw drawing command. Indeed, with Indirect Rendering, it becomes possible to generate those rendering commands on the GPU. In a perfect world, this would reduce rendering to little more than a compute dispatch operation followed by a single multi-draw-indirect operation.
This can only work if the shader can get its textures from values in memory, rather than from data bound to the context. That is the purpose of bindless texturing: to remove the binding step that stands in the way.
Bindless texturing is a bit complex. This represents a general overview of the process and the concepts that it uses.
The basic idea of bindless textures is to convert a Texture object into an integer (technically, OpenGL Objects are already integer numbers, but never mind that). This integer is called a handle, and it is an unsigned, 64-bit integer.
Handles can be created from a Texture object alone. Such a handle refers to the texture using the sampler parameters within the texture object.
Handles can also be created from a Texture and Sampler Object. A handle created from a texture+sampler represents using that texture with that particular sampler object. Handles created by either one of these two processes are called texture handles.
Lastly, handles can be created from a specific image within the texture. These are called image handles. Image handles are intended to be used for Image Load Store operations, and cannot be used for regular sampler accesses. Similarly, texture handles cannot be used for image load/store access.
An outgrowth of the above is that an object (texture or sampler) can have multiple handles associated with it. A texture could get texture handles with different samplers, or a texture and image handle, or multiple image handles for the multiple images it stores. A single sampler object can be associated with multiple textures.
Once any such object has at least one handle associated with it, the object's state immediately becomes immutable. No functions that modify anything about the texture will work. This includes the Sampler Object used in texture+sampler handles. Note that this only applies to the state; you can update the contents of the storage for such textures, but not their parameters.
Furthermore, there is no way to undo this immutability. Once you get a handle that is associated with that object, its state is permanently frozen.
Once an appropriate handle is created, the handle cannot be used by any program until it is made resident. Texture and image handles use different residency functions, but the concept is the same.
Handles can remain resident for as long as you wish. Residency is removed by a separate function call.
The foundation of bindless texture usage is the ability of GLSL to convert a 64-bit unsigned integer texture handle value into a sampler or image variable. Thus, these types are no longer truly Opaque Types, though the operations you can perform on them are still quite limited (no arithmetic for example).
Non-Interface Block uniform variables of sampler and image types can be populated from handles rather than the index of a binding point. Sampler and image types can also be passed as inputs/outputs between Shader Stages (using the flat interpolation qualifier where needed). They can even be used as Vertex Attributes, where they are treated as 64-bit integers on the OpenGL side.
Sampler/image types cannot be used in Interface Blocks, but you can pass a 64-bit unsigned integer handle and convert it into a sampler/image type.
Bindless textures are not safe. The API is given fewer opportunities to ensure sane behavior; it is up to the programmer to maintain integrity. Furthermore, the consequences for mistakes are much more severe. Usually with OpenGL, if you do something wrong, you get either an OpenGL Error or undefined behavior. With bindless textures, if you do something wrong, the GPU can crash or your program can terminate. It might even bring down the whole OS.
Things to keep in mind:
- When converting an integer handle into a sampler/image variable, the type of sampler/image must match with the handle.
- The integer values used with handles must be actual handles returned by the handle APIs, and those handles must be resident when they are being used. So you can't perform "pointer arithmetic" or anything of the like on them; treat them as opaque values that happen to be 64-bit unsigned integers.
Texture handles are created using glGetTextureHandleARB or glGetTextureSamplerHandleARB.
GLuint64 glGetTextureHandleARB(GLuint texture);
GLuint64 glGetTextureSamplerHandleARB(GLuint texture, GLuint sampler);
These functions accept the name of a texture object and optionally a sampler object to produce a texture handle. Multiple invocations with the same texture (or texture/sampler pair) will produce the same handle.
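For instance (a sketch only; this assumes a live GL context with ARB_bindless_texture and pre-existing texture/sampler objects named `tex` and `samp`):

```c
GLuint64 h1 = glGetTextureHandleARB(tex);
GLuint64 h2 = glGetTextureHandleARB(tex);              /* h2 == h1: same texture, same handle */
GLuint64 h3 = glGetTextureSamplerHandleARB(tex, samp); /* a distinct handle for the texture+sampler pair */
```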
Once a handle is created for a texture/sampler, none of its state can be changed. For Buffer Textures, this includes the buffer object that is currently attached to it (which also means that you cannot create a handle for a buffer texture that does not have a buffer attached). Not only that, in such cases the buffer object itself becomes immutable; it cannot be reallocated with glBufferData. Though just as with textures, its storage can still be mapped and have its data modified by other functions as normal.
The Border Color for bindless textures is quite weird. The applicable border color for the handle (the one in the sampling parameters of the texture for glGetTextureHandleARB, or the one in the sampler for glGetTextureSamplerHandleARB) must be one of the following sets of values:
- For floating-point (including normalized integer) or depth formats: (0.0, 0.0, 0.0, 0.0), (0.0, 0.0, 0.0, 1.0), (1.0, 1.0, 1.0, 0.0), or (1.0, 1.0, 1.0, 1.0)
- For signed/unsigned integer color or stencil formats: (0, 0, 0, 0), (0, 0, 0, 1), (1, 1, 1, 0), or (1, 1, 1, 1)
Any other border color provokes an error.
Image handles are created using glGetImageHandleARB.
GLuint64 glGetImageHandleARB(GLuint texture, GLint level, GLboolean layered, GLint layer, GLenum format);
These parameters have the same meaning as in glBindImageTexture.
A unique handle is returned for each combination of parameter values; multiple calls with the same values will produce the same handle.
Handles cannot be explicitly released. Handles are automatically reclaimed when the relevant underlying objects are deleted.
Before a handle can be used in a bindless operation, the data associated with it must be made resident. To affect the residency of a handle, use the following functions:
void glMakeTextureHandleResidentARB(GLuint64 handle);
void glMakeImageHandleResidentARB(GLuint64 handle, GLenum access);
void glMakeTextureHandleNonResidentARB(GLuint64 handle);
void glMakeImageHandleNonResidentARB(GLuint64 handle);
For glMakeImageHandleResidentARB, the access specifies whether the shader will read from, write to, or do both to the image behind the handle. The enumerators are GL_READ_ONLY, GL_WRITE_ONLY, or GL_READ_WRITE. If the shader violates this restriction (reading from a write-only image or vice-versa), undefined behavior results, including the possibility of crashing. Or worse.
Note that it is the handle that is being made resident, not the texture. As such, if you have multiple handles that refer to the same texture's storage, making one of them resident is not sufficient to use the others. Residency affects more than just the image data.
Conceptually, image data being resident means that it lives directly within GPU-accessible memory. The amount of storage available for resident images/textures may be less than the total storage available for textures. As such, you should minimize the time a texture spends being resident. That said, do not thrash residency by making textures resident and non-resident every frame. But if you are finished using a texture for some time, make it non-resident.
GLSL handle usage
In all cases when providing a handle to a texture, when the shader executes, that handle must be resident. Also, the provided handle must match the type of the texture/image; so a 2D texture handle must be used with a sampler2D variable.
Direct handle usage
This functionality allows sampler and image types to be directly used in more shader interfaces. While they cannot be declared as part of Interface Blocks, they can be declared as shader stage inputs and outputs. They cannot be declared as Fragment Shader outputs (for obvious reasons).
When used as Vertex Attributes, handles are fed by glVertexAttribLPointer and its ilk, using the GL_UNSIGNED_INT64_ARB data type. When used as other input/output types, they have the same limitation as any integer type: if they are interpolated, they must use the flat Interpolation qualifier.
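A sketch of a vertex/fragment pair passing a sampler through the pipeline (variable names are hypothetical; the attribute is fed on the GL side with glVertexAttribLPointer and GL_UNSIGNED_INT64_ARB):

```glsl
// --- vertex shader ---
#version 450
#extension GL_ARB_bindless_texture : require

layout(location = 0) in vec4 position;
layout(location = 1) in vec2 texcoord;
layout(location = 2) in sampler2D material; // 64-bit handle attribute

out vec2 uv;
flat out sampler2D vs_material; // integer-like, so it must be flat

void main() {
    uv = texcoord;
    vs_material = material;
    gl_Position = position;
}

// --- fragment shader ---
#version 450
#extension GL_ARB_bindless_texture : require

in vec2 uv;
flat in sampler2D vs_material;
out vec4 color;

void main() {
    color = texture(vs_material, uv);
}
```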
Default block uniforms (variables outside of Interface Blocks) of sampler or image types are assumed to get their texture or image data from context state as normal. To allow them to use a handle, they must be declared with a special layout qualifier:
layout(bindless_sampler) uniform sampler2D bindless;
layout(bindless_image) uniform image2D bindless2;
The qualifier type (sampler or image) must match the variable's type. If you want all such uniforms to use bindless handles, you may globally declare it so as follows:
layout(bindless_sampler) uniform;
layout(bindless_image) uniform;
The qualifiers bound_sampler and bound_image exist to state explicitly that the sampler/image gets its data from context state; bound behavior is the default.
Samplers and images defined as bindless can still use context state if you wish. Which they use depends on how you set the uniform from the OpenGL side. If you use glUniform1i as normal, then it will use context state.
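For example, a fragment shader whose texture comes entirely from a handle might look like this sketch (the uniform and varying names are hypothetical):

```glsl
#version 450
#extension GL_ARB_bindless_texture : require

layout(bindless_sampler) uniform sampler2D diffuse; // set via glUniformHandleui64ARB (or glUniform1i for context state)

in vec2 uv;
out vec4 color;

void main() {
    color = texture(diffuse, uv); // samples through the resident handle
}
```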
To pass a handle to such a uniform, you must use one of these functions:
void glUniformHandleui64ARB(GLint location, GLuint64 value);
void glUniformHandleui64vARB(GLint location, GLsizei count, const GLuint64 *value);
void glProgramUniformHandleui64ARB(GLuint program, GLint location, GLuint64 value);
void glProgramUniformHandleui64vARB(GLuint program, GLint location, GLsizei count, const GLuint64 *values);
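Putting the pieces together, host-side usage might look like the following sketch (requires a live GL context; `tex`, `prog`, and the uniform name "diffuse" are hypothetical):

```c
GLuint64 handle = glGetTextureHandleARB(tex);     /* also freezes tex's state */
glMakeTextureHandleResidentARB(handle);           /* must happen before any shader use */

GLint loc = glGetUniformLocation(prog, "diffuse");
glProgramUniformHandleui64ARB(prog, loc, handle); /* hand the handle to the shader */

/* ... issue rendering commands ... */

glMakeTextureHandleNonResidentARB(handle);        /* once finished with it for a while */
```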
Handles as integers
If you have access to NV_vertex_attrib_integer_64bit, then you can use the uint64_t type in the shader. This can be used as integer values for inputs and outputs (including being fed by vertex attributes).
They can also be used in Interface Blocks. They have a size of 8 bytes, and in std140/430 layout, they have an alignment of 8 bytes.
uint64_t values can be converted to any sampler or image type using constructors: sampler2DArray(some_uint64). They can also be converted back to 64-bit integers.
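With that 64-bit integer support, a handle can travel through an interface block and be converted at the point of use; a sketch with hypothetical block and member names:

```glsl
layout(std140, binding = 0) uniform Material {
    uint64_t diffuse_handle; // 8 bytes, 8-byte aligned in std140
};

in vec2 uv;
out vec4 color;

void main() {
    sampler2D diffuse = sampler2D(diffuse_handle); // constructor syntax converts handle -> sampler
    color = texture(diffuse, uv);
}
```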
If NV_vertex_attrib_integer_64bit is not available, then you can achieve much the same effect using the uvec2 type. The first component is the least-significant 4 bytes of the integer, with the second component being the most-significant 4 bytes of the integer. So if you store a 64-bit integer on a little-endian machine, you can read it directly as a uvec2.
You cannot pass 64-bit vertex attributes to a uvec2, but you can take the same data and pass it as a 2-element unsigned integer vector. Similarly, you can use uvec2 within an Interface Block to pass 64-bit integers.
Note however that with std140 layout, an array of uvec2 will have the same array stride and alignment as a vec4: 16-bytes. To avoid this, you can declare it as an array of uvec4 of half the size (rounded up), then pick out the two elements you need.
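A sketch of that workaround (names hypothetical): pack pairs of uvec2 handles into uvec4 elements, then select a half by index.

```glsl
layout(std140, binding = 0) uniform Handles {
    uvec4 packed_handles[64]; // holds 128 uvec2 handles with no std140 padding waste
};

uvec2 handle_at(uint i) {
    uvec4 pair = packed_handles[i / 2u];
    return (i % 2u == 0u) ? pair.xy : pair.zw; // even index -> low half, odd -> high half
}
```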
As with uint64_t, constructors can convert between uvec2 and sampler/image types.
You can declare sampler/image variables as local variables, and you can initialize them from other sampler/image variables or by converting from integer values. You can leave a sampler/image variable uninitialized until later. Sampler/image variables can be passed into functions (but not used as return values).
Otherwise, there are no arithmetic operations that can be performed on a sampler/image.
Once you have a sampler/image variable, you can pass it to a texture/image function and use it as normal.
This is not a core feature of any OpenGL version; at present, it exists only as an extension. That is not for any of the usual reasons that keep a widely implemented extension out of core; it is instead a matter of practicality.
If OpenGL 4.4 had required bindless texture support, then only hardware that could support bindless textures could provide a conforming 4.4 implementation, and not all 4.3 hardware can handle bindless textures. So the OpenGL ARB decided to make the functionality optional by leaving it as an extension.
It is implemented across much of the OpenGL 4.x hardware spectrum. Intel is mostly absent, but both AMD and NVIDIA have a lot of hardware that can handle it.