Copyright © 2014-2017 The Khronos Group Inc. All Rights Reserved.

This specification is protected by copyright laws and contains material proprietary to the Khronos Group, Inc. It or any components may not be reproduced, republished, distributed, transmitted, displayed, broadcast or otherwise exploited in any manner without the express prior written permission of Khronos Group. You may use this specification for implementing the functionality therein, without altering or removing any trademark, copyright or other notice from the specification, but the receipt or possession of this specification does not convey any rights to reproduce, disclose, or distribute its contents, or to manufacture, use, or sell anything that it may describe, in whole or in part.

Khronos Group grants express permission to any current Promoter, Contributor or Adopter member of Khronos to copy and redistribute UNMODIFIED versions of this specification in any fashion, provided that NO CHARGE is made for the specification and the latest available update of the specification for any version of the API is used whenever possible. Such distributed specification may be reformatted AS LONG AS the contents of the specification are not changed in any way. The specification may be incorporated into a product that is sold as long as such product includes significant independent work developed by the seller. A link to the current version of this specification on the Khronos Group web-site should be included whenever possible with specification distributions.

This specification has been created under the Khronos Intellectual Property Rights Policy, which is Attachment A of the Khronos Group Membership Agreement available at www.khronos.org/files/member_agreement.pdf. This specification contains substantially unmodified functionality from, and is a successor to, Khronos specifications including OpenGL, OpenGL ES and OpenCL.

Some parts of this Specification are purely informative and do not define requirements necessary for compliance and so are outside the Scope of this Specification. These parts of the Specification are marked by the “Note” icon or designated “Informative”.

Where this Specification uses terms, defined in the Glossary or otherwise, that refer to enabling technologies that are not expressly set forth as being required for compliance, those enabling technologies are outside the Scope of this Specification.

Where this Specification uses the terms “may”, or “optional”, such features or behaviors do not define requirements necessary for compliance and so are outside the Scope of this Specification.

Where this Specification uses the terms “not required”, such features or behaviors may be omitted from certain implementations, but when they are included, they define requirements necessary for compliance and so are INCLUDED in the Scope of this Specification.

Where this Specification includes normative references to external documents, the specifically identified sections and functionality of those external documents are in Scope. Requirements defined by external documents not created by Khronos may contain contributions from non-members of Khronos not covered by the Khronos Intellectual Property Rights Policy.

Khronos Group makes no, and expressly disclaims any, representations or warranties, express or implied, regarding this specification, including, without limitation, any implied warranties of merchantability or fitness for a particular purpose or non-infringement of any intellectual property. Khronos Group makes no, and expressly disclaims any, warranties, express or implied, regarding the correctness, accuracy, completeness, timeliness, and reliability of the specification. Under no circumstances will the Khronos Group, or any of its Promoters, Contributors or Members or their respective partners, officers, directors, employees, agents or representatives be liable for any damages, whether direct, indirect, special or consequential damages for lost revenues, lost profits, or otherwise, arising from or in connection with these materials.

Khronos is a trademark, and Vulkan is a registered trademark of The Khronos Group Inc. OpenCL is a trademark of Apple Inc. and OpenGL is a registered trademark of Silicon Graphics International, both used under license by Khronos.

1. Introduction

This chapter is Informative except for the sections on Terminology and Normative References.

This document, referred to as the “Vulkan Specification” or just the “Specification” hereafter, describes the Vulkan graphics system: what it is, how it acts, and what is required to implement it. We assume that the reader has at least a rudimentary understanding of computer graphics. This means familiarity with the essentials of computer graphics algorithms and terminology as well as with modern GPUs (Graphics Processing Units).

The canonical version of the Specification is available in the official Vulkan Registry, located at URL

1.1. What is the Vulkan Graphics System?

Vulkan is an API (Application Programming Interface) for graphics and compute hardware. The API consists of many commands that allow a programmer to specify shader programs, compute kernels, objects, and operations involved in producing high-quality graphical images, specifically color images of three-dimensional objects.

1.1.1. The Programmer’s View of Vulkan

To the programmer, Vulkan is a set of commands that allow the specification of shader programs or shaders, kernels, data used by kernels or shaders, and state controlling aspects of Vulkan outside of shader execution. Typically, the data represents geometry in two or three dimensions and texture images, while the shaders and kernels control the processing of the data, rasterization of the geometry, and the lighting and shading of fragments generated by rasterization, resulting in the rendering of geometry into the framebuffer.

A typical Vulkan program begins with platform-specific calls to open a window or otherwise prepare a display device onto which the program will draw. Then, calls are made to open queues to which command buffers are submitted. The command buffers contain lists of commands which will be executed by the underlying hardware. The application can also allocate device memory, associate resources with memory and refer to these resources from within command buffers. Drawing commands cause application-defined shader programs to be invoked, which can then consume the data in the resources and use them to produce graphical images. To display the resulting images, further platform-specific commands are made to transfer the resulting image to a display device or window.
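
As an informal illustration of this flow, the following C sketch shows the rough order of the calls described above. It is not a complete or valid program: error checking, window-system integration, and most required creation parameters are omitted.

#include <vulkan/vulkan.h>

void sketch_of_vulkan_setup(void)
{
    /* Create a Vulkan instance and select a physical device. */
    VkInstanceCreateInfo instanceInfo = { VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
    VkInstance instance;
    vkCreateInstance(&instanceInfo, NULL, &instance);

    uint32_t gpuCount = 1;
    VkPhysicalDevice physicalDevice;
    vkEnumeratePhysicalDevices(instance, &gpuCount, &physicalDevice);

    /* Create a logical device and retrieve a queue from it
       (queue creation parameters omitted for brevity). */
    VkDeviceCreateInfo deviceInfo = { VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO };
    VkDevice device;
    vkCreateDevice(physicalDevice, &deviceInfo, NULL, &device);

    VkQueue queue;
    vkGetDeviceQueue(device, 0, 0, &queue);

    /* Allocate device memory, create resources, record command buffers,
       and submit them to the queue. Presenting the resulting images uses
       platform-specific WSI extensions. */
}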

1.1.2. The Implementor’s View of Vulkan

To the implementor, Vulkan is a set of commands that allow the construction and submission of command buffers to a device. Modern devices accelerate virtually all Vulkan operations, storing data and framebuffer images in high-speed memory and executing shaders in dedicated GPU processing resources.

The implementor’s task is to provide a software library on the host which implements the Vulkan API, while mapping the work for each Vulkan command to the graphics hardware as appropriate for the capabilities of the device.

1.1.3. Our View of Vulkan

We view Vulkan as a pipeline having some programmable stages and some state-driven fixed-function stages that are invoked by a set of specific drawing operations. We expect this model to result in a specification that satisfies the needs of both programmers and implementors. It does not, however, necessarily provide a model for implementation. An implementation must produce results conforming to those produced by the specified methods, but may carry out particular computations in ways that are more efficient than the one specified.

1.2. Filing Bug Reports

Issues with and bug reports on the Vulkan Specification and the API Registry can be filed in the Khronos Vulkan GitHub repository, located at URL

Please tag issues with appropriate labels, such as “Specification”, “Ref Pages” or “Registry”, to help us triage and assign them appropriately. Unfortunately, GitHub does not currently let users who do not have write access to the repository set labels on issues. In the meantime, such users can include the label in brackets in the issue’s title line, e.g. “[Specification]”.

1.3. Terminology

The key words must, required, should, recommend, may, and optional in this document are to be interpreted as described in RFC 2119:

must

When used alone, this word, or the term required, means that the definition is an absolute requirement of the specification. When followed by not (“must not”), the phrase means that the definition is an absolute prohibition of the specification.

should

When used alone, this word means that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course. When followed by not (“should not”), the phrase means that there may exist valid reasons in particular circumstances when the particular behavior is acceptable or even useful, but the full implications should be understood and the case carefully weighed before implementing any behavior described with this label. In cases where grammatically appropriate, the terms recommend or recommendation may be used instead of should.

may

This word, or the adjective optional, means that an item is truly optional. One vendor may choose to include the item because a particular marketplace requires it or because the vendor feels that it enhances the product while another vendor may omit the same item. An implementation which does not include a particular option must be prepared to interoperate with another implementation which does include the option, though perhaps with reduced functionality. In the same vein an implementation which does include a particular option must be prepared to interoperate with another implementation which does not include the option (except, of course, for the feature the option provides).

The additional terms can and cannot are to be interpreted as follows:

can

This word means that the particular behavior described is a valid choice for an application, and is never used to refer to implementation behavior.

cannot

This word means that the particular behavior described is not achievable by an application. For example, an entry point does not exist, or shader code is not capable of expressing an operation.

Note

There is an important distinction between cannot and must not, as used in this Specification. Cannot means something the application literally is unable to express or accomplish through the API, while must not means something that the application is capable of expressing through the API, but that the consequences of doing so are undefined and potentially unrecoverable for the implementation.

1.4. Normative References

Normative references are references to external documents or resources with which implementers of Vulkan must comply.

IEEE Standard for Floating-Point Arithmetic, IEEE Std 754-2008, http://dx.doi.org/10.1109/IEEESTD.2008.4610935, August, 2008.

A. Garrard, Khronos Data Format Specification, version 1.1, https://www.khronos.org/registry/dataformat/specs/1.1/dataformat.1.1.html, June, 2016.

J. Kessenich, SPIR-V Extended Instructions for GLSL, Version 1.00, https://www.khronos.org/registry/spir-v/, February 10, 2016.

J. Kessenich and B. Ouriel, The Khronos SPIR-V Specification, Version 1.00, https://www.khronos.org/registry/spir-v/, February 10, 2016.

J. Leech and T. Hector, Vulkan Documentation and Extensions: Procedures and Conventions, https://www.khronos.org/registry/vulkan/, July 11, 2016.

2. Fundamentals

This chapter introduces fundamental concepts including the Vulkan architecture and execution model, API syntax, queues, pipeline configurations, numeric representation, state and state queries, and the different types of objects and shaders. It provides a framework for interpreting more specific descriptions of commands and behavior in the remainder of the Specification.

2.1. Architecture Model

Vulkan is designed for, and the API is written for, CPU, GPU, and other hardware accelerator architectures with the following properties:

  • Runtime support for 8-, 16-, 32- and 64-bit signed and unsigned two’s-complement integers, all addressable at the granularity of their size in bytes.

  • Runtime support for 32- and 64-bit floating-point types satisfying the range and precision constraints in the Floating Point Computation section.

  • The representation and endianness of these types must be identical for the host and the physical devices.

Note

Since a variety of data types and structures in Vulkan may be mapped back and forth between host and physical device memory, host and device architectures must both be able to access such data efficiently for applications to be portable and performant.

Where the Specification leaves choices open that would affect Application Binary Interface compatibility on a given platform supporting Vulkan, those choices are usually made to be compliant with the preferred ABI defined by the platform vendor. Some choices, such as function calling conventions, may be made in platform-specific portions of the vk_platform.h header file.

Note

For example, the Android ABI is defined by Google, and the Linux ABI is defined by a combination of gcc defaults, distribution vendor choices, and external standards such as the Linux Standard Base.

2.2. Execution Model

This section outlines the execution model of a Vulkan system.

Vulkan exposes one or more devices, each of which exposes one or more queues which may process work asynchronously to one another. The set of queues supported by a device is partitioned into families. Each family supports one or more types of functionality and may contain multiple queues with similar characteristics. Queues within a single family are considered compatible with one another, and work produced for a family of queues can be executed on any queue within that family. This specification defines four types of functionality that queues may support: graphics, compute, transfer, and sparse memory management.
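
For example, an application can enumerate the queue families of a physical device and select one that supports the functionality it needs. The sketch below, which assumes physicalDevice is a valid VkPhysicalDevice handle, finds a family that supports graphics operations; any queue in that family can then be used for graphics work.

#include <stdlib.h>
#include <vulkan/vulkan.h>

uint32_t find_graphics_family(VkPhysicalDevice physicalDevice)
{
    uint32_t familyCount = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(physicalDevice, &familyCount, NULL);

    VkQueueFamilyProperties *families =
        malloc(familyCount * sizeof(VkQueueFamilyProperties));
    vkGetPhysicalDeviceQueueFamilyProperties(physicalDevice, &familyCount, families);

    uint32_t graphicsFamily = familyCount;      /* sentinel: not found */
    for (uint32_t i = 0; i < familyCount; ++i) {
        if (families[i].queueFlags & VK_QUEUE_GRAPHICS_BIT) {
            graphicsFamily = i;
            break;
        }
    }
    free(families);
    return graphicsFamily;
}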

Note

A single device may report multiple similar queue families rather than, or as well as, reporting multiple members of one or more of those families. This indicates that while members of those families have similar capabilities, they are not directly compatible with one another.

Device memory is explicitly managed by the application. Each device may advertise one or more heaps, representing different areas of memory. Memory heaps are either device local or host local, but are always visible to the device. Further detail about memory heaps is exposed via memory types available on that heap. Examples of memory areas that may be available on an implementation include:

  • device local is memory that is physically connected to the device.

  • device local, host visible is device local memory that is visible to the host.

  • host local, host visible is memory that is local to the host and visible to the device and host.

On other architectures, there may only be a single heap that can be used for any purpose.
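
For example, an application can inspect the advertised heaps and memory types to decide where to place resources. The sketch below, assuming physicalDevice is a valid handle, looks for a host-visible memory type to use for uploads; a real application additionally intersects this choice with the memoryTypeBits reported in the VkMemoryRequirements of the resource being bound.

#include <vulkan/vulkan.h>

uint32_t find_host_visible_type(VkPhysicalDevice physicalDevice)
{
    VkPhysicalDeviceMemoryProperties memProps;
    vkGetPhysicalDeviceMemoryProperties(physicalDevice, &memProps);

    for (uint32_t i = 0; i < memProps.memoryTypeCount; ++i) {
        VkMemoryPropertyFlags flags = memProps.memoryTypes[i].propertyFlags;
        if (flags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) {
            /* memProps.memoryTypes[i].heapIndex identifies the heap this
               type allocates from; it may be device local or host local. */
            return i;
        }
    }
    return memProps.memoryTypeCount;            /* sentinel: not found */
}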

A Vulkan application controls a set of devices through the submission of command buffers which have recorded device commands issued via Vulkan library calls. The content of command buffers is specific to the underlying hardware and is opaque to the application. Once constructed, a command buffer can be submitted once or many times to a queue for execution. Multiple command buffers can be built in parallel by employing multiple threads within the application.

Command buffers submitted to different queues may execute in parallel or even out of order with respect to one another. Command buffers submitted to a single queue respect submission order, as described further in the synchronization chapter. Command buffer execution by the device is also asynchronous to host execution. Once a command buffer is submitted to a queue, control may return to the application immediately. Synchronization between the device and host, and between different queues, is the responsibility of the application.

2.2.1. Queue Operation

Vulkan queues provide an interface to the execution engines of a device. Commands for these execution engines are recorded into command buffers ahead of execution time. These command buffers are then submitted to queues with a queue submission command for execution in a number of batches. Once submitted to a queue, these commands will begin and complete execution without further application intervention, though the order of this execution is dependent on a number of implicit and explicit ordering constraints.

Work is submitted to queues using queue submission commands that typically take the form vkQueue* (e.g. vkQueueSubmit, vkQueueBindSparse), and optionally take a list of semaphores upon which to wait before work begins and a list of semaphores to signal once work has completed. The work itself, as well as signaling and waiting on the semaphores are all queue operations.
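
For instance, a single-batch submission that waits on one semaphore, executes one command buffer, and then signals a second semaphore and a fence could be written as sketched below; the handles passed in are assumed to have been created elsewhere.

#include <vulkan/vulkan.h>

void submit_one_batch(VkQueue queue,
                      VkCommandBuffer commandBuffer,
                      VkSemaphore waitSemaphore,
                      VkSemaphore signalSemaphore,
                      VkFence fence)
{
    VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;

    VkSubmitInfo submitInfo = { VK_STRUCTURE_TYPE_SUBMIT_INFO };
    submitInfo.waitSemaphoreCount   = 1;
    submitInfo.pWaitSemaphores      = &waitSemaphore;
    submitInfo.pWaitDstStageMask    = &waitStage;
    submitInfo.commandBufferCount   = 1;
    submitInfo.pCommandBuffers      = &commandBuffer;
    submitInfo.signalSemaphoreCount = 1;
    submitInfo.pSignalSemaphores    = &signalSemaphore;

    vkQueueSubmit(queue, 1, &submitInfo, fence);
}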

Queue operations on different queues have no implicit ordering constraints, and may execute in any order. Explicit ordering constraints between queues can be expressed with semaphores and fences.

Command buffer submissions to a single queue respect submission order and other implicit ordering guarantees, but otherwise may overlap or execute out of order. Other types of batches and queue submissions against a single queue (e.g. sparse memory binding) have no implicit ordering constraints with any other queue submission or batch. Additional explicit ordering constraints between queue submissions and individual batches can be expressed with semaphores and fences.

Before a fence or semaphore is signaled, it is guaranteed that any previously submitted queue operations have completed execution, and that memory writes from those queue operations are available to future queue operations. Waiting on a signaled semaphore or fence guarantees that previous writes that are available are also visible to subsequent commands.

Command buffer boundaries, both between primary command buffers of the same or different batches or submissions as well as between primary and secondary command buffers, do not introduce any additional ordering constraints. In other words, submitting the set of command buffers (which can include executing secondary command buffers) between any semaphore or fence operations executes the recorded commands as if they had all been recorded into a single primary command buffer, except that the current state is reset on each boundary. Explicit ordering constraints can be expressed with explicit synchronization primitives.

There are a few implicit ordering guarantees between commands within a command buffer, but only covering a subset of execution. Additional explicit ordering constraints can be expressed with the various explicit synchronization primitives.

Note

Implementations have significant freedom to overlap execution of work submitted to a queue, and this is common due to deep pipelining and parallelism in Vulkan devices.

Commands recorded in command buffers either perform actions (draw, dispatch, clear, copy, query/timestamp operations, begin/end subpass operations), set state (bind pipelines, descriptor sets, and buffers, set dynamic state, push constants, set render pass/subpass state), or perform synchronization (set/wait events, pipeline barrier, render pass/subpass dependencies). Some commands perform more than one of these tasks. State setting commands update the current state of the command buffer. Some commands that perform actions (e.g. draw/dispatch) do so based on the current state set cumulatively since the start of the command buffer. The work involved in performing action commands is often allowed to overlap or to be reordered, but doing so must not alter the state to be used by each action command. In general, action commands are those commands that alter framebuffer attachments, read/write buffer or image memory, or write to query pools.

Synchronization commands introduce explicit execution and memory dependencies between two sets of action commands, where the second set of commands depends on the first set of commands. These dependencies enforce that both the execution of certain pipeline stages in the later set occur after the execution of certain stages in the source set, and that the effects of memory accesses performed by certain pipeline stages occur in order and are visible to each other. When not enforced by an explicit dependency or implicit ordering guarantees, action commands may overlap execution or execute out of order, and may not see the side effects of each other’s memory accesses.

The device executes queue operations asynchronously with respect to the host. Control is returned to an application immediately following command buffer submission to a queue. The application must synchronize work between the host and device as needed.

2.3. Object Model

The devices, queues, and other entities in Vulkan are represented by Vulkan objects. At the API level, all objects are referred to by handles. There are two classes of handles, dispatchable and non-dispatchable. Dispatchable handle types are a pointer to an opaque type. This pointer may be used by layers as part of intercepting API commands, and thus each API command takes a dispatchable type as its first parameter. Each object of a dispatchable type must have a unique handle value during its lifetime.

Non-dispatchable handle types are a 64-bit integer type whose meaning is implementation-dependent, and may encode object information directly in the handle rather than pointing to a software structure. Objects of a non-dispatchable type may not have unique handle values within a type or across types. If handle values are not unique, then destroying one such handle must not cause identical handles of other types to become invalid, and must not cause identical handles of the same type to become invalid if that handle value has been created more times than it has been destroyed.
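
The vulkan.h header reflects this distinction with macros along the following lines (paraphrased and simplified here; the actual platform test in the header covers additional 64-bit targets). On platforms without 64-bit pointers, non-dispatchable handles are plain 64-bit integers rather than pointer types.

#define VK_DEFINE_HANDLE(object) typedef struct object##_T* object;

#if defined(__LP64__) || defined(_WIN64)
    #define VK_DEFINE_NON_DISPATCHABLE_HANDLE(object) typedef struct object##_T *object;
#else
    #define VK_DEFINE_NON_DISPATCHABLE_HANDLE(object) typedef uint64_t object;
#endif

VK_DEFINE_HANDLE(VkDevice)                    /* dispatchable */
VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkBuffer)   /* non-dispatchable */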

All objects created or allocated from a VkDevice (i.e. with a VkDevice as the first parameter) are private to that device, and must not be used on other devices.

2.3.1. Object Lifetime

Objects are created or allocated by vkCreate* and vkAllocate* commands, respectively. Once an object is created or allocated, its “structure” is considered to be immutable, though the contents of certain object types is still free to change. Objects are destroyed or freed by vkDestroy* and vkFree* commands, respectively.

Objects that are allocated (rather than created) take resources from an existing pool object or memory heap, and when freed return resources to that pool or heap. While object creation and destruction are generally expected to be low-frequency occurrences during runtime, allocating and freeing objects can occur at high frequency. Pool objects help improve the performance of these high-frequency allocations and frees.
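
The pattern below sketches this distinction, assuming device is a valid VkDevice and queueFamilyIndex was selected earlier: a command pool is created once, command buffers are allocated from and freed back to it at higher frequency, and the pool is destroyed last.

#include <vulkan/vulkan.h>

void pool_lifetime_sketch(VkDevice device, uint32_t queueFamilyIndex)
{
    VkCommandPoolCreateInfo poolInfo = { VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO };
    poolInfo.queueFamilyIndex = queueFamilyIndex;

    VkCommandPool pool;
    vkCreateCommandPool(device, &poolInfo, NULL, &pool);

    /* Allocation from the pool can happen at high frequency. */
    VkCommandBufferAllocateInfo allocInfo =
        { VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO };
    allocInfo.commandPool        = pool;
    allocInfo.level              = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
    allocInfo.commandBufferCount = 1;

    VkCommandBuffer commandBuffer;
    vkAllocateCommandBuffers(device, &allocInfo, &commandBuffer);

    /* ... record, submit, and wait for the command buffer to complete ... */

    vkFreeCommandBuffers(device, pool, 1, &commandBuffer);
    vkDestroyCommandPool(device, pool, NULL);
}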

It is an application’s responsibility to track the lifetime of Vulkan objects, and not to destroy them while they are still in use.

Application-owned memory is immediately consumed by any Vulkan command it is passed into. The application can alter or free this memory as soon as the commands that consume it have returned.

The following object types are consumed when they are passed into a Vulkan command and not further accessed by the objects they are used to create. They must not be destroyed in the duration of any API command they are passed into:

  • VkShaderModule

  • VkPipelineCache

A VkPipelineLayout object must not be destroyed while any command buffer that uses it is in the recording state.

VkDescriptorSetLayout objects may be accessed by commands that operate on descriptor sets allocated using that layout, and those descriptor sets must not be updated with vkUpdateDescriptorSets after the descriptor set layout has been destroyed. Otherwise, descriptor set layouts can be destroyed any time they are not in use by an API command.

The application must not destroy any other type of Vulkan object until all uses of that object by the device (such as via command buffer execution) have completed.

The following Vulkan objects must not be destroyed while any command buffers using the object are in the pending state:

  • VkEvent

  • VkQueryPool

  • VkBuffer

  • VkBufferView

  • VkImage

  • VkImageView

  • VkPipeline

  • VkSampler

  • VkDescriptorPool

  • VkFramebuffer

  • VkRenderPass

  • VkCommandBuffer

  • VkCommandPool

  • VkDeviceMemory

  • VkDescriptorSet

  • VkObjectTableNVX

  • VkIndirectCommandsLayoutNVX

Destroying these objects will move any command buffers that are in the recording or executable state, and are using those objects, to the invalid state.

The following Vulkan objects must not be destroyed while any queue is executing commands that use the object:

  • VkFence

  • VkSemaphore

  • VkCommandBuffer

  • VkCommandPool

In general, objects can be destroyed or freed in any order, even if the object being freed is involved in the use of another object (e.g. use of a resource in a view, use of a view in a descriptor set, use of an object in a command buffer, binding of a memory allocation to a resource), as long as any object that uses the freed object is not further used in any way except to be destroyed or to be reset in such a way that it no longer uses the other object (such as resetting a command buffer). If the object has been reset, then it can be used as if it never used the freed object. An exception to this is when there is a parent/child relationship between objects. In this case, the application must not destroy a parent object before its children, except when the parent is explicitly defined to free its children when it is destroyed (e.g. for pool objects, as defined below).

VkCommandPool objects are parents of VkCommandBuffer objects. VkDescriptorPool objects are parents of VkDescriptorSet objects. VkDevice objects are parents of many object types (all that take a VkDevice as a parameter to their creation).

The following Vulkan objects have specific restrictions for when they can be destroyed:

  • VkQueue objects cannot be explicitly destroyed. Instead, they are implicitly destroyed when the VkDevice object they are retrieved from is destroyed.

  • Destroying a pool object implicitly frees all objects allocated from that pool. Specifically, destroying VkCommandPool frees all VkCommandBuffer objects that were allocated from it, and destroying VkDescriptorPool frees all VkDescriptorSet objects that were allocated from it.

  • VkDevice objects can be destroyed when all VkQueue objects retrieved from them are idle, and all objects created from them have been destroyed. This includes the following objects:

    • VkFence

    • VkSemaphore

    • VkEvent

    • VkQueryPool

    • VkBuffer

    • VkBufferView

    • VkImage

    • VkImageView

    • VkShaderModule

    • VkPipelineCache

    • VkPipeline

    • VkPipelineLayout

    • VkSampler

    • VkDescriptorSetLayout

    • VkDescriptorPool

    • VkFramebuffer

    • VkRenderPass

    • VkCommandPool

    • VkCommandBuffer

    • VkDeviceMemory

  • VkPhysicalDevice objects cannot be explicitly destroyed. Instead, they are implicitly destroyed when the VkInstance object they are retrieved from is destroyed.

  • VkInstance objects can be destroyed once all VkDevice objects created from any of its VkPhysicalDevice objects have been destroyed.

2.3.2. External Object Handles

As defined above, the scope of object handles created or allocated from a VkDevice is limited to that logical device. Objects which are not in scope are said to be external. To bring an external object into scope, an external handle must be exported from the object in the source scope and imported into the destination scope.

Note

The scope of external handles and their associated resources may vary according to their type, but they can generally be shared across process and API boundaries.

2.4. Command Syntax and Duration

The Specification describes Vulkan commands as functions or procedures using C99 syntax. Language bindings for other languages such as C++ and JavaScript may allow for stricter parameter passing, or object-oriented interfaces.

Vulkan uses the standard C types for the base type of scalar parameters (e.g. types from stdint.h), with exceptions described below, or elsewhere in the text when appropriate:

VkBool32 represents boolean True and False values, since C does not have a sufficiently portable built-in boolean type:

typedef uint32_t VkBool32;

VK_TRUE represents a boolean True (integer 1) value, and VK_FALSE a boolean False (integer 0) value.

All values returned from a Vulkan implementation in a VkBool32 will be either VK_TRUE or VK_FALSE.

Applications must not pass any other values than VK_TRUE or VK_FALSE into a Vulkan implementation where a VkBool32 is expected.

VkDeviceSize represents device memory size and offset values:

typedef uint64_t VkDeviceSize;

Commands that create Vulkan objects are of the form vkCreate* and take Vk*CreateInfo structures with the parameters needed to create the object. These Vulkan objects are destroyed with commands of the form vkDestroy*.

The last in-parameter to each command that creates or destroys a Vulkan object is pAllocator. The pAllocator parameter can be set to a non-NULL value such that allocations for the given object are delegated to an application provided callback; refer to the Memory Allocation chapter for further details.

Commands that allocate Vulkan objects owned by pool objects are of the form vkAllocate*, and take Vk*AllocateInfo structures. These Vulkan objects are freed with commands of the form vkFree*. These objects do not take allocators; if host memory is needed, they will use the allocator that was specified when their parent pool was created.
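
A typical instance of this pattern, assuming device is a valid VkDevice, is buffer creation; passing NULL as pAllocator selects the implementation’s default host memory allocation behavior.

#include <vulkan/vulkan.h>

void buffer_create_destroy_sketch(VkDevice device)
{
    VkBufferCreateInfo bufferInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
    bufferInfo.size        = 65536;                           /* bytes */
    bufferInfo.usage       = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;
    bufferInfo.sharingMode = VK_SHARING_MODE_EXCLUSIVE;

    VkBuffer buffer;
    vkCreateBuffer(device, &bufferInfo, NULL /* pAllocator */, &buffer);

    /* ... bind device memory to the buffer and use it ... */

    vkDestroyBuffer(device, buffer, NULL /* pAllocator */);
}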

Commands are recorded into a command buffer by calling API commands of the form vkCmd*. Each such command may have different restrictions on where it can be used: in a primary and/or secondary command buffer, inside and/or outside a render pass, and in one or more of the supported queue types. These restrictions are documented together with the definition of each such command.
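
For example, a command buffer that copies data between two buffers can be recorded as sketched below, assuming cmd, srcBuffer, and dstBuffer are valid handles; vkCmdCopyBuffer is an action command that is allowed in both primary and secondary command buffers, but only outside a render pass.

#include <vulkan/vulkan.h>

void record_copy(VkCommandBuffer cmd, VkBuffer srcBuffer, VkBuffer dstBuffer)
{
    VkCommandBufferBeginInfo beginInfo = { VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO };
    beginInfo.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT;

    vkBeginCommandBuffer(cmd, &beginInfo);

    VkBufferCopy region = { 0, 0, 65536 };      /* srcOffset, dstOffset, size */
    vkCmdCopyBuffer(cmd, srcBuffer, dstBuffer, 1, &region);

    /* Deferred run time errors from the vkCmd* calls, if any,
       are reported by vkEndCommandBuffer. */
    vkEndCommandBuffer(cmd);
}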

The duration of a Vulkan command refers to the interval between calling the command and its return to the caller.

2.4.1. Lifetime of Retrieved Results

Information is retrieved from the implementation with commands of the form vkGet* and vkEnumerate*.

Unless otherwise specified for an individual command, the results are invariant; that is, they will remain unchanged when retrieved again by calling the same command with the same parameters, so long as those parameters themselves all remain valid.

2.5. Threading Behavior

Vulkan is intended to provide scalable performance when used on multiple host threads. All commands support being called concurrently from multiple threads, but certain parameters, or components of parameters are defined to be externally synchronized. This means that the caller must guarantee that no more than one thread is using such a parameter at a given time.

More precisely, Vulkan commands use simple stores to update software structures representing Vulkan objects. A parameter declared as externally synchronized may have its software structures updated at any time during the host execution of the command. If two commands operate on the same object and at least one of the commands declares the object to be externally synchronized, then the caller must guarantee not only that the commands do not execute simultaneously, but also that the two commands are separated by an appropriate memory barrier (if needed).

Note

Memory barriers are particularly relevant on the ARM CPU architecture which is more weakly ordered than many developers are accustomed to from x86/x64 programming. Fortunately, most higher-level synchronization primitives (like the pthread library) perform memory barriers as a part of mutual exclusion, so mutexing Vulkan objects via these primitives will have the desired effect.

Many object types are immutable, meaning the objects cannot change once they have been created. These types of objects never need external synchronization, except that they must not be destroyed while they are in use on another thread. In certain special cases, mutable object parameters are internally synchronized such that they do not require external synchronization. One example of this is the use of a VkPipelineCache in vkCreateGraphicsPipelines and vkCreateComputePipelines, where external synchronization around such a heavyweight command would be impractical. The implementation must internally synchronize the cache in this example, and may be able to do so in the form of a much finer-grained mutex around the command. Any command parameters that are not labeled as externally synchronized are either not mutated by the command or are internally synchronized. Additionally, certain objects related to a command’s parameters (e.g. command pools and descriptor pools) may be affected by a command, and must also be externally synchronized. These implicit parameters are documented as described below.
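
As an illustration, a command pool is an externally synchronized implicit parameter when command buffers are allocated from it, so two application threads sharing one pool must serialize their calls. A minimal sketch using pthreads (one of many possible host synchronization choices) follows.

#include <pthread.h>
#include <vulkan/vulkan.h>

static pthread_mutex_t poolMutex = PTHREAD_MUTEX_INITIALIZER;

VkCommandBuffer allocate_from_shared_pool(VkDevice device, VkCommandPool pool)
{
    VkCommandBufferAllocateInfo allocInfo =
        { VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO };
    allocInfo.commandPool        = pool;
    allocInfo.level              = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
    allocInfo.commandBufferCount = 1;

    VkCommandBuffer commandBuffer;
    /* The mutex both serializes access to the externally synchronized pool
       and provides the memory barrier discussed above. */
    pthread_mutex_lock(&poolMutex);
    vkAllocateCommandBuffers(device, &allocInfo, &commandBuffer);
    pthread_mutex_unlock(&poolMutex);
    return commandBuffer;
}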

Parameters of commands that are externally synchronized are listed below.

Externally Synchronized Parameters

There are also a few instances where a command can take in a user allocated list whose contents are externally synchronized parameters. In these cases, the caller must guarantee that at most one thread is using a given element within the list at a given time. These parameters are listed below.

Externally Synchronized Parameter Lists
  • Each element of the pWaitSemaphores member of each element of the pSubmits parameter in vkQueueSubmit

  • Each element of the pSignalSemaphores member of each element of the pSubmits parameter in vkQueueSubmit

  • Each element of the pWaitSemaphores member of each element of the pBindInfo parameter in vkQueueBindSparse

  • Each element of the pSignalSemaphores member of each element of the pBindInfo parameter in vkQueueBindSparse

  • The buffer member of each element of the pBufferBinds member of each element of the pBindInfo parameter in vkQueueBindSparse

  • The image member of each element of the pImageOpaqueBinds member of each element of the pBindInfo parameter in vkQueueBindSparse

  • The image member of each element of the pImageBinds member of each element of the pBindInfo parameter in vkQueueBindSparse

  • Each element of the pFences parameter in vkResetFences

  • Each element of the pDescriptorSets parameter in vkFreeDescriptorSets

  • The dstSet member of each element of the pDescriptorWrites parameter in vkUpdateDescriptorSets

  • The dstSet member of each element of the pDescriptorCopies parameter in vkUpdateDescriptorSets

  • Each element of the pCommandBuffers parameter in vkFreeCommandBuffers

  • Each element of the pWaitSemaphores member of the pPresentInfo parameter in vkQueuePresentKHR

  • Each element of the pSwapchains member of the pPresentInfo parameter in vkQueuePresentKHR

  • The surface member of each element of the pCreateInfos parameter in vkCreateSharedSwapchainsKHR

  • The oldSwapchain member of each element of the pCreateInfos parameter in vkCreateSharedSwapchainsKHR

In addition, there are some implicit parameters that need to be externally synchronized. For example, all commandBuffer parameters that need to be externally synchronized imply that the commandPool that was passed in when creating that command buffer also needs to be externally synchronized. The implicit parameters and their associated object are listed below.

Implicit Externally Synchronized Parameters

2.6. Errors

Vulkan is a layered API. The lowest layer is the core Vulkan layer, as defined by this Specification. The application can use additional layers above the core for debugging, validation, and other purposes.

One of the core principles of Vulkan is that building and submitting command buffers should be highly efficient. Thus error checking and validation of state in the core layer is minimal, although more rigorous validation can be enabled through the use of layers.

The core layer assumes applications are using the API correctly. Except as documented elsewhere in the Specification, the behavior of the core layer to an application using the API incorrectly is undefined, and may include program termination. However, implementations must ensure that incorrect usage by an application does not affect the integrity of the operating system, the Vulkan implementation, or other Vulkan client applications in the system, and does not allow one application to access data belonging to another application. Applications can request stronger robustness guarantees by enabling the robustBufferAccess feature as described in Features, Limits, and Formats.

Validation of correct API usage is left to validation layers. Applications should be developed with validation layers enabled, to help catch and eliminate errors. Once validated, released applications should not enable validation layers by default.

2.6.1. Valid Usage

Valid usage defines a set of conditions which must be met in order to achieve well-defined run-time behavior in an application. These conditions depend only on Vulkan state, and the parameters or objects whose usage is constrained by the condition.

Some valid usage conditions have dependencies on run-time limits or feature availability. It is possible to validate these conditions against Vulkan’s minimum supported values for these limits and features, or some subset of other known values.

Valid usage conditions do not cover conditions where well-defined behavior (including returning an error code) exists.

Valid usage conditions should apply to the command or structure where complete information about the condition would be known during execution of an application. This is such that a validation layer or linter can be written directly against these statements at the point they are specified.

Note

This does lead to some non-obvious places for valid usage statements. For instance, the valid values for a structure might depend on a separate value in the calling command. In this case, the structure itself will not carry this valid usage statement, as validity cannot be determined from the structure alone; instead, the valid usage statement is attached to the calling command.

Another example is draw state - the state setters are independent, and can cause a legitimately invalid state configuration between draw calls; so the valid usage statements are attached to the place where all state needs to be valid - at the draw command.

Valid usage conditions are described in a block labelled “Valid Usage” following each command or structure they apply to.

2.6.2. Implicit Valid Usage

Some valid usage conditions apply to all commands and structures in the API, unless explicitly denoted otherwise for a specific command or structure. These conditions are considered implicit, and are described in a block labelled “Valid Usage (Implicit)” following each command or structure they apply to. Implicit valid usage conditions are described in detail below.

Valid Usage for Object Handles

Any input parameter to a command that is an object handle must be a valid object handle, unless otherwise specified. An object handle is valid if:

  • It has been created or allocated by a previous, successful call to the API. Such calls are noted in the specification.

  • It has not been deleted or freed by a previous call to the API. Such calls are noted in the specification.

  • Any objects used by that object, either as part of creation or execution, must also be valid.

The reserved values VK_NULL_HANDLE and NULL can be used in place of valid non-dispatchable handles and dispatchable handles, respectively, when explicitly called out in the specification. Any command that creates an object successfully must not return these values. It is valid to pass these values to vkDestroy* or vkFree* commands, which will silently ignore these values.

Valid Usage for Pointers

Any parameter that is a pointer must be a valid pointer. A pointer is valid if it points at memory containing values of the number and type(s) expected by the command, and all fundamental types accessed through the pointer (e.g. as elements of an array or as members of a structure) satisfy the alignment requirements of the host processor.

Valid Usage for Enumerated Types

Any parameter of an enumerated type must be a valid enumerant for that type. An enumerant is valid if:

  • The enumerant is defined as part of the enumerated type.

  • The enumerant is not one of the special values defined for the enumerated type, which are suffixed with _BEGIN_RANGE, _END_RANGE, _RANGE_SIZE or _MAX_ENUM.

Any enumerated type returned from a query command or otherwise output from Vulkan to the application must not have a reserved value. Reserved values are values not defined by any extension for that enumerated type.

Note

This language is intended to accommodate cases such as “hidden” extensions known only to driver internals, or layers enabling extensions without knowledge of the application, without allowing return of values not defined by any extension.

Valid Usage for Flags

A collection of flags is represented by a bitmask using the type VkFlags:

typedef uint32_t VkFlags;

Bitmasks are passed to many commands and structures to compactly represent options, but VkFlags is not used directly in the API. Instead, a Vk*Flags type, which is an alias of VkFlags and whose name matches the corresponding Vk*FlagBits that are valid for that type, is used. These aliases are described in the Flag Types appendix of the Specification.

Any Vk*Flags member or parameter used in the API as an input must be a valid combination of bit flags. A valid combination is either zero or the bitwise OR of valid bit flags. A bit flag is valid if:

  • The bit flag is defined as part of the Vk*FlagBits type, where the bits type is obtained by taking the flag type and replacing the trailing Flags with FlagBits. For example, a flag value of type VkColorComponentFlags must contain only bit flags defined by VkColorComponentFlagBits.

  • The flag is allowed in the context in which it is being used. For example, in some cases, certain bit flags or combinations of bit flags are mutually exclusive.

Any Vk*Flags member or parameter returned from a query command or otherwise output from Vulkan to the application may contain bit flags undefined in its corresponding Vk*FlagBits type. An application cannot rely on the state of these unspecified bits.

Valid Usage for Structure Types

Any parameter that is a structure containing a sType member must have a value of sType which is a valid VkStructureType value matching the type of the structure. As a general rule, the name of this value is obtained by taking the structure name, stripping the leading Vk, prefixing each capital letter with _, converting the entire resulting string to upper case, and prefixing it with VK_STRUCTURE_TYPE_. For example, structures of type VkImageCreateInfo must have a sType value of VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO.
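
For example, a VkImageCreateInfo is tagged as follows before being passed to vkCreateImage:

VkImageCreateInfo imageInfo;
imageInfo.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;   /* matches the structure type */
imageInfo.pNext = NULL;
/* ... the remaining members describe the image to be created ... */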

The values VK_STRUCTURE_TYPE_LOADER_INSTANCE_CREATE_INFO and VK_STRUCTURE_TYPE_LOADER_DEVICE_CREATE_INFO are reserved for internal use by the loader, and do not have corresponding Vulkan structures in this specification.

The list of supported structure types is defined in an appendix.

Valid Usage for Structure Pointer Chains

Any parameter that is a structure containing a void* pNext member must have a value of pNext that is either NULL, or points to a valid structure defined by an extension, containing sType and pNext members as described in the Vulkan Documentation and Extensions document in the section “Extension Interactions”. The set of structures connected by pNext pointers is referred to as a pNext chain. If that extension is supported by the implementation, then it must be enabled.

Each type of valid structure must not appear more than once in a pNext chain.

Any component of the implementation (the loader, any enabled layers, and drivers) must skip over, without processing (other than reading the sType and pNext members) any structures in the chain with sType values not defined by extensions supported by that component.

Extension structures are not described in the base Vulkan specification, but either in layered specifications incorporating those extensions, or in separate vendor-provided documents.
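
The sketch below illustrates only the chaining mechanism; VkExampleExtensionInfoEXT and its sType value are hypothetical stand-ins for a structure that would be defined by a real extension.

typedef struct VkExampleExtensionInfoEXT {           /* hypothetical extension structure */
    VkStructureType    sType;                        /* value defined by the extension   */
    const void*        pNext;
    uint32_t           exampleValue;
} VkExampleExtensionInfoEXT;

VkExampleExtensionInfoEXT extensionInfo;
extensionInfo.sType        = (VkStructureType)0;     /* placeholder for the extension's sType */
extensionInfo.pNext        = NULL;                   /* end of the pNext chain */
extensionInfo.exampleValue = 1;

VkImageCreateInfo imageInfo;
imageInfo.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
imageInfo.pNext = &extensionInfo;                    /* base structure points at the extension */
/* ... remaining members of imageInfo ... */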

Valid Usage for Nested Structures

The above conditions also apply recursively to members of structures provided as input to a command, either as a direct argument to the command, or themselves a member of another structure.

Specifics on valid usage of each command are covered in their individual sections.

2.6.3. Return Codes

While the core Vulkan API is not designed to capture incorrect usage, some circumstances still require return codes. Commands in Vulkan return their status via return codes that are in one of two categories:

  • Successful completion codes are returned when a command needs to communicate success or status information. All successful completion codes are non-negative values.

  • Run time error codes are returned when a command needs to communicate a failure that could only be detected at run time. All run time error codes are negative values.

All return codes in Vulkan are reported via VkResult return values. The possible codes are:

typedef enum VkResult {
    VK_SUCCESS = 0,
    VK_NOT_READY = 1,
    VK_TIMEOUT = 2,
    VK_EVENT_SET = 3,
    VK_EVENT_RESET = 4,
    VK_INCOMPLETE = 5,
    VK_ERROR_OUT_OF_HOST_MEMORY = -1,
    VK_ERROR_OUT_OF_DEVICE_MEMORY = -2,
    VK_ERROR_INITIALIZATION_FAILED = -3,
    VK_ERROR_DEVICE_LOST = -4,
    VK_ERROR_MEMORY_MAP_FAILED = -5,
    VK_ERROR_LAYER_NOT_PRESENT = -6,
    VK_ERROR_EXTENSION_NOT_PRESENT = -7,
    VK_ERROR_FEATURE_NOT_PRESENT = -8,
    VK_ERROR_INCOMPATIBLE_DRIVER = -9,
    VK_ERROR_TOO_MANY_OBJECTS = -10,
    VK_ERROR_FORMAT_NOT_SUPPORTED = -11,
    VK_ERROR_FRAGMENTED_POOL = -12,
    VK_ERROR_SURFACE_LOST_KHR = -1000000000,
    VK_ERROR_NATIVE_WINDOW_IN_USE_KHR = -1000000001,
    VK_SUBOPTIMAL_KHR = 1000001003,
    VK_ERROR_OUT_OF_DATE_KHR = -1000001004,
    VK_ERROR_INCOMPATIBLE_DISPLAY_KHR = -1000003001,
    VK_ERROR_VALIDATION_FAILED_EXT = -1000011001,
    VK_ERROR_INVALID_SHADER_NV = -1000012000,
    VK_ERROR_OUT_OF_POOL_MEMORY_KHR = -1000069000,
    VK_ERROR_INVALID_EXTERNAL_HANDLE_KHX = -1000072003,
} VkResult;

Success Codes
  • VK_SUCCESS Command successfully completed

  • VK_NOT_READY A fence or query has not yet completed

  • VK_TIMEOUT A wait operation has not completed in the specified time

  • VK_EVENT_SET An event is signaled

  • VK_EVENT_RESET An event is unsignaled

  • VK_INCOMPLETE A return array was too small for the result

  • VK_SUBOPTIMAL_KHR A swapchain no longer matches the surface properties exactly, but can still be used to present to the surface successfully.

Error Codes
  • VK_ERROR_OUT_OF_HOST_MEMORY A host memory allocation has failed.

  • VK_ERROR_OUT_OF_DEVICE_MEMORY A device memory allocation has failed.

  • VK_ERROR_INITIALIZATION_FAILED Initialization of an object could not be completed for implementation-specific reasons.

  • VK_ERROR_DEVICE_LOST The logical or physical device has been lost. See Lost Device

  • VK_ERROR_MEMORY_MAP_FAILED Mapping of a memory object has failed.

  • VK_ERROR_LAYER_NOT_PRESENT A requested layer is not present or could not be loaded.

  • VK_ERROR_EXTENSION_NOT_PRESENT A requested extension is not supported.

  • VK_ERROR_FEATURE_NOT_PRESENT A requested feature is not supported.

  • VK_ERROR_INCOMPATIBLE_DRIVER The requested version of Vulkan is not supported by the driver or is otherwise incompatible for implementation-specific reasons.

  • VK_ERROR_TOO_MANY_OBJECTS Too many objects of the type have already been created.

  • VK_ERROR_FORMAT_NOT_SUPPORTED A requested format is not supported on this device.

  • VK_ERROR_FRAGMENTED_POOL A pool allocation has failed due to fragmentation of the pool’s memory. This must only be returned if no attempt to allocate host or device memory was made to accommodate the new allocation. This should be returned in preference to VK_ERROR_OUT_OF_POOL_MEMORY_KHR, but only if the implementation is certain that the pool allocation failure was due to fragmentation.

  • VK_ERROR_SURFACE_LOST_KHR A surface is no longer available.

  • VK_ERROR_NATIVE_WINDOW_IN_USE_KHR The requested window is already in use by Vulkan or another API in a manner which prevents it from being used again.

  • VK_ERROR_OUT_OF_DATE_KHR A surface has changed in such a way that it is no longer compatible with the swapchain, and further presentation requests using the swapchain will fail. Applications must query the new surface properties and recreate their swapchain if they wish to continue presenting to the surface.

  • VK_ERROR_INCOMPATIBLE_DISPLAY_KHR The display used by a swapchain does not use the same presentable image layout, or is incompatible in a way that prevents sharing an image.

  • VK_ERROR_INVALID_SHADER_NV One or more shaders failed to compile or link. More details are reported back to the application via VK_EXT_debug_report if enabled.

  • VK_ERROR_OUT_OF_POOL_MEMORY_KHR A pool memory allocation has failed. This must only be returned if no attempt to allocate host or device memory was made to accommodate the new allocation. If the failure was definitely due to fragmentation of the pool, VK_ERROR_FRAGMENTED_POOL should be returned instead.

  • VK_ERROR_INVALID_EXTERNAL_HANDLE_KHX An external handle is not a valid handle of the specified type.

If a command returns a run time error, it will leave any result pointers unmodified, unless other behavior is explicitly defined in the specification.

Out of memory errors do not damage any currently existing Vulkan objects. Objects that have already been successfully created can still be used by the application.

Performance-critical commands generally do not have return codes. If a run time error occurs in such commands, the implementation will defer reporting the error until a specified point. For commands that record into command buffers (vkCmd*) run time errors are reported by vkEndCommandBuffer.
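
A sketch of how an application might react to these categories, reusing the buffer creation call shown earlier and assuming device and bufferInfo are valid:

VkBuffer buffer;
VkResult result = vkCreateBuffer(device, &bufferInfo, NULL, &buffer);

if (result == VK_SUCCESS) {
    /* non-negative codes indicate success; buffer is now a valid handle */
} else if (result == VK_ERROR_OUT_OF_HOST_MEMORY ||
           result == VK_ERROR_OUT_OF_DEVICE_MEMORY) {
    /* out-of-memory errors leave previously created objects intact;
       the application can release resources and try again */
} else {
    /* any other negative VkResult is a run time error; result pointers
       (here, buffer) are left unmodified */
}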

2.7. Numeric Representation and Computation

Implementations normally perform computations in floating-point, and must meet the range and precision requirements defined under “Floating-Point Computation” below.

These requirements only apply to computations performed in Vulkan operations outside of shader execution, such as texture image specification and sampling, and per-fragment operations. Range and precision requirements during shader execution differ and are specified by the Precision and Operation of SPIR-V Instructions section.

In some cases, the representation and/or precision of operations is implicitly limited by the specified format of vertex or texel data consumed by Vulkan. Specific floating-point formats are described later in this section.

2.7.1. Floating-Point Computation

Most floating-point computation is performed in SPIR-V shader modules. The properties of computation within shaders are constrained as defined by the Precision and Operation of SPIR-V Instructions section.

Some floating-point computation is performed outside of shaders, such as viewport and depth range calculations. For these computations, we do not specify how floating-point numbers are to be represented, or the details of how operations on them are performed, but only place minimal requirements on representation and precision as described in the remainder of this section.

We require simply that numbers' floating-point parts contain enough bits and that their exponent fields are large enough so that individual results of floating-point operations are accurate to about 1 part in 10^5. The maximum representable magnitude for all floating-point values must be at least 2^32.

x × 0 = 0 × x = 0 for any non-infinite and non-NaN x.

1 × x = x × 1 = x.

x + 0 = 0 + x = x.

0^0 = 1.

Occasionally, further requirements will be specified. Most single-precision floating-point formats meet these requirements.

The special values Inf and -Inf encode values with magnitudes too large to be represented; the special value NaN encodes “Not A Number” values resulting from undefined arithmetic operations such as 0 / 0. Implementations may support Inf and NaN in their floating-point computations.

Any representable floating-point value is legal as input to a Vulkan command that requires floating-point data. The result of providing a value that is not a floating-point number to such a command is unspecified, but must not lead to Vulkan interruption or termination. In IEEE 754 arithmetic, for example, providing a negative zero or a denormalized number to a Vulkan command must yield deterministic results, while providing a NaN or Inf yields unspecified results.

2.7.2. 16-Bit Floating-Point Numbers

16-bit floating point numbers are defined in the “16-bit floating point numbers” section of the Khronos Data Format Specification.

Any representable 16-bit floating-point value is legal as input to a Vulkan command that accepts 16-bit floating-point data. The result of providing a value that is not a floating-point number (such as Inf or NaN) to such a command is unspecified, but must not lead to Vulkan interruption or termination. Providing a denormalized number or negative zero to Vulkan must yield deterministic results.

2.7.3. Unsigned 11-Bit Floating-Point Numbers

Unsigned 11-bit floating point numbers are defined in the “Unsigned 11-bit floating point numbers” section of the Khronos Data Format Specification.

When a floating-point value is converted to an unsigned 11-bit floating-point representation, finite values are rounded to the closest representable finite value.

While less accurate, implementations are allowed to always round in the direction of zero. This means negative values are converted to zero. Likewise, finite positive values greater than 65024 (the maximum finite representable unsigned 11-bit floating-point value) are converted to 65024. Additionally: negative infinity is converted to zero; positive infinity is converted to positive infinity; and both positive and negative NaN are converted to positive NaN.

Any representable unsigned 11-bit floating-point value is legal as input to a Vulkan command that accepts 11-bit floating-point data. The result of providing a value that is not a floating-point number (such as Inf or NaN) to such a command is unspecified, but must not lead to Vulkan interruption or termination. Providing a denormalized number to Vulkan must yield deterministic results.

2.7.4. Unsigned 10-Bit Floating-Point Numbers

Unsigned 10-bit floating point numbers are defined in the “Unsigned 10-bit floating point numbers” section of the Khronos Data Format Specification.

When a floating-point value is converted to an unsigned 10-bit floating-point representation, finite values are rounded to the closest representable finite value.

While less accurate, implementations are allowed to always round in the direction of zero. This means negative values are converted to zero. Likewise, finite positive values greater than 64512 (the maximum finite representable unsigned 10-bit floating-point value) are converted to 64512. Additionally: negative infinity is converted to zero; positive infinity is converted to positive infinity; and both positive and negative NaN are converted to positive NaN.

Any representable unsigned 10-bit floating-point value is legal as input to a Vulkan command that accepts 10-bit floating-point data. The result of providing a value that is not a floating-point number (such as Inf or NaN) to such a command is unspecified, but must not lead to Vulkan interruption or termination. Providing a denormalized number to Vulkan must yield deterministic results.

2.7.5. General Requirements

Some calculations require division. In such cases (including implied divisions performed by vector normalization), division by zero produces an unspecified result but must not lead to Vulkan interruption or termination.

2.8. Fixed-Point Data Conversions

When generic vertex attributes and pixel color or depth components are represented as integers, they are often (but not always) considered to be normalized. Normalized integer values are treated specially when being converted to and from floating-point values, and are usually referred to as normalized fixed-point.

In the remainder of this section, b denotes the bit width of the fixed-point integer representation. When the integer is one of the types defined by the API, b is the bit width of that type. When the integer comes from an image containing color or depth component texels, b is the number of bits allocated to that component in its specified image format.

The signed and unsigned fixed-point representations are assumed to be b-bit binary two’s-complement integers and binary unsigned integers, respectively.

2.8.1. Conversion from Normalized Fixed-Point to Floating-Point

Unsigned normalized fixed-point integers represent numbers in the range [0,1]. The conversion from an unsigned normalized fixed-point value c to the corresponding floating-point value f is defined as

\[f = { c \over { 2^b - 1 } }\]

Signed normalized fixed-point integers represent numbers in the range [-1,1]. The conversion from a signed normalized fixed-point value c to the corresponding floating-point value f is performed using

\[f = \max\left( {c \over {2^{b-1} - 1}}, -1.0 \right)\]

Only the range [-2^(b-1) + 1, 2^(b-1) - 1] is used to represent signed fixed-point values in the range [-1,1]. For example, if b = 8, then the integer value -127 corresponds to -1.0 and the value 127 corresponds to 1.0. Note that while zero is exactly expressible in this representation, one value (-128 in the example) is outside the representable range, and must be clamped before use. This equation is used everywhere that signed normalized fixed-point values are converted to floating-point.

2.8.2. Conversion from Floating-Point to Normalized Fixed-Point

The conversion from a floating-point value f to the corresponding unsigned normalized fixed-point value c is defined by first clamping f to the range [0,1], then computing

c = convertFloatToUint(f × (2^b - 1), b)

where convertFloatToUint(r,b) returns one of the two unsigned binary integer values with exactly b bits which are closest to the floating-point value r. Implementations should round to nearest. If r is equal to an integer, then that integer value must be returned. In particular, if f is equal to 0.0 or 1.0, then c must be assigned 0 or 2^b - 1, respectively.

The conversion from a floating-point value f to the corresponding signed normalized fixed-point value c is performed by clamping f to the range [-1,1], then computing

c = convertFloatToInt(f × (2^(b-1) - 1), b)

where convertFloatToInt(r,b) returns one of the two signed two’s-complement binary integer values with exactly b bits which are closest to the floating-point value r. Implementations should round to nearest. If r is equal to an integer, then that integer value must be returned. In particular, if f is equal to -1.0, 0.0, or 1.0, then c must be assigned -(2^(b-1) - 1), 0, or 2^(b-1) - 1, respectively.

This equation is used everywhere that floating-point values are converted to signed normalized fixed-point.
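
As a purely informative illustration (not part of this Specification), the unsigned conversion above could be implemented as follows for the common case of b = 8; the helper name is hypothetical, and round-to-nearest is used as recommended:

#include <math.h>
#include <stdint.h>

/* Hypothetical helper: convert a floating-point value to an 8-bit unsigned
 * normalized fixed-point value (b = 8), clamping to [0,1] first and rounding
 * to nearest as recommended above. */
static uint8_t float_to_unorm8(float f)
{
    if (f < 0.0f) f = 0.0f;
    if (f > 1.0f) f = 1.0f;
    return (uint8_t)roundf(f * 255.0f);   /* 2^8 - 1 = 255 */
}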

2.9. API Version Numbers and Semantics

The Vulkan version number is used in several places in the API. In each such use, the API major version number, minor version number, and patch version number are packed into a 32-bit integer as follows:

  • The major version number is a 10-bit integer packed into bits 31-22.

  • The minor version number is a 10-bit integer packed into bits 21-12.

  • The patch version number is a 12-bit integer packed into bits 11-0.
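
For illustration only, this packing is equivalent to the following shift-and-or; the helper name is hypothetical, and the VK_MAKE_VERSION macro described in the Version Number Macros appendix performs the same operation:

#include <stdint.h>

/* Hypothetical equivalent of the version packing described above. */
static uint32_t make_api_version(uint32_t major, uint32_t minor, uint32_t patch)
{
    return (major << 22) | (minor << 12) | patch;
}

/* For example, make_api_version(1, 0, 3) encodes API version 1.0.3. */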

Differences in any of the Vulkan version numbers indicate a change to the API in some way, with each part of the version number indicating a different scope of changes.

A difference in patch version numbers indicates that some (usually small) part of the specification or header has been modified, typically to fix a bug, and may have an impact on the behavior of existing functionality. Differences in this version number should not affect either full compatibility or backwards compatibility between two versions, nor add additional interfaces to the API.

A difference in minor version numbers indicates that some amount of new functionality has been added. This will usually include new interfaces in the header, and may also include behavior changes and bug fixes. Functionality may be deprecated in a minor revision, but will not be removed. When a new minor version is introduced, the patch version is reset to 0, and each minor revision maintains its own set of patch versions. Differences in this version should not affect backwards compatibility, but will affect full compatibility.

A difference in major version numbers indicates a large set of changes to the API, potentially including new functionality and header interfaces, behavioral changes, removal of deprecated features, modification or outright replacement of any feature, and is thus very likely to break any and all compatibility. Differences in this version will typically require significant modification to an application in order for it to function.

C language macros for manipulating version numbers are defined in the Version Number Macros appendix.

2.10. Common Object Types

Some types of Vulkan objects are used in many different structures and command parameters, and are described here. These types include offsets, extents, and rectangles.

2.10.1. Offsets

Offsets are used to describe a pixel location within an image or framebuffer, as an (x,y) location for two-dimensional images, or an (x,y,z) location for three-dimensional images.

A two-dimensional offset is defined by the structure:

typedef struct VkOffset2D {
    int32_t    x;
    int32_t    y;
} VkOffset2D;

A three-dimensional offset is defined by the structure:

typedef struct VkOffset3D {
    int32_t    x;
    int32_t    y;
    int32_t    z;
} VkOffset3D;

2.10.2. Extents

Extents are used to describe the size of a rectangular region of pixels within an image or framebuffer, as (width,height) for two-dimensional images, or as (width,height,depth) for three-dimensional images.

A two-dimensional extent is defined by the structure:

typedef struct VkExtent2D {
    uint32_t    width;
    uint32_t    height;
} VkExtent2D;

A three-dimensional extent is defined by the structure:

typedef struct VkExtent3D {
    uint32_t    width;
    uint32_t    height;
    uint32_t    depth;
} VkExtent3D;

2.10.3. Rectangles

Rectangles are used to describe a specified rectangular region of pixels within an image or framebuffer. Rectangles include both an offset and an extent of the same dimensionality, as described above. Two-dimensional rectangles are defined by the structure:

typedef struct VkRect2D {
    VkOffset2D    offset;
    VkExtent2D    extent;
} VkRect2D;
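
As a brief informative example, a rectangle covering a 640×480 region whose upper-left corner is at the origin could be initialized as shown below; the variable name and dimensions are placeholders:

VkRect2D renderArea = {
    .offset = { .x = 0, .y = 0 },
    .extent = { .width = 640, .height = 480 },
};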

3. Initialization

Before using Vulkan, an application must initialize it by loading the Vulkan commands, and creating a VkInstance object.

3.1. Command Function Pointers

Vulkan commands are not necessarily exposed statically on a platform. Function pointers for all Vulkan commands can be obtained with the command:

PFN_vkVoidFunction vkGetInstanceProcAddr(
    VkInstance                                  instance,
    const char*                                 pName);
  • instance is the instance that the function pointer will be compatible with, or NULL for commands not dependent on any instance.

  • pName is the name of the command to obtain.

vkGetInstanceProcAddr itself is obtained in a platform- and loader-specific manner. Typically, the loader library will export this command as a function symbol, so applications can link against the loader library, or load it dynamically and look up the symbol using platform-specific APIs. Loaders are encouraged to export function symbols for all other core Vulkan commands as well; if this is done, then applications that use only the core Vulkan commands have no need to use vkGetInstanceProcAddr.
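
As one possible, purely informative sketch of this process on a platform that provides dlopen (the library name and the use of dlsym are platform assumptions, not requirements of this Specification):

#include <dlfcn.h>
#include <vulkan/vulkan.h>

/* Sketch: load the loader library at runtime, look up vkGetInstanceProcAddr
 * by name, and use it to fetch a command that does not depend on an
 * instance (see Table 1 below). */
static PFN_vkCreateInstance load_vkCreateInstance(void)
{
    void *loader = dlopen("libvulkan.so.1", RTLD_NOW | RTLD_LOCAL);
    if (loader == NULL)
        return NULL;

    PFN_vkGetInstanceProcAddr gipa =
        (PFN_vkGetInstanceProcAddr)dlsym(loader, "vkGetInstanceProcAddr");
    if (gipa == NULL)
        return NULL;

    return (PFN_vkCreateInstance)gipa(NULL, "vkCreateInstance");
}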

The table below defines the various use cases for vkGetInstanceProcAddr and expected return value ("fp" is function pointer) for each case.

The returned function pointer is of type PFN_vkVoidFunction, and must be cast to the type of the command being queried.

Table 1. vkGetInstanceProcAddr behavior

instance         | pName                                                 | return value
-----------------|-------------------------------------------------------|--------------
*                | NULL                                                  | undefined
invalid instance | *                                                     | undefined
NULL             | vkEnumerateInstanceExtensionProperties                | fp
NULL             | vkEnumerateInstanceLayerProperties                    | fp
NULL             | vkCreateInstance                                      | fp
NULL             | * (any pName not covered above)                       | NULL
instance         | core Vulkan command                                   | fp [1]
instance         | enabled instance extension commands for instance      | fp [1]
instance         | available device extension [2] commands for instance  | fp [1]
instance         | * (any pName not covered above)                       | NULL

[1] The returned function pointer must only be called with a dispatchable object (the first parameter) that is instance or a child of instance, e.g. VkInstance, VkPhysicalDevice, VkDevice, VkQueue, or VkCommandBuffer.

[2] An “available extension” is an extension function supported by any of the loader, driver or layer.

Valid Usage (Implicit)
  • If instance is not NULL, instance must be a valid VkInstance handle

  • pName must be a null-terminated string

In order to support systems with multiple Vulkan implementations comprising heterogeneous collections of hardware and software, the function pointers returned by vkGetInstanceProcAddr may point to dispatch code, which calls a different real implementation for different VkDevice objects (and objects created from them). The overhead of this internal dispatch can be avoided by obtaining device-specific function pointers for any commands that use a device or device-child object as their dispatchable object. Such function pointers can be obtained with the command:

PFN_vkVoidFunction vkGetDeviceProcAddr(
    VkDevice                                    device,
    const char*                                 pName);

The table below defines the various use cases for vkGetDeviceProcAddr and expected return value for each case.

The returned function pointer is of type PFN_vkVoidFunction, and must be cast to the type of the command being queried.

Table 2. vkGetDeviceProcAddr behavior

device           | pName                             | return value
-----------------|-----------------------------------|--------------
NULL             | *                                 | undefined
invalid device   | *                                 | undefined
device           | NULL                              | undefined
device           | core Vulkan command               | fp [1]
device           | enabled extension commands        | fp [1]
device           | * (any pName not covered above)   | NULL

[1] The returned function pointer must only be called with a dispatchable object (the first parameter) that is device or a child of device, e.g. VkDevice, VkQueue, or VkCommandBuffer.

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • pName must be a null-terminated string

The definition of PFN_vkVoidFunction is:

typedef void (VKAPI_PTR *PFN_vkVoidFunction)(void);
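
As an informative example, assuming the loader exports vkGetDeviceProcAddr as a function symbol, a device-level entry point can be fetched and cast from PFN_vkVoidFunction to the command’s own pointer type:

#include <vulkan/vulkan.h>

/* Sketch: obtain a device-specific function pointer for vkQueueSubmit,
 * avoiding the instance-level dispatch overhead described above. */
static PFN_vkQueueSubmit get_queue_submit_for_device(VkDevice device)
{
    return (PFN_vkQueueSubmit)vkGetDeviceProcAddr(device, "vkQueueSubmit");
}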

3.1.1. Extending Physical Device From Device Extensions

When the VK_KHR_get_physical_device_properties2 extension is enabled, physical device extension commands and structures can be used with a physical device if the corresponding extension is enumerated by vkEnumerateDeviceExtensionProperties for that physical device, even before a logical device has been created.

To obtain a function pointer for a physical-device command from a device extension, an application can use vkGetInstanceProcAddr. This function pointer may point to dispatch code, which calls a different real implementation for different VkPhysicalDevice objects. Behavior is undefined if an extension physical-device command is called on a physical device that does not support the extension.

Device extensions may define structures that can be added to the pNext chain of physical-device commands. Behavior is undefined if such an extension structure is passed to a physical device command for a physical device that does not support the extension.

3.2. Instances

There is no global state in Vulkan and all per-application state is stored in a VkInstance object. Creating a VkInstance object initializes the Vulkan library and allows the application to pass information about itself to the implementation.

Instances are represented by VkInstance handles:

VK_DEFINE_HANDLE(VkInstance)

To create an instance object, call:

VkResult vkCreateInstance(
    const VkInstanceCreateInfo*                 pCreateInfo,
    const VkAllocationCallbacks*                pAllocator,
    VkInstance*                                 pInstance);
  • pCreateInfo points to an instance of VkInstanceCreateInfo controlling creation of the instance.

  • pAllocator controls host memory allocation as described in the Memory Allocation chapter.

  • pInstance points to a VkInstance handle in which the resulting instance is returned.

vkCreateInstance verifies that the requested layers exist. If not, vkCreateInstance will return VK_ERROR_LAYER_NOT_PRESENT. Next, vkCreateInstance verifies that the requested extensions are supported (e.g. in the implementation or in any enabled instance layer) and, if any requested extension is not supported, vkCreateInstance must return VK_ERROR_EXTENSION_NOT_PRESENT. After verifying and enabling the instance layers and extensions, the VkInstance object is created and returned to the application. If a requested extension is only supported by a layer, both the layer and the extension need to be specified at vkCreateInstance time for the creation to succeed.

Valid Usage (Implicit)
  • pCreateInfo must be a pointer to a valid VkInstanceCreateInfo structure

  • If pAllocator is not NULL, pAllocator must be a pointer to a valid VkAllocationCallbacks structure

  • pInstance must be a pointer to a VkInstance handle

Return Codes
Success
  • VK_SUCCESS

Failure
  • VK_ERROR_OUT_OF_HOST_MEMORY

  • VK_ERROR_OUT_OF_DEVICE_MEMORY

  • VK_ERROR_INITIALIZATION_FAILED

  • VK_ERROR_LAYER_NOT_PRESENT

  • VK_ERROR_EXTENSION_NOT_PRESENT

  • VK_ERROR_INCOMPATIBLE_DRIVER

The VkInstanceCreateInfo structure is defined as:

typedef struct VkInstanceCreateInfo {
    VkStructureType             sType;
    const void*                 pNext;
    VkInstanceCreateFlags       flags;
    const VkApplicationInfo*    pApplicationInfo;
    uint32_t                    enabledLayerCount;
    const char* const*          ppEnabledLayerNames;
    uint32_t                    enabledExtensionCount;
    const char* const*          ppEnabledExtensionNames;
} VkInstanceCreateInfo;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • flags is reserved for future use.

  • pApplicationInfo is NULL or a pointer to an instance of VkApplicationInfo. If not NULL, this information helps implementations recognize behavior inherent to classes of applications. VkApplicationInfo is defined in detail below.

  • enabledLayerCount is the number of global layers to enable.

  • ppEnabledLayerNames is a pointer to an array of enabledLayerCount null-terminated UTF-8 strings containing the names of layers to enable for the created instance. See the Layers section for further details.

  • enabledExtensionCount is the number of global extensions to enable.

  • ppEnabledExtensionNames is a pointer to an array of enabledExtensionCount null-terminated UTF-8 strings containing the names of extensions to enable.

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO

  • pNext must be NULL or a pointer to a valid instance of VkDebugReportCallbackCreateInfoEXT

  • flags must be 0

  • If pApplicationInfo is not NULL, pApplicationInfo must be a pointer to a valid VkApplicationInfo structure

  • If enabledLayerCount is not 0, ppEnabledLayerNames must be a pointer to an array of enabledLayerCount null-terminated strings

  • If enabledExtensionCount is not 0, ppEnabledExtensionNames must be a pointer to an array of enabledExtensionCount null-terminated strings

When creating a Vulkan instance for which you wish to disable validation checks, add a VkValidationFlagsEXT structure to the pNext chain of the VkInstanceCreateInfo structure, specifying the checks to be disabled.

typedef struct VkValidationFlagsEXT {
    VkStructureType          sType;
    const void*              pNext;
    uint32_t                 disabledValidationCheckCount;
    VkValidationCheckEXT*    pDisabledValidationChecks;
} VkValidationFlagsEXT;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • disabledValidationCheckCount is the number of checks to disable.

  • pDisabledValidationChecks is a pointer to an array of values specifying the validation checks to be disabled. Checks which may be specified include:

    typedef enum VkValidationCheckEXT {
        VK_VALIDATION_CHECK_ALL_EXT = 0,
        VK_VALIDATION_CHECK_SHADERS_EXT = 1,
    } VkValidationCheckEXT;
    • VK_VALIDATION_CHECK_ALL_EXT disables all validation checks.

    • VK_VALIDATION_CHECK_SHADERS_EXT disables all shader validation.

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_VALIDATION_FLAGS_EXT

  • pNext must be NULL

  • pDisabledValidationChecks must be a pointer to an array of disabledValidationCheckCount VkValidationCheckEXT values

  • disabledValidationCheckCount must be greater than 0

The VkApplicationInfo structure is defined as:

typedef struct VkApplicationInfo {
    VkStructureType    sType;
    const void*        pNext;
    const char*        pApplicationName;
    uint32_t           applicationVersion;
    const char*        pEngineName;
    uint32_t           engineVersion;
    uint32_t           apiVersion;
} VkApplicationInfo;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • pApplicationName is a pointer to a null-terminated UTF-8 string containing the name of the application.

  • applicationVersion is an unsigned integer variable containing the developer-supplied version number of the application.

  • pEngineName is a pointer to a null-terminated UTF-8 string containing the name of the engine (if any) used to create the application.

  • engineVersion is an unsigned integer variable containing the developer-supplied version number of the engine used to create the application.

  • apiVersion is the version of the Vulkan API against which the application expects to run, encoded as described in the API Version Numbers and Semantics section. If apiVersion is 0 the implementation must ignore it, otherwise if the implementation does not support the requested apiVersion, or an effective substitute for apiVersion, it must return VK_ERROR_INCOMPATIBLE_DRIVER. The patch version number specified in apiVersion is ignored when creating an instance object. Only the major and minor versions of the instance must match those requested in apiVersion.

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_APPLICATION_INFO

  • pNext must be NULL

  • If pApplicationName is not NULL, pApplicationName must be a null-terminated string

  • If pEngineName is not NULL, pEngineName must be a null-terminated string
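
Putting these structures together, a minimal and purely informative instance creation sequence might look like the following; the application and engine names are placeholders, and no layers or extensions are enabled:

#include <vulkan/vulkan.h>

static VkResult create_example_instance(VkInstance *pInstance)
{
    VkApplicationInfo appInfo = {
        .sType              = VK_STRUCTURE_TYPE_APPLICATION_INFO,
        .pNext              = NULL,
        .pApplicationName   = "ExampleApp",        /* placeholder */
        .applicationVersion = 1,
        .pEngineName        = "ExampleEngine",     /* placeholder */
        .engineVersion      = 1,
        .apiVersion         = VK_MAKE_VERSION(1, 0, 0),
    };

    VkInstanceCreateInfo createInfo = {
        .sType                   = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
        .pNext                   = NULL,
        .flags                   = 0,
        .pApplicationInfo        = &appInfo,
        .enabledLayerCount       = 0,
        .ppEnabledLayerNames     = NULL,
        .enabledExtensionCount   = 0,
        .ppEnabledExtensionNames = NULL,
    };

    /* No custom host allocator; see the Memory Allocation chapter. */
    return vkCreateInstance(&createInfo, NULL, pInstance);
}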

To destroy an instance, call:

void vkDestroyInstance(
    VkInstance                                  instance,
    const VkAllocationCallbacks*                pAllocator);
  • instance is the handle of the instance to destroy.

  • pAllocator controls host memory allocation as described in the Memory Allocation chapter.

Valid Usage
  • All child objects created using instance must have been destroyed prior to destroying instance

  • If VkAllocationCallbacks were provided when instance was created, a compatible set of callbacks must be provided here

  • If no VkAllocationCallbacks were provided when instance was created, pAllocator must be NULL

Valid Usage (Implicit)
  • If instance is not NULL, instance must be a valid VkInstance handle

  • If pAllocator is not NULL, pAllocator must be a pointer to a valid VkAllocationCallbacks structure

Host Synchronization
  • Host access to instance must be externally synchronized

4. Devices and Queues

Once Vulkan is initialized, devices and queues are the primary objects used to interact with a Vulkan implementation.

Vulkan separates the concept of physical and logical devices. A physical device usually represents a single device in a system (perhaps made up of several individual hardware devices working together), of which there are a finite number. A logical device represents an application’s view of the device.

Physical devices are represented by VkPhysicalDevice handles:

VK_DEFINE_HANDLE(VkPhysicalDevice)

4.1. Physical Devices

To retrieve a list of physical device objects representing the physical devices installed in the system, call:

VkResult vkEnumeratePhysicalDevices(
    VkInstance                                  instance,
    uint32_t*                                   pPhysicalDeviceCount,
    VkPhysicalDevice*                           pPhysicalDevices);
  • instance is a handle to a Vulkan instance previously created with vkCreateInstance.

  • pPhysicalDeviceCount is a pointer to an integer related to the number of physical devices available or queried, as described below.

  • pPhysicalDevices is either NULL or a pointer to an array of VkPhysicalDevice handles.

If pPhysicalDevices is NULL, then the number of physical devices available is returned in pPhysicalDeviceCount. Otherwise, pPhysicalDeviceCount must point to a variable set by the user to the number of elements in the pPhysicalDevices array, and on return the variable is overwritten with the number of handles actually written to pPhysicalDevices. If pPhysicalDeviceCount is less than the number of physical devices available, at most pPhysicalDeviceCount structures will be written. If pPhysicalDeviceCount is smaller than the number of physical devices available, VK_INCOMPLETE will be returned instead of VK_SUCCESS, to indicate that not all the available physical devices were returned.
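
The following informative sketch shows the two-call idiom described above; error handling is abbreviated (negative VkResult values indicate errors):

#include <stdlib.h>
#include <vulkan/vulkan.h>

/* Sketch: query the number of physical devices first, then retrieve the
 * handles into an appropriately sized array. */
static VkPhysicalDevice *enumerate_physical_devices(VkInstance instance,
                                                    uint32_t *pCount)
{
    if (vkEnumeratePhysicalDevices(instance, pCount, NULL) < 0 || *pCount == 0)
        return NULL;

    VkPhysicalDevice *devices = malloc(*pCount * sizeof(VkPhysicalDevice));
    if (devices != NULL &&
        vkEnumeratePhysicalDevices(instance, pCount, devices) < 0) {
        free(devices);
        devices = NULL;
    }
    return devices;
}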

Valid Usage (Implicit)
  • instance must be a valid VkInstance handle

  • pPhysicalDeviceCount must be a pointer to a uint32_t value

  • If the value referenced by pPhysicalDeviceCount is not 0, and pPhysicalDevices is not NULL, pPhysicalDevices must be a pointer to an array of pPhysicalDeviceCount VkPhysicalDevice handles

Return Codes
Success
  • VK_SUCCESS

  • VK_INCOMPLETE

Failure
  • VK_ERROR_OUT_OF_HOST_MEMORY

  • VK_ERROR_OUT_OF_DEVICE_MEMORY

  • VK_ERROR_INITIALIZATION_FAILED

To query general properties of physical devices once enumerated, call:

void vkGetPhysicalDeviceProperties(
    VkPhysicalDevice                            physicalDevice,
    VkPhysicalDeviceProperties*                 pProperties);
  • physicalDevice is the handle to the physical device whose properties will be queried.

  • pProperties points to an instance of the VkPhysicalDeviceProperties structure that will be filled with the returned information.

Valid Usage (Implicit)
  • physicalDevice must be a valid VkPhysicalDevice handle

  • pProperties must be a pointer to a VkPhysicalDeviceProperties structure

The VkPhysicalDeviceProperties structure is defined as:

typedef struct VkPhysicalDeviceProperties {
    uint32_t                            apiVersion;
    uint32_t                            driverVersion;
    uint32_t                            vendorID;
    uint32_t                            deviceID;
    VkPhysicalDeviceType                deviceType;
    char                                deviceName[VK_MAX_PHYSICAL_DEVICE_NAME_SIZE];
    uint8_t                             pipelineCacheUUID[VK_UUID_SIZE];
    VkPhysicalDeviceLimits              limits;
    VkPhysicalDeviceSparseProperties    sparseProperties;
} VkPhysicalDeviceProperties;
  • apiVersion is the version of Vulkan supported by the device, encoded as described in the API Version Numbers and Semantics section.

  • driverVersion is the vendor-specified version of the driver.

  • vendorID is a unique identifier for the vendor (see below) of the physical device.

  • deviceID is a unique identifier for the physical device among devices available from the vendor.

  • deviceType is a VkPhysicalDeviceType specifying the type of device.

  • deviceName is a null-terminated UTF-8 string containing the name of the device.

  • pipelineCacheUUID is an array of size VK_UUID_SIZE, containing 8-bit values that represent a universally unique identifier for the device.

  • limits is the VkPhysicalDeviceLimits structure which specifies device-specific limits of the physical device. See Limits for details.

  • sparseProperties is the VkPhysicalDeviceSparseProperties structure which specifies various sparse related properties of the physical device. See Sparse Properties for details.

The vendorID and deviceID fields are provided to allow applications to adapt to device characteristics that are not adequately exposed by other Vulkan queries. These may include performance profiles, hardware errata, or other characteristics. In PCI-based implementations, the low sixteen bits of vendorID and deviceID must contain (respectively) the PCI vendor and device IDs associated with the hardware device, and the remaining bits must be set to zero. In non-PCI implementations, the choice of what values to return may be dictated by operating system or platform policies. It is otherwise at the discretion of the implementer, subject to the following constraints and guidelines:

  • For purposes of physical device identification, the vendor of a physical device is the entity responsible for the most salient characteristics of the hardware represented by the physical device handle. In the case of a discrete GPU, this should be the GPU chipset vendor. In the case of a GPU or other accelerator integrated into a system-on-chip (SoC), this should be the supplier of the silicon IP used to create the GPU or other accelerator.

  • If the vendor of the physical device has a valid PCI vendor ID issued by PCI-SIG, that ID should be used to construct vendorID as described above for PCI-based implementations. Implementations that do not return a PCI vendor ID in vendorID must return a valid Khronos vendor ID, obtained as described in the Vulkan Documentation and Extensions document in the section “Registering a Vendor ID with Khronos”. Khronos vendor IDs are allocated starting at 0x10000, to distinguish them from the PCI vendor ID namespace.

  • The vendor of the physical device is responsible for selecting deviceID. The value selected should uniquely identify both the device version and any major configuration options (for example, core count in the case of multicore devices). The same device ID should be used for all physical implementations of that device version and configuration. For example, all uses of a specific silicon IP GPU version and configuration should use the same device ID, even if those uses occur in different SoCs.

The physical devices types are:

typedef enum VkPhysicalDeviceType {
    VK_PHYSICAL_DEVICE_TYPE_OTHER = 0,
    VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU = 1,
    VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU = 2,
    VK_PHYSICAL_DEVICE_TYPE_VIRTUAL_GPU = 3,
    VK_PHYSICAL_DEVICE_TYPE_CPU = 4,
} VkPhysicalDeviceType;
  • VK_PHYSICAL_DEVICE_TYPE_OTHER The device does not match any other available types.

  • VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU The device is typically one embedded in or tightly coupled with the host.

  • VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU The device is typically a separate processor connected to the host via an interlink.

  • VK_PHYSICAL_DEVICE_TYPE_VIRTUAL_GPU The device is typically a virtual node in a virtualization environment.

  • VK_PHYSICAL_DEVICE_TYPE_CPU The device is typically running on the same processors as the host.

The physical device type is advertised for informational purposes only, and does not directly affect the operation of the system. However, the device type may correlate with other advertised properties or capabilities of the system, such as how many memory heaps there are.
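
As an informative example of using these properties, an application might prefer a discrete GPU when one is present and otherwise fall back to the first enumerated device; this policy is an application choice, not a requirement of this Specification:

#include <vulkan/vulkan.h>

/* Sketch: pick a discrete GPU if available, otherwise the first device. */
static VkPhysicalDevice pick_physical_device(const VkPhysicalDevice *devices,
                                             uint32_t count)
{
    for (uint32_t i = 0; i < count; ++i) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(devices[i], &props);
        if (props.deviceType == VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU)
            return devices[i];
    }
    return count > 0 ? devices[0] : VK_NULL_HANDLE;
}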

To query general properties of physical devices once enumerated, call:

void vkGetPhysicalDeviceProperties2KHR(
    VkPhysicalDevice                            physicalDevice,
    VkPhysicalDeviceProperties2KHR*             pProperties);
  • physicalDevice is the handle to the physical device whose properties will be queried.

  • pProperties points to an instance of the VkPhysicalDeviceProperties2KHR structure that will be filled with the returned information.

Each structure in pProperties and its pNext chain contains members corresponding to properties or implementation-dependent limits. vkGetPhysicalDeviceProperties2KHR fills in each member with the value of that property or limit.

Valid Usage (Implicit)
  • physicalDevice must be a valid VkPhysicalDevice handle

  • pProperties must be a pointer to a VkPhysicalDeviceProperties2KHR structure

The VkPhysicalDeviceProperties2KHR structure is defined as:

typedef struct VkPhysicalDeviceProperties2KHR {
    VkStructureType               sType;
    void*                         pNext;
    VkPhysicalDeviceProperties    properties;
} VkPhysicalDeviceProperties2KHR;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • properties is a structure of type VkPhysicalDeviceProperties describing the properties of the physical device. This structure is written with the same values as if it were written by vkGetPhysicalDeviceProperties.

The pNext chain of this structure is used to extend the structure with properties defined by extensions.

To query the UUID and LUID of a device, add VkPhysicalDeviceIDPropertiesKHX to the pNext chain of the VkPhysicalDeviceProperties2KHR structure. The VkPhysicalDeviceIDPropertiesKHX structure is defined as:

typedef struct VkPhysicalDeviceIDPropertiesKHX {
    VkStructureType    sType;
    void*              pNext;
    uint8_t            deviceUUID[VK_UUID_SIZE];
    uint8_t            driverUUID[VK_UUID_SIZE];
    uint8_t            deviceLUID[VK_LUID_SIZE_KHX];
    VkBool32           deviceLUIDValid;
} VkPhysicalDeviceIDPropertiesKHX;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • deviceUUID is an array of size VK_UUID_SIZE, containing 8-bit values that represent a universally unique identifier for the device.

  • driverUUID is an array of size VK_UUID_SIZE, containing 8-bit values that represent a universally unique identifier for the driver build in use by the device.

  • deviceLUID is an array of size VK_LUID_SIZE_KHX, containing 8-bit values that represent a locally unique identifier for the device.

  • deviceLUIDValid is a boolean value that will be VK_TRUE if deviceLUID contains a valid LUID, and VK_FALSE if it does not.

deviceUUID must be immutable for a given device across instances, processes, driver APIs, and system reboots.

driverUUID must be identical in all driver components that support sharing external objects between each other. Applications can compare this value across instance and process boundaries, and can make similar queries in external APIs to determine whether they are capable of sharing memory objects and resources using them with the device.

If deviceLUIDValid is VK_FALSE, the contents of deviceLUID are undefined. If deviceLUIDValid is VK_TRUE and Vulkan is running on the Windows operating system, the contents of deviceLUID can be cast to an LUID object and must be equal to the locally unique identifier of an IDXGIAdapter1 object that corresponds to physicalDevice.

Note

Although they have identical descriptions, VkPhysicalDeviceIDPropertiesKHX::deviceUUID may differ from VkPhysicalDeviceProperties::pipelineCacheUUID. The former is intended to identify and correlate devices across API and driver boundaries, while the latter is used to identify a compatible device and driver combination to use when serializing and de-serializing pipeline state.
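
As an informative sketch, the identifiers above are queried by chaining VkPhysicalDeviceIDPropertiesKHX into the pNext chain of VkPhysicalDeviceProperties2KHR. It is assumed here that the extensions providing these structures are enabled, and that vkGetPhysicalDeviceProperties2KHR has been obtained via vkGetInstanceProcAddr; it is called directly below only for brevity:

#include <vulkan/vulkan.h>

/* Sketch: query deviceUUID, driverUUID and deviceLUID through the pNext
 * chain of the extended properties query. */
static void query_device_ids(VkPhysicalDevice physicalDevice,
                             VkPhysicalDeviceIDPropertiesKHX *pIdProperties)
{
    VkPhysicalDeviceProperties2KHR properties2;

    pIdProperties->sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_ID_PROPERTIES_KHX;
    pIdProperties->pNext = NULL;

    properties2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2_KHR;
    properties2.pNext = pIdProperties;

    vkGetPhysicalDeviceProperties2KHR(physicalDevice, &properties2);
}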

To query properties of queues available on a physical device, call:

void vkGetPhysicalDeviceQueueFamilyProperties(
    VkPhysicalDevice                            physicalDevice,
    uint32_t*                                   pQueueFamilyPropertyCount,
    VkQueueFamilyProperties*                    pQueueFamilyProperties);
  • physicalDevice is the handle to the physical device whose properties will be queried.

  • pQueueFamilyPropertyCount is a pointer to an integer related to the number of queue families available or queried, as described below.

  • pQueueFamilyProperties is either NULL or a pointer to an array of VkQueueFamilyProperties structures.

If pQueueFamilyProperties is NULL, then the number of queue families available is returned in pQueueFamilyPropertyCount. Otherwise, pQueueFamilyPropertyCount must point to a variable set by the user to the number of elements in the pQueueFamilyProperties array, and on return the variable is overwritten with the number of structures actually written to pQueueFamilyProperties. If pQueueFamilyPropertyCount is less than the number of queue families available, at most pQueueFamilyPropertyCount structures will be written.

Valid Usage (Implicit)
  • physicalDevice must be a valid VkPhysicalDevice handle

  • pQueueFamilyPropertyCount must be a pointer to a uint32_t value

  • If the value referenced by pQueueFamilyPropertyCount is not 0, and pQueueFamilyProperties is not NULL, pQueueFamilyProperties must be a pointer to an array of pQueueFamilyPropertyCount VkQueueFamilyProperties structures

The VkQueueFamilyProperties structure is defined as:

typedef struct VkQueueFamilyProperties {
    VkQueueFlags    queueFlags;
    uint32_t        queueCount;
    uint32_t        timestampValidBits;
    VkExtent3D      minImageTransferGranularity;
} VkQueueFamilyProperties;
  • queueFlags contains flags indicating the capabilities of the queues in this queue family.

  • queueCount is the unsigned integer count of queues in this queue family.

  • timestampValidBits is the unsigned integer count of meaningful bits in the timestamps written via vkCmdWriteTimestamp. The valid range for the count is 36..64 bits, or a value of 0, indicating no support for timestamps. Bits outside the valid range are guaranteed to be zeros.

  • minImageTransferGranularity is the minimum granularity supported for image transfer operations on the queues in this queue family.

The bits specified in queueFlags are:

typedef enum VkQueueFlagBits {
    VK_QUEUE_GRAPHICS_BIT = 0x00000001,
    VK_QUEUE_COMPUTE_BIT = 0x00000002,
    VK_QUEUE_TRANSFER_BIT = 0x00000004,
    VK_QUEUE_SPARSE_BINDING_BIT = 0x00000008,
} VkQueueFlagBits;
  • if VK_QUEUE_GRAPHICS_BIT is set, then the queues in this queue family support graphics operations.

  • if VK_QUEUE_COMPUTE_BIT is set, then the queues in this queue family support compute operations.

  • if VK_QUEUE_TRANSFER_BIT is set, then the queues in this queue family support transfer operations.

  • if VK_QUEUE_SPARSE_BINDING_BIT is set, then the queues in this queue family support sparse memory management operations (see Sparse Resources). If any of the sparse resource features are enabled, then at least one queue family must support this bit.

If an implementation exposes any queue family that supports graphics operations, at least one queue family of at least one physical device exposed by the implementation must support both graphics and compute operations.

Note

All commands that are allowed on a queue that supports transfer operations are also allowed on a queue that supports either graphics or compute operations. Thus, if the capabilities of a queue family include VK_QUEUE_GRAPHICS_BIT or VK_QUEUE_COMPUTE_BIT, then reporting the VK_QUEUE_TRANSFER_BIT capability separately for that queue family is optional.

For further details see Queues.
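
As an informative sketch, an application will typically scan the returned queue family properties for a family whose queueFlags include the capabilities it needs, for example graphics; the sentinel value used below is hypothetical:

#include <vulkan/vulkan.h>

#define EXAMPLE_FAMILY_NOT_FOUND (~0u)   /* hypothetical sentinel */

/* Sketch: return the index of the first queue family that advertises
 * VK_QUEUE_GRAPHICS_BIT, or the sentinel if none does. */
static uint32_t find_graphics_queue_family(const VkQueueFamilyProperties *pFamilies,
                                           uint32_t familyCount)
{
    for (uint32_t i = 0; i < familyCount; ++i) {
        if (pFamilies[i].queueFlags & VK_QUEUE_GRAPHICS_BIT)
            return i;
    }
    return EXAMPLE_FAMILY_NOT_FOUND;
}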

The value returned in minImageTransferGranularity has a unit of compressed texel blocks for images having a block-compressed format, and a unit of texels otherwise.

Possible values of minImageTransferGranularity are:

  • (0,0,0) which indicates that only whole mip levels must be transferred using the image transfer operations on the corresponding queues. In this case, the following restrictions apply to all offset and extent parameters of image transfer operations:

    • The x, y, and z members of a VkOffset3D parameter must always be zero.

    • The width, height, and depth members of a VkExtent3D parameter must always match the width, height, and depth of the image subresource corresponding to the parameter, respectively.

  • (Ax, Ay, Az) where Ax, Ay, and Az are all integer powers of two. In this case the following restrictions apply to all image transfer operations:

    • x, y, and z of a VkOffset3D parameter must be integer multiples of Ax, Ay, and Az, respectively.

    • width of a VkExtent3D parameter must be an integer multiple of Ax, or else x + width must equal the width of the image subresource corresponding to the parameter.

    • height of a VkExtent3D parameter must be an integer multiple of Ay, or else y + height must equal the height of the image subresource corresponding to the parameter.

    • depth of a VkExtent3D parameter must be an integer multiple of Az, or else z + depth must equal the depth of the image subresource corresponding to the parameter.

    • If the format of the image corresponding to the parameters is one of the block-compressed formats then for the purposes of the above calculations the granularity must be scaled up by the compressed texel block dimensions.

Queues supporting graphics and/or compute operations must report (1,1,1) in minImageTransferGranularity, meaning that there are no additional restrictions on the granularity of image transfer operations for these queues. Other queues supporting image transfer operations are only required to support whole mip level transfers, thus minImageTransferGranularity for queues belonging to such queue families may be (0,0,0).

The Device Memory section describes memory properties queried from the physical device.

For physical device feature queries see the Features chapter.

To query properties of queues available on a physical device, call:

void vkGetPhysicalDeviceQueueFamilyProperties2KHR(
    VkPhysicalDevice                            physicalDevice,
    uint32_t*                                   pQueueFamilyPropertyCount,
    VkQueueFamilyProperties2KHR*                pQueueFamilyProperties);
  • physicalDevice is the handle to the physical device whose properties will be queried.

  • pQueueFamilyPropertyCount is a pointer to an integer related to the number of queue families available or queried, as described in vkGetPhysicalDeviceQueueFamilyProperties.

  • pQueueFamilyProperties is either NULL or a pointer to an array of VkQueueFamilyProperties2KHR structures.

vkGetPhysicalDeviceQueueFamilyProperties2KHR behaves similarly to vkGetPhysicalDeviceQueueFamilyProperties, with the ability to return extended information in a pNext chain of output structures.

Valid Usage (Implicit)
  • physicalDevice must be a valid VkPhysicalDevice handle

  • pQueueFamilyPropertyCount must be a pointer to a uint32_t value

  • If the value referenced by pQueueFamilyPropertyCount is not 0, and pQueueFamilyProperties is not NULL, pQueueFamilyProperties must be a pointer to an array of pQueueFamilyPropertyCount VkQueueFamilyProperties2KHR structures

The VkQueueFamilyProperties2KHR structure is defined as:

typedef struct VkQueueFamilyProperties2KHR {
    VkStructureType            sType;
    void*                      pNext;
    VkQueueFamilyProperties    queueFamilyProperties;
} VkQueueFamilyProperties2KHR;

4.2. Devices

Device objects represent logical connections to physical devices. Each device exposes a number of queue families each having one or more queues. All queues in a queue family support the same operations.

As described in Physical Devices, a Vulkan application will first query for all physical devices in a system. Each physical device can then be queried for its capabilities, including its queue and queue family properties. Once an acceptable physical device is identified, an application will create a corresponding logical device. An application must create a separate logical device for each physical device it will use. The created logical device is then the primary interface to the physical device.

How to enumerate the physical devices in a system and query those physical devices for their queue family properties is described in the Physical Device Enumeration section above.

A single logical device can also be created from multiple physical devices, if those physical devices belong to the same device group. A device group is a set of physical devices that support accessing each other’s memory and recording a single command buffer that can be executed on all the physical devices. Device groups are enumerated by calling vkEnumeratePhysicalDeviceGroupsKHX, and a logical device is created from a subset of the physical devices in a device group by passing the physical devices through VkDeviceGroupDeviceCreateInfoKHX.

To retrieve a list of the device groups present in the system, call:

VkResult vkEnumeratePhysicalDeviceGroupsKHX(
    VkInstance                                  instance,
    uint32_t*                                   pPhysicalDeviceGroupCount,
    VkPhysicalDeviceGroupPropertiesKHX*         pPhysicalDeviceGroupProperties);
  • instance is a handle to a Vulkan instance previously created with vkCreateInstance.

  • pPhysicalDeviceGroupCount is a pointer to an integer related to the number of device groups available or queried, as described below.

  • pPhysicalDeviceGroupProperties is either NULL or a pointer to an array of VkPhysicalDeviceGroupPropertiesKHX structures.

If pPhysicalDeviceGroupProperties is NULL, then the number of device groups available is returned in pPhysicalDeviceGroupCount. Otherwise, pPhysicalDeviceGroupCount must point to a variable set by the user to the number of elements in the pPhysicalDeviceGroupProperties array, and on return the variable is overwritten with the number of structures actually written to pPhysicalDeviceGroupProperties. If pPhysicalDeviceGroupCount is less than the number of device groups available, at most pPhysicalDeviceGroupCount structures will be written. If pPhysicalDeviceGroupCount is smaller than the number of device groups available, VK_INCOMPLETE will be returned instead of VK_SUCCESS, to indicate that not all the available device groups were returned.

Every physical device must be in exactly one device group.

Valid Usage (Implicit)
  • instance must be a valid VkInstance handle

  • pPhysicalDeviceGroupCount must be a pointer to a uint32_t value

  • If the value referenced by pPhysicalDeviceGroupCount is not 0, and pPhysicalDeviceGroupProperties is not NULL, pPhysicalDeviceGroupProperties must be a pointer to an array of pPhysicalDeviceGroupCount VkPhysicalDeviceGroupPropertiesKHX structures

Return Codes
Success
  • VK_SUCCESS

  • VK_INCOMPLETE

Failure
  • VK_ERROR_OUT_OF_HOST_MEMORY

  • VK_ERROR_OUT_OF_DEVICE_MEMORY

  • VK_ERROR_INITIALIZATION_FAILED

The VkPhysicalDeviceGroupPropertiesKHX structure is defined as:

typedef struct VkPhysicalDeviceGroupPropertiesKHX {
    VkStructureType     sType;
    void*               pNext;
    uint32_t            physicalDeviceCount;
    VkPhysicalDevice    physicalDevices[VK_MAX_DEVICE_GROUP_SIZE_KHX];
    VkBool32            subsetAllocation;
} VkPhysicalDeviceGroupPropertiesKHX;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • physicalDeviceCount is the number of physical devices in the group.

  • physicalDevices is an array of physical device handles representing all physical devices in the group. The first physicalDeviceCount elements of the array will be valid.

  • subsetAllocation indicates whether logical devices created from the group support allocating device memory on a subset of devices, via the deviceMask member of the VkMemoryAllocateFlagsInfoKHX structure. If this is VK_FALSE, then all device memory allocations are made across all physical devices in the group. If physicalDeviceCount is 1, then subsetAllocation must be VK_FALSE.

4.2.1. Device Creation

Logical devices are represented by VkDevice handles:

VK_DEFINE_HANDLE(VkDevice)

A logical device is created as a connection to a physical device. To create a logical device, call:

VkResult vkCreateDevice(
    VkPhysicalDevice                            physicalDevice,
    const VkDeviceCreateInfo*                   pCreateInfo,
    const VkAllocationCallbacks*                pAllocator,
    VkDevice*                                   pDevice);
  • physicalDevice must be one of the device handles returned from a call to vkEnumeratePhysicalDevices (see Physical Device Enumeration).

  • pCreateInfo is a pointer to a VkDeviceCreateInfo structure containing information about how to create the device.

  • pAllocator controls host memory allocation as described in the Memory Allocation chapter.

  • pDevice points to a handle in which the created VkDevice is returned.

vkCreateDevice verifies that the requested extensions are supported (e.g. in the implementation or in any enabled instance layer) and if any requested extension is not supported, vkCreateDevice must return VK_ERROR_EXTENSION_NOT_PRESENT. After verifying and enabling the extensions the VkDevice object is created and returned to the application. If a requested extension is only supported by a layer, both the layer and the extension need to be specified at vkCreateInstance time for the creation to succeed.

Multiple logical devices can be created from the same physical device. Logical device creation may fail due to lack of device-specific resources (in addition to the other errors). If that occurs, vkCreateDevice will return VK_ERROR_TOO_MANY_OBJECTS.

Valid Usage (Implicit)
  • physicalDevice must be a valid VkPhysicalDevice handle

  • pCreateInfo must be a pointer to a valid VkDeviceCreateInfo structure

  • If pAllocator is not NULL, pAllocator must be a pointer to a valid VkAllocationCallbacks structure

  • pDevice must be a pointer to a VkDevice handle

Return Codes
Success
  • VK_SUCCESS

Failure
  • VK_ERROR_OUT_OF_HOST_MEMORY

  • VK_ERROR_OUT_OF_DEVICE_MEMORY

  • VK_ERROR_INITIALIZATION_FAILED

  • VK_ERROR_EXTENSION_NOT_PRESENT

  • VK_ERROR_FEATURE_NOT_PRESENT

  • VK_ERROR_TOO_MANY_OBJECTS

  • VK_ERROR_DEVICE_LOST

The VkDeviceCreateInfo structure is defined as:

typedef struct VkDeviceCreateInfo {
    VkStructureType                    sType;
    const void*                        pNext;
    VkDeviceCreateFlags                flags;
    uint32_t                           queueCreateInfoCount;
    const VkDeviceQueueCreateInfo*     pQueueCreateInfos;
    uint32_t                           enabledLayerCount;
    const char* const*                 ppEnabledLayerNames;
    uint32_t                           enabledExtensionCount;
    const char* const*                 ppEnabledExtensionNames;
    const VkPhysicalDeviceFeatures*    pEnabledFeatures;
} VkDeviceCreateInfo;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • flags is reserved for future use.

  • queueCreateInfoCount is the unsigned integer size of the pQueueCreateInfos array. Refer to the Queue Creation section below for further details.

  • pQueueCreateInfos is a pointer to an array of VkDeviceQueueCreateInfo structures describing the queues that are requested to be created along with the logical device. Refer to the Queue Creation section below for further details.

  • enabledLayerCount is deprecated and ignored.

  • ppEnabledLayerNames is deprecated and ignored. See Device Layer Deprecation.

  • enabledExtensionCount is the number of device extensions to enable.

  • ppEnabledExtensionNames is a pointer to an array of enabledExtensionCount null-terminated UTF-8 strings containing the names of extensions to enable for the created device. See the Extensions section for further details.

  • pEnabledFeatures is NULL or a pointer to a VkPhysicalDeviceFeatures structure that contains boolean indicators of all the features to be enabled. Refer to the Features section for further details.

Valid Usage
  • The queueFamilyIndex member of any given element of pQueueCreateInfos must be unique within pQueueCreateInfos

  • If the pNext chain includes a VkPhysicalDeviceFeatures2KHR structure, then pEnabledFeatures must be NULL

  • ppEnabledExtensionNames must not contain both VK_KHR_maintenance1 and VK_AMD_negative_viewport_height

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO

  • Each pNext member of any structure (including this one) in the pNext chain must be either NULL or a pointer to a valid instance of VkPhysicalDeviceFeatures2KHR, VkPhysicalDeviceMultiviewFeaturesKHX, or VkDeviceGroupDeviceCreateInfoKHX

  • Each sType member in the pNext chain must be unique

  • flags must be 0

  • pQueueCreateInfos must be a pointer to an array of queueCreateInfoCount valid VkDeviceQueueCreateInfo structures

  • If enabledLayerCount is not 0, ppEnabledLayerNames must be a pointer to an array of enabledLayerCount null-terminated strings

  • If enabledExtensionCount is not 0, ppEnabledExtensionNames must be a pointer to an array of enabledExtensionCount null-terminated strings

  • If pEnabledFeatures is not NULL, pEnabledFeatures must be a pointer to a valid VkPhysicalDeviceFeatures structure

  • queueCreateInfoCount must be greater than 0
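
For illustration only, a minimal logical device exposing a single queue from a previously selected queue family might be created as follows; the queue family index is assumed to have been chosen as described in the Queue Creation section below, and no layers, extensions, or optional features are enabled:

#include <vulkan/vulkan.h>

static VkResult create_example_device(VkPhysicalDevice physicalDevice,
                                      uint32_t queueFamilyIndex,
                                      VkDevice *pDevice)
{
    const float queuePriority = 1.0f;

    VkDeviceQueueCreateInfo queueInfo = {
        .sType            = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO,
        .pNext            = NULL,
        .flags            = 0,
        .queueFamilyIndex = queueFamilyIndex,
        .queueCount       = 1,
        .pQueuePriorities = &queuePriority,
    };

    VkDeviceCreateInfo createInfo = {
        .sType                   = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
        .pNext                   = NULL,
        .flags                   = 0,
        .queueCreateInfoCount    = 1,
        .pQueueCreateInfos       = &queueInfo,
        .enabledLayerCount       = 0,       /* deprecated and ignored */
        .ppEnabledLayerNames     = NULL,    /* deprecated and ignored */
        .enabledExtensionCount   = 0,
        .ppEnabledExtensionNames = NULL,
        .pEnabledFeatures        = NULL,    /* no optional features enabled */
    };

    return vkCreateDevice(physicalDevice, &createInfo, NULL, pDevice);
}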

A logical device can be created that connects to one or more physical devices by including a VkDeviceGroupDeviceCreateInfoKHX structure in the pNext chain of VkDeviceCreateInfo. The VkDeviceGroupDeviceCreateInfoKHX structure is defined as:

typedef struct VkDeviceGroupDeviceCreateInfoKHX {
    VkStructureType            sType;
    const void*                pNext;
    uint32_t                   physicalDeviceCount;
    const VkPhysicalDevice*    pPhysicalDevices;
} VkDeviceGroupDeviceCreateInfoKHX;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • physicalDeviceCount is the number of elements in the pPhysicalDevices array.

  • pPhysicalDevices is an array of physical device handles belonging to the same device group.

The elements of the pPhysicalDevices array are an ordered list of the physical devices that the logical device represents. These must be a subset of a single device group, and need not be in the same order as they were enumerated. The order of the physical devices in the pPhysicalDevices array determines the device index of each physical device, with element i being assigned a device index of i. Certain commands and structures refer to one or more physical devices by using device indices or device masks formed using device indices.

A logical device created without using VkDeviceGroupDeviceCreateInfoKHX, or with physicalDeviceCount equal to zero, is equivalent to a physicalDeviceCount of one and pPhysicalDevices pointing to the physicalDevice parameter to vkCreateDevice. In particular, the device index of that physical device is zero.

Valid Usage
  • Each element of pPhysicalDevices must be unique

  • All elements of pPhysicalDevices must be in the same device group as enumerated by vkEnumeratePhysicalDeviceGroupsKHX

  • If physicalDeviceCount is not 0, the physicalDevice parameter of vkCreateDevice must be an element of pPhysicalDevices.

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_DEVICE_GROUP_DEVICE_CREATE_INFO_KHX

  • If physicalDeviceCount is not 0, pPhysicalDevices must be a pointer to an array of physicalDeviceCount valid VkPhysicalDevice handles

4.2.2. Device Use

The following is a high-level list of VkDevice uses along with references on where to find more information:

4.2.3. Lost Device

A logical device may become lost because of hardware errors, execution timeouts, power management events and/or platform-specific events. This may cause pending and future command execution to fail and cause hardware resources to be corrupted. When this happens, certain commands will return VK_ERROR_DEVICE_LOST (see Error Codes for a list of such commands). After any such event, the logical device is considered lost. It is not possible to reset the logical device to a non-lost state; however, the lost state is specific to a logical device (VkDevice), and the corresponding physical device (VkPhysicalDevice) may be otherwise unaffected. In some cases, the physical device may also be lost, and attempting to create a new logical device will fail, returning VK_ERROR_DEVICE_LOST. This is usually indicative of a problem with the underlying hardware, or its connection to the host. If the physical device has not been lost, and a new logical device is successfully created from that physical device, it must be in the non-lost state.

Note

Whilst logical device loss may be recoverable, in the case of physical device loss, it is unlikely that an application will be able to recover unless additional, unaffected physical devices exist on the system. The error is largely informational and intended only to inform the user that their hardware has probably developed a fault or become physically disconnected, and should be investigated further. In many cases, physical device loss may cause other more serious issues such as the operating system crashing; in which case it may not be reported via the Vulkan API.

Note

Undefined behavior caused by an application error may cause a device to become lost. However, such undefined behavior may also cause unrecoverable damage to the process, and it is then not guaranteed that the API objects, including the VkPhysicalDevice or the VkInstance are still valid or that the error is recoverable.

When a device is lost, its child objects are not implicitly destroyed and their handles are still valid. Those objects must still be destroyed before their parents or the device can be destroyed (see the Object Lifetime section). The host address space corresponding to device memory mapped using vkMapMemory is still valid, and host memory accesses to these mapped regions are still valid, but the contents are undefined. It is still legal to call any API command on the device and child objects.

Once a device is lost, command execution may fail, and commands that return a VkResult may return VK_ERROR_DEVICE_LOST. Commands that do not allow run-time errors must still operate correctly for valid usage and, if applicable, return valid data.

Commands that wait indefinitely for device execution (namely vkDeviceWaitIdle, vkQueueWaitIdle, vkWaitForFences or vkAcquireNextImageKHR with a maximum timeout, and vkGetQueryPoolResults with the VK_QUERY_RESULT_WAIT_BIT bit set in flags) must return in finite time even in the case of a lost device, and return either VK_SUCCESS or VK_ERROR_DEVICE_LOST. For any command that may return VK_ERROR_DEVICE_LOST, for the purpose of determining whether a command buffer is in the pending state, or whether resources are considered in-use by the device, a return value of VK_ERROR_DEVICE_LOST is equivalent to VK_SUCCESS.

4.2.4. Device Destruction

To destroy a device, call:

void vkDestroyDevice(
    VkDevice                                    device,
    const VkAllocationCallbacks*                pAllocator);
  • device is the logical device to destroy.

  • pAllocator controls host memory allocation as described in the Memory Allocation chapter.

To ensure that no work is active on the device, vkDeviceWaitIdle can be used to gate the destruction of the device. Prior to destroying a device, an application is responsible for destroying/freeing any Vulkan objects that were created using that device as the first parameter of the corresponding vkCreate* or vkAllocate* command.

Note

The lifetime of each of these objects is bound by the lifetime of the VkDevice object. Therefore, to avoid resource leaks, it is critical that an application explicitly free all of these resources prior to calling vkDestroyDevice.

Valid Usage
  • All child objects created on device must have been destroyed prior to destroying device

  • If VkAllocationCallbacks were provided when device was created, a compatible set of callbacks must be provided here

  • If no VkAllocationCallbacks were provided when device was created, pAllocator must be NULL

Valid Usage (Implicit)
  • If device is not NULL, device must be a valid VkDevice handle

  • If pAllocator is not NULL, pAllocator must be a pointer to a valid VkAllocationCallbacks structure

Host Synchronization
  • Host access to device must be externally synchronized

4.3. Queues

4.3.1. Queue Family Properties

As discussed in the Physical Device Enumeration section above, the vkGetPhysicalDeviceQueueFamilyProperties command is used to retrieve details about the queue families and queues supported by a device.

Each index in the pQueueFamilyProperties array returned by vkGetPhysicalDeviceQueueFamilyProperties describes a unique queue family on that physical device. These indices are used when creating queues, and they correspond directly with the queueFamilyIndex that is passed to the vkCreateDevice command via the VkDeviceQueueCreateInfo structure as described in the Queue Creation section below.

Grouping of queue families within a physical device is implementation-dependent.

Note

The general expectation is that a physical device groups all queues of matching capabilities into a single family. However, while implementations should do this, it is possible that a physical device may return two separate queue families with the same capabilities.

Once an application has identified a physical device with the queue(s) that it desires to use, it will create those queues in conjunction with a logical device. This is described in the following section.
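
For example, an application might enumerate the queue families of a physical device and select one that supports graphics operations, as in this illustrative sketch (assumes <vulkan/vulkan.h>, <stdint.h> and <stdlib.h>; the selection criterion is only an example):

uint32_t familyCount = 0;
vkGetPhysicalDeviceQueueFamilyProperties(physicalDevice, &familyCount, NULL);

VkQueueFamilyProperties* families =
    malloc(familyCount * sizeof(VkQueueFamilyProperties));
vkGetPhysicalDeviceQueueFamilyProperties(physicalDevice, &familyCount, families);

uint32_t graphicsFamily = UINT32_MAX;            /* illustrative "not found" sentinel */
for (uint32_t i = 0; i < familyCount; ++i) {
    if (families[i].queueFlags & VK_QUEUE_GRAPHICS_BIT) {
        graphicsFamily = i;                      /* queueFamilyIndex used when creating queues */
        break;
    }
}
free(families);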

4.3.2. Queue Creation

Creating a logical device also creates the queues associated with that device. The queues to create are described by a set of VkDeviceQueueCreateInfo structures that are passed to vkCreateDevice in pQueueCreateInfos.

Queues are represented by VkQueue handles:

VK_DEFINE_HANDLE(VkQueue)

The VkDeviceQueueCreateInfo structure is defined as:

typedef struct VkDeviceQueueCreateInfo {
    VkStructureType             sType;
    const void*                 pNext;
    VkDeviceQueueCreateFlags    flags;
    uint32_t                    queueFamilyIndex;
    uint32_t                    queueCount;
    const float*                pQueuePriorities;
} VkDeviceQueueCreateInfo;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • flags is reserved for future use.

  • queueFamilyIndex is an unsigned integer indicating the index of the queue family to create on this device. This index corresponds to the index of an element of the pQueueFamilyProperties array that was returned by vkGetPhysicalDeviceQueueFamilyProperties.

  • queueCount is an unsigned integer specifying the number of queues to create in the queue family indicated by queueFamilyIndex.

  • pQueuePriorities is an array of queueCount normalized floating point values, specifying priorities of work that will be submitted to each created queue. See Queue Priority for more information.

Valid Usage
  • queueFamilyIndex must be less than pQueueFamilyPropertyCount returned by vkGetPhysicalDeviceQueueFamilyProperties

  • queueCount must be less than or equal to the queueCount member of the VkQueueFamilyProperties structure returned by vkGetPhysicalDeviceQueueFamilyProperties in pQueueFamilyProperties[queueFamilyIndex]

  • Each element of pQueuePriorities must be between 0.0 and 1.0 inclusive

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO

  • pNext must be NULL

  • flags must be 0

  • pQueuePriorities must be a pointer to an array of queueCount float values

  • queueCount must be greater than 0
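
As an illustrative sketch, the structure below requests two queues from a hypothetical graphicsFamily index (obtained by enumerating queue families) with example priorities; it would then be referenced from VkDeviceCreateInfo::pQueueCreateInfos when calling vkCreateDevice.

float priorities[2] = { 1.0f, 0.5f };            /* normalized queue priorities */

VkDeviceQueueCreateInfo queueInfo = {
    .sType            = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO,
    .pNext            = NULL,
    .flags            = 0,                       /* reserved for future use */
    .queueFamilyIndex = graphicsFamily,          /* hypothetical index from enumeration */
    .queueCount       = 2,
    .pQueuePriorities = priorities,
};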

To retrieve a handle to a VkQueue object, call:

void vkGetDeviceQueue(
    VkDevice                                    device,
    uint32_t                                    queueFamilyIndex,
    uint32_t                                    queueIndex,
    VkQueue*                                    pQueue);
  • device is the logical device that owns the queue.

  • queueFamilyIndex is the index of the queue family to which the queue belongs.

  • queueIndex is the index within this queue family of the queue to retrieve.

  • pQueue is a pointer to a VkQueue object that will be filled with the handle for the requested queue.

Valid Usage
  • queueFamilyIndex must be one of the queue family indices specified when device was created, via the VkDeviceQueueCreateInfo structure

  • queueIndex must be less than the number of queues created for the specified queue family index when device was created, via the queueCount member of the VkDeviceQueueCreateInfo structure

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • pQueue must be a pointer to a VkQueue handle
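
After the device has been created, the corresponding queue handles can be retrieved, for example (graphicsFamily is the hypothetical index used at device creation):

VkQueue graphicsQueue;
/* Retrieve the first queue (queueIndex 0) created for graphicsFamily. */
vkGetDeviceQueue(device, graphicsFamily, 0, &graphicsQueue);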

4.3.3. Queue Family Index

The queue family index is used in multiple places in Vulkan in order to tie operations to a specific family of queues.

When retrieving a handle to the queue via vkGetDeviceQueue, the queue family index is used to select which queue family to retrieve the VkQueue handle from as described in the previous section.

When creating a VkCommandPool object (see Command Pools), a queue family index is specified in the VkCommandPoolCreateInfo structure. Command buffers from this pool can only be submitted on queues corresponding to this queue family.

When creating VkImage (see Images) and VkBuffer (see Buffers) resources, a set of queue families is included in the VkImageCreateInfo and VkBufferCreateInfo structures to specify the queue families that can access the resource.

When inserting a VkBufferMemoryBarrier or VkImageMemoryBarrier (see Events) a source and destination queue family index is specified to allow the ownership of a buffer or image to be transferred from one queue family to another. See the Resource Sharing section for details.

4.3.4. Queue Priority

Each queue is assigned a priority, as set in the VkDeviceQueueCreateInfo structures when creating the device. The priority of each queue is a normalized floating point value between 0.0 and 1.0, which is then translated to a discrete priority level by the implementation. Higher values indicate a higher priority, with 0.0 being the lowest priority and 1.0 being the highest.

Within the same device, queues with higher priority may be allotted more processing time than queues with lower priority. The implementation makes no guarantees with regards to ordering or scheduling among queues with the same priority, other than the constraints defined by any explicit synchronization primitives. The implementation make no guarantees with regards to queues across different devices.

An implementation may allow a higher-priority queue to starve a lower-priority queue on the same VkDevice until the higher-priority queue has no further commands to execute. The relationship of queue priorities must not cause queues on one VkDevice to starve queues on another VkDevice.

No specific guarantees are made about higher priority queues receiving more processing time or better quality of service than lower priority queues.

4.3.5. Queue Submission

Work is submitted to a queue via queue submission commands such as vkQueueSubmit. Queue submission commands define a set of queue operations to be executed by the underlying physical device, including synchronization with semaphores and fences.

Submission commands take as parameters a target queue, zero or more batches of work, and an optional fence to signal upon completion. Each batch consists of three distinct parts:

  1. Zero or more semaphores to wait on before execution of the rest of the batch.

  2. Zero or more work items to execute.

    • If present, these describe a queue operation matching the work described.

  3. Zero or more semaphores to signal upon completion of the work items.

If a fence is present in a queue submission, it describes a fence signal operation.

All work described by a queue submission command must be submitted to the queue before the command returns.

Sparse Memory Binding

In Vulkan it is possible to sparsely bind memory to buffers and images as described in the Sparse Resource chapter. Sparse memory binding is a queue operation. A queue whose flags include the VK_QUEUE_SPARSE_BINDING_BIT must be able to support the mapping of a virtual address to a physical address on the device. This causes an update to the page table mappings on the device. This update must be synchronized on a queue to avoid corrupting page table mappings during execution of graphics commands. By binding the sparse memory resources on queues, all commands that are dependent on the updated bindings are synchronized to only execute after the binding is updated. See the Synchronization and Cache Control chapter for how this synchronization is accomplished.

4.3.6. Queue Destruction

Queues are created along with a logical device during vkCreateDevice. All queues associated with a logical device are destroyed when vkDestroyDevice is called on that device.

5. Command Buffers

Command buffers are objects used to record commands which can be subsequently submitted to a device queue for execution. There are two levels of command buffers - primary command buffers, which can execute secondary command buffers, and which are submitted to queues, and secondary command buffers, which can be executed by primary command buffers, and which are not directly submitted to queues.

Command buffers are represented by VkCommandBuffer handles:

VK_DEFINE_HANDLE(VkCommandBuffer)

Recorded commands include commands to bind pipelines and descriptor sets to the command buffer, commands to modify dynamic state, commands to draw (for graphics rendering), commands to dispatch (for compute), commands to execute secondary command buffers (for primary command buffers only), commands to copy buffers and images, and other commands.

Each command buffer manages state independently of other command buffers. There is no inheritance of state across primary and secondary command buffers, or between secondary command buffers. When a command buffer begins recording, all state in that command buffer is undefined. When secondary command buffer(s) are recorded to execute on a primary command buffer, the secondary command buffer inherits no state from the primary command buffer, and all state of the primary command buffer is undefined after an execute secondary command buffer command is recorded. There is one exception to this rule - if the primary command buffer is inside a render pass instance, then the render pass and subpass state is not disturbed by executing secondary command buffers. Whenever the state of a command buffer is undefined, the application must set all relevant state on the command buffer before any state dependent commands such as draws and dispatches are recorded, otherwise the behavior of executing that command buffer is undefined.

Unless otherwise specified, and without explicit synchronization, the various commands submitted to a queue via command buffers may execute in arbitrary order relative to each other, and/or concurrently. Also, the memory side-effects of those commands may not be directly visible to other commands without explicit memory dependencies. This is true within a command buffer, and across command buffers submitted to a given queue. See the synchronization chapter for information on implicit and explicit synchronization between commands.

5.1. Command Buffer Lifecycle

Each command buffer is always in one of the following states:

Initial

When a command buffer is first allocated, it is in the initial state. Some commands are able to reset a command buffer, or a set of command buffers, back to this state from any of the executable, recording, or invalid states. Command buffers in the initial state can only be moved to the recording state, or freed.

Recording

vkBeginCommandBuffer changes the state of a command buffer from the initial state to the recording state. Once a command buffer is in the recording state, vkCmd* commands can be used to record to the command buffer.

Executable

vkEndCommandBuffer ends the recording of a command buffer, and moves it from the recording state to the executable state. Executable command buffers can be submitted, reset, or recorded to another command buffer.

Pending

Queue submission of a command buffer changes the state of a command buffer from the executable state to the pending state. Whilst in the pending state, applications must not attempt to modify the command buffer in any way - the device may be processing the commands recorded to it. Once execution of a command buffer completes, the command buffer reverts back to the executable state. A synchronization command should be used to detect when this occurs.

Invalid

Some operations, such as modifying or deleting a resource that was used in a command recorded to a command buffer, will transition the state of a command buffer into the invalid state. Command buffers in the invalid state can only be reset, moved to the recording state, or freed.

Any given command that operates on a command buffer has its own requirements on what state a command buffer must be in, which are detailed in the valid usage constraints for that command.

Resetting a command buffer is an operation that discards any previously recorded commands and puts a command buffer in the initial state. Resetting occurs as a result of vkResetCommandBuffer or vkResetCommandPool, or as part of vkBeginCommandBuffer (which additionally puts the command buffer in the recording state).

Secondary command buffers can be recorded to a primary command buffer via vkCmdExecuteCommands. This partially ties the lifecycle of the two command buffers together - if the primary is submitted to a queue, both the primary and any secondaries recorded to it move to the pending state. Once execution of the primary completes, so does any secondary recorded within it, and once all executions of each command buffer complete, they move to the executable state. If a secondary moves to any other state whilst it is recorded to another command buffer, the primary moves to the invalid state. A primary moving to any other state does not affect the state of the secondary. Resetting or freeing a primary command buffer removes the linkage to any secondary command buffers that were recorded to it.

5.2. Command Pools

Command pools are opaque objects that command buffer memory is allocated from, and which allow the implementation to amortize the cost of resource creation across multiple command buffers. Command pools are externally synchronized, meaning that a command pool must not be used concurrently in multiple threads. That includes use via recording commands on any command buffers allocated from the pool, as well as operations that allocate, free, and reset command buffers or the pool itself.

Command pools are represented by VkCommandPool handles:

VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkCommandPool)

To create a command pool, call:

VkResult vkCreateCommandPool(
    VkDevice                                    device,
    const VkCommandPoolCreateInfo*              pCreateInfo,
    const VkAllocationCallbacks*                pAllocator,
    VkCommandPool*                              pCommandPool);
  • device is the logical device that creates the command pool.

  • pCreateInfo contains information used to create the command pool.

  • pAllocator controls host memory allocation as described in the Memory Allocation chapter.

  • pCommandPool points to a VkCommandPool handle in which the created pool is returned.

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • pCreateInfo must be a pointer to a valid VkCommandPoolCreateInfo structure

  • If pAllocator is not NULL, pAllocator must be a pointer to a valid VkAllocationCallbacks structure

  • pCommandPool must be a pointer to a VkCommandPool handle

Return Codes
Success
  • VK_SUCCESS

Failure
  • VK_ERROR_OUT_OF_HOST_MEMORY

  • VK_ERROR_OUT_OF_DEVICE_MEMORY

The VkCommandPoolCreateInfo structure is defined as:

typedef struct VkCommandPoolCreateInfo {
    VkStructureType             sType;
    const void*                 pNext;
    VkCommandPoolCreateFlags    flags;
    uint32_t                    queueFamilyIndex;
} VkCommandPoolCreateInfo;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • flags is a bitmask indicating usage behavior for the pool and command buffers allocated from it. Bits which can be set include:

    typedef enum VkCommandPoolCreateFlagBits {
        VK_COMMAND_POOL_CREATE_TRANSIENT_BIT = 0x00000001,
        VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT = 0x00000002,
    } VkCommandPoolCreateFlagBits;
    • VK_COMMAND_POOL_CREATE_TRANSIENT_BIT indicates that command buffers allocated from the pool will be short-lived, meaning that they will be reset or freed in a relatively short timeframe. This flag may be used by the implementation to control memory allocation behavior within the pool.

    • VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT allows any command buffer allocated from a pool to be individually reset to the initial state; either by calling vkResetCommandBuffer, or via the implicit reset when calling vkBeginCommandBuffer. If this flag is not set on a pool, then vkResetCommandBuffer must not be called for any command buffer allocated from that pool.

  • queueFamilyIndex designates a queue family as described in section Queue Family Properties. All command buffers allocated from this command pool must be submitted on queues from the same queue family.

Valid Usage
  • queueFamilyIndex must be the index of a queue family available in the calling command’s device parameter

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO

  • pNext must be NULL

  • flags must be a valid combination of VkCommandPoolCreateFlagBits values
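
An illustrative sketch of creating a command pool whose command buffers can be individually reset, for a hypothetical graphicsFamily index:

VkCommandPoolCreateInfo poolInfo = {
    .sType            = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO,
    .pNext            = NULL,
    .flags            = VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT,
    .queueFamilyIndex = graphicsFamily,          /* hypothetical queue family index */
};

VkCommandPool commandPool;
VkResult result = vkCreateCommandPool(device, &poolInfo, NULL, &commandPool);
if (result != VK_SUCCESS) {
    /* handle VK_ERROR_OUT_OF_HOST_MEMORY or VK_ERROR_OUT_OF_DEVICE_MEMORY */
}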

To trim a command pool, call:

void vkTrimCommandPoolKHR(
    VkDevice                                    device,
    VkCommandPool                               commandPool,
    VkCommandPoolTrimFlagsKHR                   flags);
  • device is the logical device that owns the command pool.

  • commandPool is the command pool to trim.

  • flags is reserved for future use.

Trimming a command pool recycles unused memory from the command pool back to the system. Command buffers allocated from the pool are not affected by the command.

Note

This command provides applications with some control over the internal memory allocations used by command pools.

Unused memory normally arises from command buffers that have been recorded and later reset, such that they are no longer using the memory. On reset, a command buffer can return memory to its command pool, but the only way to release memory from a command pool to the system is to call vkResetCommandPool, which cannot be executed while any command buffers allocated from that pool are still in use. Subsequent recording operations into command buffers will re-use this memory, but since total memory requirements fluctuate over time, unused memory can accumulate.

In this situation, trimming a command pool may be useful to return unused memory back to the system, returning the total outstanding memory allocated by the pool back to a more "average" value.

Implementations utilize many internal allocation strategies that make it impossible to guarantee that all unused memory is released back to the system. For instance, an implementation of a command pool may involve allocating memory in bulk from the system and sub-allocating from that memory. In such an implementation any live command buffer that holds a reference to a bulk allocation would prevent that allocation from being freed, even if only a small proportion of the bulk allocation is in use.

In most cases trimming will result in a reduction in allocated but unused memory, but it does not guarantee the "ideal" behavior.

Trimming may be an expensive operation, and should not be called frequently. Trimming should be treated as a way to relieve memory pressure after application-known points when there exists enough unused memory that the cost of trimming is "worth" it.

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • commandPool must be a valid VkCommandPool handle

  • flags must be 0

  • commandPool must have been created, allocated, or retrieved from device

Host Synchronization
  • Host access to commandPool must be externally synchronized
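
For example, an application that has just reset a number of long-lived command buffers might trim the pool afterwards to return the now-unused memory to the system (illustrative pattern; requires VK_KHR_maintenance1 to be enabled):

/* flags is reserved for future use and must be 0. */
vkTrimCommandPoolKHR(device, commandPool, 0);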

To reset a command pool, call:

VkResult vkResetCommandPool(
    VkDevice                                    device,
    VkCommandPool                               commandPool,
    VkCommandPoolResetFlags                     flags);
  • device is the logical device that owns the command pool.

  • commandPool is the command pool to reset.

  • flags contains additional flags controlling the behavior of the reset. Bits which can be set include:

    typedef enum VkCommandPoolResetFlagBits {
        VK_COMMAND_POOL_RESET_RELEASE_RESOURCES_BIT = 0x00000001,
    } VkCommandPoolResetFlagBits;

    If flags includes VK_COMMAND_POOL_RESET_RELEASE_RESOURCES_BIT, resetting a command pool recycles all of the resources from the command pool back to the system.

Resetting a command pool recycles all of the resources from all of the command buffers allocated from the command pool back to the command pool. All command buffers that have been allocated from the command pool are put in the initial state.

Any primary command buffer allocated from another VkCommandPool that is in the recording or executable state and has a secondary command buffer allocated from commandPool recorded into it, becomes invalid.

Valid Usage
  • All VkCommandBuffer objects allocated from commandPool must not be in the pending state

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • commandPool must be a valid VkCommandPool handle

  • flags must be a valid combination of VkCommandPoolResetFlagBits values

  • commandPool must have been created, allocated, or retrieved from device

Host Synchronization
  • Host access to commandPool must be externally synchronized

Return Codes
Success
  • VK_SUCCESS

Failure
  • VK_ERROR_OUT_OF_HOST_MEMORY

  • VK_ERROR_OUT_OF_DEVICE_MEMORY

To destroy a command pool, call:

void vkDestroyCommandPool(
    VkDevice                                    device,
    VkCommandPool                               commandPool,
    const VkAllocationCallbacks*                pAllocator);
  • device is the logical device that destroys the command pool.

  • commandPool is the handle of the command pool to destroy.

  • pAllocator controls host memory allocation as described in the Memory Allocation chapter.

When a pool is destroyed, all command buffers allocated from the pool are freed.

Any primary command buffer allocated from another VkCommandPool that is in the recording or executable state and has a secondary command buffer allocated from commandPool recorded into it, becomes invalid.

Valid Usage
  • All VkCommandBuffer objects allocated from commandPool must not be in the pending state.

  • If VkAllocationCallbacks were provided when commandPool was created, a compatible set of callbacks must be provided here

  • If no VkAllocationCallbacks were provided when commandPool was created, pAllocator must be NULL

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • If commandPool is not VK_NULL_HANDLE, commandPool must be a valid VkCommandPool handle

  • If pAllocator is not NULL, pAllocator must be a pointer to a valid VkAllocationCallbacks structure

  • If commandPool is a valid handle, it must have been created, allocated, or retrieved from device

Host Synchronization
  • Host access to commandPool must be externally synchronized

5.3. Command Buffer Allocation and Management

To allocate command buffers, call:

VkResult vkAllocateCommandBuffers(
    VkDevice                                    device,
    const VkCommandBufferAllocateInfo*          pAllocateInfo,
    VkCommandBuffer*                            pCommandBuffers);
  • device is the logical device that owns the command pool.

  • pAllocateInfo is a pointer to an instance of the VkCommandBufferAllocateInfo structure describing parameters of the allocation.

  • pCommandBuffers is a pointer to an array of VkCommandBuffer handles in which the resulting command buffer objects are returned. The array must be at least the length specified by the commandBufferCount member of pAllocateInfo. Each allocated command buffer begins in the initial state.

vkAllocateCommandBuffers can be used to create multiple command buffers. If the creation of any of those command buffers fails, the implementation must destroy all successfully created command buffer objects from this command, set all entries of the pCommandBuffers array to NULL and return the error.

When command buffers are first allocated, they are in the initial state.

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • pAllocateInfo must be a pointer to a valid VkCommandBufferAllocateInfo structure

  • pCommandBuffers must be a pointer to an array of pAllocateInfo::commandBufferCount VkCommandBuffer handles

Host Synchronization
  • Host access to pAllocateInfo::commandPool must be externally synchronized

Return Codes
Success
  • VK_SUCCESS

Failure
  • VK_ERROR_OUT_OF_HOST_MEMORY

  • VK_ERROR_OUT_OF_DEVICE_MEMORY

The VkCommandBufferAllocateInfo structure is defined as:

typedef struct VkCommandBufferAllocateInfo {
    VkStructureType         sType;
    const void*             pNext;
    VkCommandPool           commandPool;
    VkCommandBufferLevel    level;
    uint32_t                commandBufferCount;
} VkCommandBufferAllocateInfo;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • commandPool is the command pool from which the command buffers are allocated.

  • level determines whether the command buffers are primary or secondary command buffers. Possible values include:

    typedef enum VkCommandBufferLevel {
        VK_COMMAND_BUFFER_LEVEL_PRIMARY = 0,
        VK_COMMAND_BUFFER_LEVEL_SECONDARY = 1,
    } VkCommandBufferLevel;
  • commandBufferCount is the number of command buffers to allocate from the pool.

Valid Usage
  • commandBufferCount must be greater than 0

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO

  • pNext must be NULL

  • commandPool must be a valid VkCommandPool handle

  • level must be a valid VkCommandBufferLevel value
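
For instance, allocating three primary command buffers from a previously created pool might look like this illustrative sketch:

VkCommandBufferAllocateInfo allocInfo = {
    .sType              = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO,
    .pNext              = NULL,
    .commandPool        = commandPool,           /* hypothetical pool created earlier */
    .level              = VK_COMMAND_BUFFER_LEVEL_PRIMARY,
    .commandBufferCount = 3,
};

VkCommandBuffer commandBuffers[3];
VkResult result = vkAllocateCommandBuffers(device, &allocInfo, commandBuffers);
/* On failure, all entries of commandBuffers are set to NULL and the error is returned. */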

To reset command buffers, call:

VkResult vkResetCommandBuffer(
    VkCommandBuffer                             commandBuffer,
    VkCommandBufferResetFlags                   flags);
  • commandBuffer is the command buffer to reset. The command buffer can be in any state other than pending, and is moved into the initial state.

  • flags is a bitmask controlling the reset operation. Bits which can be set include:

    typedef enum VkCommandBufferResetFlagBits {
        VK_COMMAND_BUFFER_RESET_RELEASE_RESOURCES_BIT = 0x00000001,
    } VkCommandBufferResetFlagBits;

    If flags includes VK_COMMAND_BUFFER_RESET_RELEASE_RESOURCES_BIT, then most or all memory resources currently owned by the command buffer should be returned to the parent command pool. If this flag is not set, then the command buffer may hold onto memory resources and reuse them when recording commands. commandBuffer is moved to the initial state.

Any primary command buffer that is in the recording or executable state and has commandBuffer recorded into it, becomes invalid.

Valid Usage
  • commandBuffer must not be in the pending state

  • commandBuffer must have been allocated from a pool that was created with the VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT

Valid Usage (Implicit)
Host Synchronization
  • Host access to commandBuffer must be externally synchronized

Return Codes
Success
  • VK_SUCCESS

Failure
  • VK_ERROR_OUT_OF_HOST_MEMORY

  • VK_ERROR_OUT_OF_DEVICE_MEMORY

To free command buffers, call:

void vkFreeCommandBuffers(
    VkDevice                                    device,
    VkCommandPool                               commandPool,
    uint32_t                                    commandBufferCount,
    const VkCommandBuffer*                      pCommandBuffers);
  • device is the logical device that owns the command pool.

  • commandPool is the command pool from which the command buffers were allocated.

  • commandBufferCount is the length of the pCommandBuffers array.

  • pCommandBuffers is an array of handles of command buffers to free.

Any primary command buffer that is in the recording or executable state and has any element of pCommandBuffers recorded into it, becomes invalid.

Valid Usage
  • All elements of pCommandBuffers must not be in the pending state

  • pCommandBuffers must be a pointer to an array of commandBufferCount VkCommandBuffer handles, each element of which must either be a valid handle or NULL

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • commandPool must be a valid VkCommandPool handle

  • commandBufferCount must be greater than 0

  • commandPool must have been created, allocated, or retrieved from device

  • Each element of pCommandBuffers that is a valid handle must have been created, allocated, or retrieved from commandPool

Host Synchronization
  • Host access to commandPool must be externally synchronized

  • Host access to each member of pCommandBuffers must be externally synchronized

5.4. Command Buffer Recording

To begin recording a command buffer, call:

VkResult vkBeginCommandBuffer(
    VkCommandBuffer                             commandBuffer,
    const VkCommandBufferBeginInfo*             pBeginInfo);
  • commandBuffer is the handle of the command buffer which is to be put in the recording state.

  • pBeginInfo is an instance of the VkCommandBufferBeginInfo structure, which defines additional information about how the command buffer begins recording.

Valid Usage
  • commandBuffer must not be in the recording or pending state.

  • If commandBuffer was allocated from a VkCommandPool which did not have the VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT flag set, commandBuffer must be in the initial state.

  • If commandBuffer is a secondary command buffer, the pInheritanceInfo member of pBeginInfo must be a valid VkCommandBufferInheritanceInfo structure

  • If commandBuffer is a secondary command buffer and either the occlusionQueryEnable member of the pInheritanceInfo member of pBeginInfo is VK_FALSE, or the precise occlusion queries feature is not enabled, the queryFlags member of the pInheritanceInfo member of pBeginInfo must not contain VK_QUERY_CONTROL_PRECISE_BIT

Valid Usage (Implicit)
  • commandBuffer must be a valid VkCommandBuffer handle

  • pBeginInfo must be a pointer to a valid VkCommandBufferBeginInfo structure

Host Synchronization
  • Host access to commandBuffer must be externally synchronized

  • Host access to the VkCommandPool that commandBuffer was allocated from must be externally synchronized

Return Codes
Success
  • VK_SUCCESS

Failure
  • VK_ERROR_OUT_OF_HOST_MEMORY

  • VK_ERROR_OUT_OF_DEVICE_MEMORY

The VkCommandBufferBeginInfo structure is defined as:

typedef struct VkCommandBufferBeginInfo {
    VkStructureType                          sType;
    const void*                              pNext;
    VkCommandBufferUsageFlags                flags;
    const VkCommandBufferInheritanceInfo*    pInheritanceInfo;
} VkCommandBufferBeginInfo;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • flags is a bitmask indicating usage behavior for the command buffer. Bits which can be set include:

    typedef enum VkCommandBufferUsageFlagBits {
        VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT = 0x00000001,
        VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT = 0x00000002,
        VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT = 0x00000004,
    } VkCommandBufferUsageFlagBits;
    • VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT indicates that each recording of the command buffer will only be submitted once, and the command buffer will be reset and recorded again between each submission.

    • VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT indicates that a secondary command buffer is considered to be entirely inside a render pass. If this is a primary command buffer, then this bit is ignored.

    • Setting VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT allows the command buffer to be resubmitted to a queue while it is in the pending state, and recorded into multiple primary command buffers.

  • pInheritanceInfo is a pointer to a VkCommandBufferInheritanceInfo structure, which is used if commandBuffer is a secondary command buffer. If this is a primary command buffer, then this value is ignored.

Valid Usage
  • If flags contains VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT, the renderPass member of pInheritanceInfo must be a valid VkRenderPass

  • If flags contains VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT, the subpass member of pInheritanceInfo must be a valid subpass index within the renderPass member of pInheritanceInfo

  • If flags contains VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT, the framebuffer member of pInheritanceInfo must be either VK_NULL_HANDLE, or a valid VkFramebuffer that is compatible with the renderPass member of pInheritanceInfo

Valid Usage (Implicit)

If the command buffer is a secondary command buffer, then the VkCommandBufferInheritanceInfo structure defines any state that will be inherited from the primary command buffer:

typedef struct VkCommandBufferInheritanceInfo {
    VkStructureType                  sType;
    const void*                      pNext;
    VkRenderPass                     renderPass;
    uint32_t                         subpass;
    VkFramebuffer                    framebuffer;
    VkBool32                         occlusionQueryEnable;
    VkQueryControlFlags              queryFlags;
    VkQueryPipelineStatisticFlags    pipelineStatistics;
} VkCommandBufferInheritanceInfo;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • renderPass is a VkRenderPass object defining which render passes the VkCommandBuffer will be compatible with and can be executed within. If the VkCommandBuffer will not be executed within a render pass instance, renderPass is ignored.

  • subpass is the index of the subpass within the render pass instance that the VkCommandBuffer will be executed within. If the VkCommandBuffer will not be executed within a render pass instance, subpass is ignored.

  • framebuffer optionally refers to the VkFramebuffer object that the VkCommandBuffer will be rendering to if it is executed within a render pass instance. It can be VK_NULL_HANDLE if the framebuffer is not known, or if the VkCommandBuffer will not be executed within a render pass instance.

    Note

    Specifying the exact framebuffer that the secondary command buffer will be executed with may result in better performance at command buffer execution time.

  • occlusionQueryEnable indicates whether the command buffer can be executed while an occlusion query is active in the primary command buffer. If this is VK_TRUE, then this command buffer can be executed whether the primary command buffer has an occlusion query active or not. If this is VK_FALSE, then the primary command buffer must not have an occlusion query active.

  • queryFlags indicates the query flags that can be used by an active occlusion query in the primary command buffer when this secondary command buffer is executed. If this value includes the VK_QUERY_CONTROL_PRECISE_BIT bit, then the active query can return boolean results or actual sample counts. If this bit is not set, then the active query must not use the VK_QUERY_CONTROL_PRECISE_BIT bit.

  • pipelineStatistics indicates the set of pipeline statistics that can be counted by an active query in the primary command buffer when this secondary command buffer is executed. If this value includes a given bit, then this command buffer can be executed whether the primary command buffer has a pipeline statistics query active that includes this bit or not. If this value excludes a given bit, then the active pipeline statistics query must not be from a query pool that counts that statistic.

Valid Usage
Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO

  • pNext must be NULL

  • Both of framebuffer, and renderPass that are valid handles must have been created, allocated, or retrieved from the same VkDevice

If VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT was not set when creating a command buffer, that command buffer must not be submitted to a queue whilst it is already in the pending state. If VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT is not set on a secondary command buffer, that command buffer must not be used more than once in a given primary command buffer.

Note

On some implementations, not using the VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT bit enables command buffers to be patched in-place if needed, rather than creating a copy of the command buffer.

If a command buffer is in the invalid, or executable state, and the command buffer was allocated from a command pool with the VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT flag set, then vkBeginCommandBuffer implicitly resets the command buffer, behaving as if vkResetCommandBuffer had been called with VK_COMMAND_BUFFER_RESET_RELEASE_RESOURCES_BIT not set. After the implicit reset, commandBuffer is moved to the recording state.

Once recording starts, an application records a sequence of commands (vkCmd*) into the command buffer to set state, draw, dispatch, and perform other operations.

Several commands can also be recorded indirectly from VkBuffer content, see Device-Generated Commands.

To complete recording of a command buffer, call:

VkResult vkEndCommandBuffer(
    VkCommandBuffer                             commandBuffer);
  • commandBuffer is the command buffer to complete recording.

If there was an error during recording, the application will be notified by an unsuccessful return code returned by vkEndCommandBuffer. If the application wishes to further use the command buffer, the command buffer must be reset. The command buffer must have been in the recording state, and is moved to the executable state.

Valid Usage
  • commandBuffer must be in the recording state.

  • If commandBuffer is a primary command buffer, there must not be an active render pass instance

  • All queries made active during the recording of commandBuffer must have been made inactive

  • If commandBuffer is a secondary command buffer, there must not be an outstanding vkCmdDebugMarkerBeginEXT command recorded to commandBuffer that has not previously been ended by a call to vkCmdDebugMarkerEndEXT.

Valid Usage (Implicit)
  • commandBuffer must be a valid VkCommandBuffer handle

Host Synchronization
  • Host access to commandBuffer must be externally synchronized

  • Host access to the VkCommandPool that commandBuffer was allocated from must be externally synchronized

Return Codes
Success
  • VK_SUCCESS

Failure
  • VK_ERROR_OUT_OF_HOST_MEMORY

  • VK_ERROR_OUT_OF_DEVICE_MEMORY

When a command buffer is in the executable state, it can be submitted to a queue for execution.
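
Putting these commands together, an illustrative sketch of one recording cycle for a one-time-submit primary command buffer follows; the vkCmd* commands to be recorded are placeholders for application work.

VkCommandBufferBeginInfo beginInfo = {
    .sType            = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO,
    .pNext            = NULL,
    .flags            = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT,
    .pInheritanceInfo = NULL,                    /* ignored for primary command buffers */
};

vkBeginCommandBuffer(commandBuffer, &beginInfo); /* initial -> recording */

/* ... record vkCmd* commands here (state, draws, dispatches, copies) ... */

VkResult result = vkEndCommandBuffer(commandBuffer); /* recording -> executable */
if (result != VK_SUCCESS) {
    /* recording failed; the command buffer must be reset before further use */
}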

5.5. Command Buffer Submission

To submit command buffers to a queue, call:

VkResult vkQueueSubmit(
    VkQueue                                     queue,
    uint32_t                                    submitCount,
    const VkSubmitInfo*                         pSubmits,
    VkFence                                     fence);
  • queue is the queue that the command buffers will be submitted to.

  • submitCount is the number of elements in the pSubmits array.

  • pSubmits is a pointer to an array of VkSubmitInfo structures, each specifying a command buffer submission batch.

  • fence is an optional handle to a fence to be signaled. If fence is not VK_NULL_HANDLE, it defines a fence signal operation.

Note

Submission can be a high overhead operation, and applications should attempt to batch work together into as few calls to vkQueueSubmit as possible.

vkQueueSubmit is a queue submission command, with each batch defined by an element of pSubmits as an instance of the VkSubmitInfo structure. Batches begin execution in the order they appear in pSubmits, but may complete out of order.

Fence and semaphore operations submitted with vkQueueSubmit have additional ordering constraints compared to other submission commands, with dependencies involving previous and subsequent queue operations. Information about these additional constraints can be found in the semaphore and fence sections of the synchronization chapter.

Details on the interaction of pWaitDstStageMask with synchronization are described in the semaphore wait operation section of the synchronization chapter.

The order that batches appear in pSubmits is used to determine submission order, and thus all the implicit ordering guarantees that respect it. Other than these implicit ordering guarantees and any explicit synchronization primitives, these batches may overlap or otherwise execute out of order.

If any command buffer submitted to this queue is in the executable state, it is moved to the pending state. Once execution of all submissions of a command buffer complete, it moves from the pending state, back to the executable state. If a command buffer was recorded with the VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT flag, it instead moves to the invalid state.

If vkQueueSubmit fails, it may return VK_ERROR_OUT_OF_HOST_MEMORY or VK_ERROR_OUT_OF_DEVICE_MEMORY. If it does, the implementation must ensure that the state and contents of any resources or synchronization primitives referenced by the submitted command buffers and any semaphores referenced by pSubmits are unaffected by the call or its failure. If vkQueueSubmit fails in such a way that the implementation cannot make that guarantee, the implementation must return VK_ERROR_DEVICE_LOST. See Lost Device.

Valid Usage
  • If fence is not VK_NULL_HANDLE, fence must be unsignaled

  • If fence is not VK_NULL_HANDLE, fence must not be associated with any other queue command that has not yet completed execution on that queue

  • Any calls to vkCmdSetEvent, vkCmdResetEvent or vkCmdWaitEvents that have been recorded into any of the command buffer elements of the pCommandBuffers member of any element of pSubmits, must not reference any VkEvent that is referenced by any of those commands in a command buffer that has been submitted to another queue and is still in the pending state.

  • Any stage flag included in any element of the pWaitDstStageMask member of any element of pSubmits must be a pipeline stage supported by one of the capabilities of queue, as specified in the table of supported pipeline stages.

  • Any given element of the pSignalSemaphores member of any element of pSubmits must be unsignaled when the semaphore signal operation it defines is executed on the device

  • When a semaphore unsignal operation defined by any element of the pWaitSemaphores member of any element of pSubmits executes on queue, no other queue must be waiting on the same semaphore.

  • All elements of the pWaitSemaphores member of all elements of pSubmits must be semaphores that are signaled, or have semaphore signal operations previously submitted for execution.

  • Any given element of the pCommandBuffers member of any element of pSubmits must be in the pending or executable state.

  • If any given element of the pCommandBuffers member of any element of pSubmits was not recorded with the VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT, it must not be in the pending state.

  • Any secondary command buffers recorded into any given element of the pCommandBuffers member of any element of pSubmits must be in the pending or executable state.

  • If any secondary command buffers recorded into any given element of the pCommandBuffers member of any element of pSubmits was not recorded with the VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT, it must not be in the pending state.

  • Any given element of the pCommandBuffers member of any element of pSubmits must have been allocated from a VkCommandPool that was created for the same queue family queue belongs to.

Valid Usage (Implicit)
  • queue must be a valid VkQueue handle

  • If submitCount is not 0, pSubmits must be a pointer to an array of submitCount valid VkSubmitInfo structures

  • If fence is not VK_NULL_HANDLE, fence must be a valid VkFence handle

  • Both of fence, and queue that are valid handles must have been created, allocated, or retrieved from the same VkDevice

Host Synchronization
  • Host access to queue must be externally synchronized

  • Host access to pSubmits[].pWaitSemaphores[] must be externally synchronized

  • Host access to pSubmits[].pSignalSemaphores[] must be externally synchronized

  • Host access to fence must be externally synchronized

Return Codes
Success
  • VK_SUCCESS

Failure
  • VK_ERROR_OUT_OF_HOST_MEMORY

  • VK_ERROR_OUT_OF_DEVICE_MEMORY

  • VK_ERROR_DEVICE_LOST

The VkSubmitInfo structure is defined as:

typedef struct VkSubmitInfo {
    VkStructureType                sType;
    const void*                    pNext;
    uint32_t                       waitSemaphoreCount;
    const VkSemaphore*             pWaitSemaphores;
    const VkPipelineStageFlags*    pWaitDstStageMask;
    uint32_t                       commandBufferCount;
    const VkCommandBuffer*         pCommandBuffers;
    uint32_t                       signalSemaphoreCount;
    const VkSemaphore*             pSignalSemaphores;
} VkSubmitInfo;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • waitSemaphoreCount is the number of semaphores upon which to wait before executing the command buffers for the batch.

  • pWaitSemaphores is a pointer to an array of semaphores upon which to wait before the command buffers for this batch begin execution. If semaphores to wait on are provided, they define a semaphore wait operation.

  • pWaitDstStageMask is a pointer to an array of pipeline stages at which each corresponding semaphore wait will occur.

  • commandBufferCount is the number of command buffers to execute in the batch.

  • pCommandBuffers is a pointer to an array of command buffers to execute in the batch.

  • signalSemaphoreCount is the number of semaphores to be signaled once the commands specified in pCommandBuffers have completed execution.

  • pSignalSemaphores is a pointer to an array of semaphores which will be signaled when the command buffers for this batch have completed execution. If semaphores to be signaled are provided, they define a semaphore signal operation.

The order that command buffers appear in pCommandBuffers is used to determine submission order, and thus all the implicit ordering guarantees that respect it. Other than these implicit ordering guarantees and any explicit synchronization primitives, these command buffers may overlap or otherwise execute out of order.

Valid Usage
  • Any given element of pCommandBuffers must not have been allocated with VK_COMMAND_BUFFER_LEVEL_SECONDARY

  • If the geometry shaders feature is not enabled, any given element of pWaitDstStageMask must not contain VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT

  • If the tessellation shaders feature is not enabled, any given element of pWaitDstStageMask must not contain VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT or VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT

  • Any given element of pWaitDstStageMask must not include VK_PIPELINE_STAGE_HOST_BIT.

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_SUBMIT_INFO

  • Each pNext member of any structure (including this one) in the pNext chain must be either NULL or a pointer to a valid instance of VkWin32KeyedMutexAcquireReleaseInfoNV, VkWin32KeyedMutexAcquireReleaseInfoKHX, VkD3D12FenceSubmitInfoKHX, or VkDeviceGroupSubmitInfoKHX

  • Each sType member in the pNext chain must be unique

  • If waitSemaphoreCount is not 0, pWaitSemaphores must be a pointer to an array of waitSemaphoreCount valid VkSemaphore handles

  • If waitSemaphoreCount is not 0, pWaitDstStageMask must be a pointer to an array of waitSemaphoreCount valid combinations of VkPipelineStageFlagBits values

  • Each element of pWaitDstStageMask must not be 0

  • If commandBufferCount is not 0, pCommandBuffers must be a pointer to an array of commandBufferCount valid VkCommandBuffer handles

  • If signalSemaphoreCount is not 0, pSignalSemaphores must be a pointer to an array of signalSemaphoreCount valid VkSemaphore handles

  • Each of the elements of pCommandBuffers, the elements of pSignalSemaphores, and the elements of pWaitSemaphores that are valid handles must have been created, allocated, or retrieved from the same VkDevice
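
As an informative illustration, the sketch below submits one command buffer that waits on a hypothetical acquireSemaphore at the color attachment output stage, signals a hypothetical renderFinishedSemaphore, and signals submitFence on completion:

VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;

VkSubmitInfo submitInfo = {
    .sType                = VK_STRUCTURE_TYPE_SUBMIT_INFO,
    .pNext                = NULL,
    .waitSemaphoreCount   = 1,
    .pWaitSemaphores      = &acquireSemaphore,        /* hypothetical semaphore */
    .pWaitDstStageMask    = &waitStage,
    .commandBufferCount   = 1,
    .pCommandBuffers      = &commandBuffer,
    .signalSemaphoreCount = 1,
    .pSignalSemaphores    = &renderFinishedSemaphore, /* hypothetical semaphore */
};

VkResult result = vkQueueSubmit(graphicsQueue, 1, &submitInfo, submitFence);
if (result == VK_ERROR_DEVICE_LOST) {
    /* see the Lost Device section */
}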

To specify the values to use when waiting for and signaling semaphores whose current state is shared with a Direct3D 12 fence, add the VkD3D12FenceSubmitInfoKHX structure to the pNext chain of the VkSubmitInfo structure. The VkD3D12FenceSubmitInfoKHX structure is defined as:

typedef struct VkD3D12FenceSubmitInfoKHX {
    VkStructureType    sType;
    const void*        pNext;
    uint32_t           waitSemaphoreValuesCount;
    const uint64_t*    pWaitSemaphoreValues;
    uint32_t           signalSemaphoreValuesCount;
    const uint64_t*    pSignalSemaphoreValues;
} VkD3D12FenceSubmitInfoKHX;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • waitSemaphoreValuesCount is the number of semaphore wait values specified in pWaitSemaphoreValues.

  • pWaitSemaphoreValues is an array of length waitSemaphoreValuesCount containing values for the corresponding semaphores in VkSubmitInfo::pWaitSemaphores to wait for.

  • signalSemaphoreValuesCount is the number of semaphore signal values specified in pSignalSemaphoreValues.

  • pSignalSemaphoreValues is an array of length signalSemaphoreValuesCount containing values for the corresponding semaphores in VkSubmitInfo::pSignalSemaphores to set when signaled.

If the semaphore in VkSubmitInfo::pWaitSemaphores or VkSubmitInfo::pSignalSemaphores corresponding to an entry in pWaitSemaphoreValues or pSignalSemaphoreValues respectively does not currently have state imported from a Direct3D 12 fence, the value in the pWaitSemaphoreValues or pSignalSemaphoreValues entry is ignored.

Valid Usage
  • waitSemaphoreValuesCount must be the same value as VkSubmitInfo::waitSemaphoreCount, where this VkD3D12FenceSubmitInfoKHX structure is in the pNext chain of that VkSubmitInfo structure.

  • signalSemaphoreValuesCount must be the same value as VkSubmitInfo::signalSemaphoreCount, where this VkD3D12FenceSubmitInfoKHX structure is in the pNext chain of that VkSubmitInfo structure.

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_D3D12_FENCE_SUBMIT_INFO_KHX

  • If waitSemaphoreValuesCount is not 0, and pWaitSemaphoreValues is not NULL, pWaitSemaphoreValues must be a pointer to an array of waitSemaphoreValuesCount uint64_t values

  • If signalSemaphoreValuesCount is not 0, and pSignalSemaphoreValues is not NULL, pSignalSemaphoreValues must be a pointer to an array of signalSemaphoreValuesCount uint64_t values

When submitting work that operates on memory imported from a Direct3D 11 resource to a queue, the keyed mutex mechanism may be used in addition to Vulkan semaphores to synchronize the work. Keyed mutexes are a property of a properly created shareable Direct3D 11 resource. They can only be used if the imported resource was created with the D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX flag.

To acquire keyed mutexes before submitted work and/or release them after, add a VkWin32KeyedMutexAcquireReleaseInfoKHX structure to the pNext chain of the VkSubmitInfo structure.

The VkWin32KeyedMutexAcquireReleaseInfoKHX structure is defined as:

typedef struct VkWin32KeyedMutexAcquireReleaseInfoKHX {
    VkStructureType          sType;
    const void*              pNext;
    uint32_t                 acquireCount;
    const VkDeviceMemory*    pAcquireSyncs;
    const uint64_t*          pAcquireKeys;
    const uint32_t*          pAcquireTimeouts;
    uint32_t                 releaseCount;
    const VkDeviceMemory*    pReleaseSyncs;
    const uint64_t*          pReleaseKeys;
} VkWin32KeyedMutexAcquireReleaseInfoKHX;
  • acquireCount is the number of entries in the pAcquireSyncs, pAcquireKeys, and pAcquireTimeouts arrays.

  • pAcquireSyncs is a pointer to an array of VkDeviceMemory objects which were imported from Direct3D 11 resources.

  • pAcquireKeys is a pointer to an array of mutex key values to wait for prior to beginning the submitted work. Entries refer to the keyed mutex associated with the corresponding entries in pAcquireSyncs.

  • pAcquireTimeouts is an array of timeout values, in millisecond units, for each acquire specified in pAcquireKeys.

  • releaseCount is the number of entries in the pReleaseSyncs and pReleaseKeys arrays.

  • pReleaseSyncs is a pointer to an array of VkDeviceMemory objects which were imported from Direct3D 11 resources.

  • pReleaseKeys is a pointer to an array of mutex key values to set when the submitted work has completed. Entries refer to the keyed mutex associated with the corresponding entries in pReleaseSyncs.

Valid Usage
  • Each member of pAcquireSyncs and pReleaseSyncs must be a device memory object imported by setting VkImportMemoryWin32HandleInfoKHX::handleType to VK_EXTERNAL_MEMORY_HANDLE_TYPE_D3D11_TEXTURE_BIT_KHX or VK_EXTERNAL_MEMORY_HANDLE_TYPE_D3D11_TEXTURE_KMT_BIT_KHX.

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_WIN32_KEYED_MUTEX_ACQUIRE_RELEASE_INFO_KHX

  • If acquireCount is not 0, pAcquireSyncs must be a pointer to an array of acquireCount valid VkDeviceMemory handles

  • If acquireCount is not 0, pAcquireKeys must be a pointer to an array of acquireCount uint64_t values

  • If acquireCount is not 0, pAcquireTimeouts must be a pointer to an array of acquireCount uint32_t values

  • If releaseCount is not 0, pReleaseSyncs must be a pointer to an array of releaseCount valid VkDeviceMemory handles

  • If releaseCount is not 0, pReleaseKeys must be a pointer to an array of releaseCount uint64_t values

  • Both of the elements of pAcquireSyncs, and the elements of pReleaseSyncs that are valid handles must have been created, allocated, or retrieved from the same VkDevice

When submitting work that operates on memory imported from a Direct3D 11 resource to a queue, the keyed mutex mechanism may be used in addition to Vulkan semaphores to synchronize the work. Keyed mutexes are a property of a properly created shareable Direct3D 11 resource. They can only be used if the imported resource was created with the D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX flag.

To acquire keyed mutexes before submitted work and/or release them after, add a VkWin32KeyedMutexAcquireReleaseInfoNV structure to the pNext chain of the VkSubmitInfo structure.

The VkWin32KeyedMutexAcquireReleaseInfoNV structure is defined as:

typedef struct VkWin32KeyedMutexAcquireReleaseInfoNV {
    VkStructureType          sType;
    const void*              pNext;
    uint32_t                 acquireCount;
    const VkDeviceMemory*    pAcquireSyncs;
    const uint64_t*          pAcquireKeys;
    const uint32_t*          pAcquireTimeoutMilliseconds;
    uint32_t                 releaseCount;
    const VkDeviceMemory*    pReleaseSyncs;
    const uint64_t*          pReleaseKeys;
} VkWin32KeyedMutexAcquireReleaseInfoNV;
  • acquireCount is the number of entries in the pAcquireSyncs, pAcquireKeys, and pAcquireTimeoutMilliseconds arrays.

  • pAcquireSyncs is a pointer to an array of VkDeviceMemory objects which were imported from Direct3D 11 resources.

  • pAcquireKeys is a pointer to an array of mutex key values to wait for prior to beginning the submitted work. Entries refer to the keyed mutex associated with the corresponding entries in pAcquireSyncs.

  • pAcquireTimeoutMilliseconds is an array of timeout values, in millisecond units, for each acquire specified in pAcquireKeys.

  • releaseCount is the number of entries in the pReleaseSyncs and pReleaseKeys arrays.

  • pReleaseSyncs is a pointer to an array of VkDeviceMemory objects which were imported from Direct3D 11 resources.

  • pReleaseKeys is a pointer to an array of mutex key values to set when the submitted work has completed. Entries refer to the keyed mutex associated with the corresponding entries in pReleaseSyncs.

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_WIN32_KEYED_MUTEX_ACQUIRE_RELEASE_INFO_NV

  • If acquireCount is not 0, pAcquireSyncs must be a pointer to an array of acquireCount valid VkDeviceMemory handles

  • If acquireCount is not 0, pAcquireKeys must be a pointer to an array of acquireCount uint64_t values

  • If acquireCount is not 0, pAcquireTimeoutMilliseconds must be a pointer to an array of acquireCount uint32_t values

  • If releaseCount is not 0, pReleaseSyncs must be a pointer to an array of releaseCount valid VkDeviceMemory handles

  • If releaseCount is not 0, pReleaseKeys must be a pointer to an array of releaseCount uint64_t values

  • Both of the elements of pAcquireSyncs, and the elements of pReleaseSyncs that are valid handles must have been created, allocated, or retrieved from the same VkDevice

If the pNext chain of VkSubmitInfo includes a VkDeviceGroupSubmitInfoKHX structure, then that structure includes device indices and masks specifying which physical devices execute semaphore operations and command buffers.

The VkDeviceGroupSubmitInfoKHX structure is defined as:

typedef struct VkDeviceGroupSubmitInfoKHX {
    VkStructureType    sType;
    const void*        pNext;
    uint32_t           waitSemaphoreCount;
    const uint32_t*    pWaitSemaphoreDeviceIndices;
    uint32_t           commandBufferCount;
    const uint32_t*    pCommandBufferDeviceMasks;
    uint32_t           signalSemaphoreCount;
    const uint32_t*    pSignalSemaphoreDeviceIndices;
} VkDeviceGroupSubmitInfoKHX;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • waitSemaphoreCount is the number of elements in the pWaitSemaphoreDeviceIndices array.

  • pWaitSemaphoreDeviceIndices is an array of device indices indicating which physical device executes the semaphore wait operation in the corresponding element of VkSubmitInfo::pWaitSemaphores.

  • commandBufferCount is the number of elements in the pCommandBufferDeviceMasks array.

  • pCommandBufferDeviceMasks is an array of device masks indicating which physical devices execute the command buffer in the corresponding element of VkSubmitInfo::pCommandBuffers. A physical device executes the command buffer if the corresponding bit is set in the mask.

  • signalSemaphoreCount is the number of elements in the pSignalSemaphoreDeviceIndices array.

  • pSignalSemaphoreDeviceIndices is an array of device indices indicating which physical device executes the semaphore signal operation in the corresponding element of VkSubmitInfo::pSignalSemaphores.

If this structure is not present, semaphore operations and command buffers execute on device index zero.

Valid Usage
  • waitSemaphoreCount must equal VkSubmitInfo::waitSemaphoreCount

  • commandBufferCount must equal VkSubmitInfo::commandBufferCount

  • signalSemaphoreCount must equal VkSubmitInfo::signalSemaphoreCount

  • All elements of pWaitSemaphoreDeviceIndices and pSignalSemaphoreDeviceIndices must be valid device indices

  • All elements of pCommandBufferDeviceMasks must be valid device masks

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_DEVICE_GROUP_SUBMIT_INFO_KHX

  • If waitSemaphoreCount is not 0, pWaitSemaphoreDeviceIndices must be a pointer to an array of waitSemaphoreCount uint32_t values

  • If commandBufferCount is not 0, pCommandBufferDeviceMasks must be a pointer to an array of commandBufferCount uint32_t values

  • If signalSemaphoreCount is not 0, pSignalSemaphoreDeviceIndices must be a pointer to an array of signalSemaphoreCount uint32_t values
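For illustration, the following non-normative sketch submits one command buffer so that it executes on physical devices 0 and 1, with the wait semaphore operation performed on device 0 and the signal semaphore operation on device 1. It assumes a logical device containing at least two physical devices, and that waitSemaphore, signalSemaphore, cmdBuf, and queue were created elsewhere.

uint32_t waitDeviceIndices[]   = { 0 };
uint32_t commandBufferMasks[]  = { 0x3 };   /* bits 0 and 1: devices 0 and 1 */
uint32_t signalDeviceIndices[] = { 1 };

VkDeviceGroupSubmitInfoKHX deviceGroupSubmitInfo = {
    .sType = VK_STRUCTURE_TYPE_DEVICE_GROUP_SUBMIT_INFO_KHX,
    .pNext = NULL,
    .waitSemaphoreCount = 1,                /* must equal VkSubmitInfo::waitSemaphoreCount */
    .pWaitSemaphoreDeviceIndices = waitDeviceIndices,
    .commandBufferCount = 1,                /* must equal VkSubmitInfo::commandBufferCount */
    .pCommandBufferDeviceMasks = commandBufferMasks,
    .signalSemaphoreCount = 1,              /* must equal VkSubmitInfo::signalSemaphoreCount */
    .pSignalSemaphoreDeviceIndices = signalDeviceIndices,
};

VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
VkSubmitInfo submitInfo = {
    .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
    .pNext = &deviceGroupSubmitInfo,
    .waitSemaphoreCount = 1,
    .pWaitSemaphores = &waitSemaphore,
    .pWaitDstStageMask = &waitStage,
    .commandBufferCount = 1,
    .pCommandBuffers = &cmdBuf,
    .signalSemaphoreCount = 1,
    .pSignalSemaphores = &signalSemaphore,
};
vkQueueSubmit(queue, 1, &submitInfo, VK_NULL_HANDLE);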

5.6. Queue Forward Progress

The application must ensure that command buffer submissions will be able to complete without any subsequent operations by the application on any queue. After any call to vkQueueSubmit, for every queued wait on a semaphore there must be a prior signal of that semaphore that will not be consumed by a different wait on the semaphore.

Command buffers in the submission can include vkCmdWaitEvents commands that wait on events that will not be signaled by earlier commands in the queue. Such events must be signaled by the application using vkSetEvent, and the vkCmdWaitEvents commands that wait upon them must not be inside a render pass instance. Implementations may have limits on how long the command buffer will wait, in order to avoid interfering with progress of other clients of the device. If the event is not signaled within these limits, results are undefined and may include device loss.
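For illustration, the following non-normative sketch records a wait on an event that will later be signaled by the host, outside of any render pass instance. The handles event, cmdBuf, device, and queue are assumed to have been created elsewhere.

vkCmdWaitEvents(cmdBuf, 1, &event,
    VK_PIPELINE_STAGE_HOST_BIT,             /* the event will be signaled by the host */
    VK_PIPELINE_STAGE_ALL_COMMANDS_BIT,     /* all subsequent work waits on it */
    0, NULL, 0, NULL, 0, NULL);
/* ... finish recording and submit cmdBuf to queue ... */

/* Later, on the host, within the implementation's wait limits: */
vkSetEvent(device, event);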

5.7. Secondary Command Buffer Execution

A secondary command buffer must not be directly submitted to a queue. Instead, secondary command buffers are recorded to execute as part of a primary command buffer with the command:

void vkCmdExecuteCommands(
    VkCommandBuffer                             commandBuffer,
    uint32_t                                    commandBufferCount,
    const VkCommandBuffer*                      pCommandBuffers);
  • commandBuffer is a handle to a primary command buffer that the secondary command buffers are executed in.

  • commandBufferCount is the length of the pCommandBuffers array.

  • pCommandBuffers is an array of secondary command buffer handles, which are recorded to execute in the primary command buffer in the order they are listed in the array.

If any element of pCommandBuffers was not recorded with the VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT flag, and it was recorded into any other primary command buffer which is currently in the executable or recording state, that primary command buffer becomes invalid.
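For illustration, the following non-normative sketch executes two secondary command buffers from a primary command buffer inside a render pass instance begun with VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS. The handles primaryCmdBuf, secondaryCmdBuf0, and secondaryCmdBuf1 are assumptions for the example; the secondaries are assumed to have been recorded with VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT.

VkCommandBuffer secondaries[] = { secondaryCmdBuf0, secondaryCmdBuf1 };
vkCmdExecuteCommands(primaryCmdBuf, 2, secondaries);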

Valid Usage
  • commandBuffer must have been allocated with a level of VK_COMMAND_BUFFER_LEVEL_PRIMARY

  • Any given element of pCommandBuffers must have been allocated with a level of VK_COMMAND_BUFFER_LEVEL_SECONDARY

  • Any given element of pCommandBuffers must be in the pending or executable state.

  • If any element of pCommandBuffers was not recorded with the VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT flag, and it was recorded into any other primary command buffer, that primary command buffer must not be in the pending state

  • If any given element of pCommandBuffers was not recorded with the VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT flag, it must not be in the pending state.

  • If any given element of pCommandBuffers was not recorded with the VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT flag, it must not have already been recorded to commandBuffer.

  • If any given element of pCommandBuffers was not recorded with the VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT flag, it must not appear more than once in pCommandBuffers.

  • Any given element of pCommandBuffers must have been allocated from a VkCommandPool that was created for the same queue family as the VkCommandPool from which commandBuffer was allocated

  • If vkCmdExecuteCommands is being called within a render pass instance, that render pass instance must have been begun with the contents parameter of vkCmdBeginRenderPass set to VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS

  • If vkCmdExecuteCommands is being called within a render pass instance, any given element of pCommandBuffers must have been recorded with the VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT

  • If vkCmdExecuteCommands is being called within a render pass instance, any given element of pCommandBuffers must have been recorded with VkCommandBufferInheritanceInfo::subpass set to the index of the subpass which the given command buffer will be executed in

  • If vkCmdExecuteCommands is being called within a render pass instance, the render passes specified in the pBeginInfo::pInheritanceInfo::renderPass members of the vkBeginCommandBuffer commands used to begin recording each element of pCommandBuffers must be compatible with the current render pass.

  • If vkCmdExecuteCommands is being called within a render pass instance, and any given element of pCommandBuffers was recorded with VkCommandBufferInheritanceInfo::framebuffer not equal to VK_NULL_HANDLE, that VkFramebuffer must match the VkFramebuffer used in the current render pass instance

  • If vkCmdExecuteCommands is not being called within a render pass instance, any given element of pCommandBuffers must not have been recorded with the VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT

  • If the inherited queries feature is not enabled, commandBuffer must not have any queries active

  • If commandBuffer has a VK_QUERY_TYPE_OCCLUSION query active, then each element of pCommandBuffers must have been recorded with VkCommandBufferInheritanceInfo::occlusionQueryEnable set to VK_TRUE

  • If commandBuffer has a VK_QUERY_TYPE_OCCLUSION query active, then each element of pCommandBuffers must have been recorded with VkCommandBufferInheritanceInfo::queryFlags having all bits set that are set for the query

  • If commandBuffer has a VK_QUERY_TYPE_PIPELINE_STATISTICS query active, then each element of pCommandBuffers must have been recorded with VkCommandBufferInheritanceInfo::pipelineStatistics having all bits set that are set in the VkQueryPool the query uses

  • Any given element of pCommandBuffers must not begin any query types that are active in commandBuffer

Valid Usage (Implicit)
  • commandBuffer must be a valid VkCommandBuffer handle

  • pCommandBuffers must be a pointer to an array of commandBufferCount valid VkCommandBuffer handles

  • commandBuffer must be in the recording state

  • The VkCommandPool that commandBuffer was allocated from must support transfer, graphics, or compute operations

  • commandBuffer must be a primary VkCommandBuffer

  • commandBufferCount must be greater than 0

  • Both of commandBuffer, and the elements of pCommandBuffers must have been created, allocated, or retrieved from the same VkDevice

Host Synchronization
  • Host access to commandBuffer must be externally synchronized

  • Host access to the VkCommandPool that commandBuffer was allocated from must be externally synchronized

Command Properties
  • Command Buffer Levels: Primary
  • Render Pass Scope: Both
  • Supported Queue Types: Transfer, Graphics, Compute

5.8. Command Buffer Device Mask

Each command buffer has a piece of state storing the current device mask of the command buffer. This mask controls which physical devices within the logical device all subsequent commands will execute on, including state-setting commands, action commands, and synchronization commands.

Scissor and viewport state can be set to different values on each physical device (only when set as dynamic state), and each physical device will render using its local copy of the state. Other state is shared between physical devices, such that all physical devices use the most recently set values for the state. However, when recording an action command that uses a piece of state, the most recent command that set that state must have included all physical devices that execute the action command in its current device mask.

The command buffer’s device mask is orthogonal to the pCommandBufferDeviceMasks member of VkDeviceGroupSubmitInfoKHX. Commands only execute on a physical device if the device index is set in both device masks.

If the pNext chain of VkCommandBufferBeginInfo includes a VkDeviceGroupCommandBufferBeginInfoKHX structure, then that structure includes an initial device mask for the command buffer.

The VkDeviceGroupCommandBufferBeginInfoKHX structure is defined as:

typedef struct VkDeviceGroupCommandBufferBeginInfoKHX {
    VkStructureType    sType;
    const void*        pNext;
    uint32_t           deviceMask;
} VkDeviceGroupCommandBufferBeginInfoKHX;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • deviceMask is the initial value of the command buffer’s device mask.

The initial device mask also acts as an upper bound on the set of devices that can ever be in the device mask in the command buffer.

If this structure is not present, the initial value of a command buffer’s device mask is set to include all physical devices in the logical device when the command buffer begins recording.

Valid Usage
  • deviceMask must be a valid device mask value

  • deviceMask must not be zero

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_DEVICE_GROUP_COMMAND_BUFFER_BEGIN_INFO_KHX

To update the current device mask of a command buffer, call:

void vkCmdSetDeviceMaskKHX(
    VkCommandBuffer                             commandBuffer,
    uint32_t                                    deviceMask);
  • commandBuffer is the command buffer whose current device mask is modified.

  • deviceMask is the new value of the current device mask.

deviceMask is used to filter out subsequent commands from executing on all physical devices whose bit indices are not set in the mask.
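For illustration, the following non-normative sketch begins a primary command buffer whose device mask initially includes physical devices 0 and 1, then restricts subsequent commands to device 0 only. It assumes a logical device containing at least two physical devices and a command buffer cmdBuf allocated elsewhere.

VkDeviceGroupCommandBufferBeginInfoKHX deviceGroupBeginInfo = {
    .sType = VK_STRUCTURE_TYPE_DEVICE_GROUP_COMMAND_BUFFER_BEGIN_INFO_KHX,
    .pNext = NULL,
    .deviceMask = 0x3,                      /* devices 0 and 1 */
};
VkCommandBufferBeginInfo beginInfo = {
    .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO,
    .pNext = &deviceGroupBeginInfo,
};
vkBeginCommandBuffer(cmdBuf, &beginInfo);

/* ... commands recorded here execute on devices 0 and 1 ... */

vkCmdSetDeviceMaskKHX(cmdBuf, 0x1);         /* subsequent commands: device 0 only */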

Valid Usage
  • deviceMask must be a valid device mask value

  • deviceMask must not be zero

  • deviceMask must not include any set bits that were not in the VkDeviceGroupCommandBufferBeginInfoKHX::deviceMask value when the command buffer began recording.

  • If vkCmdSetDeviceMaskKHX is called inside a render pass instance, deviceMask must not include any set bits that were not in the VkDeviceGroupRenderPassBeginInfoKHX::deviceMask value when the render pass instance began recording.

Valid Usage (Implicit)
  • commandBuffer must be a valid VkCommandBuffer handle

  • commandBuffer must be in the recording state

  • The VkCommandPool that commandBuffer was allocated from must support graphics, compute, or transfer operations

Host Synchronization
  • Host access to commandBuffer must be externally synchronized

  • Host access to the VkCommandPool that commandBuffer was allocated from must be externally synchronized

Command Properties
  • Command Buffer Levels: Primary, Secondary
  • Render Pass Scope: Both
  • Supported Queue Types: Graphics, Compute, Transfer

6. Synchronization and Cache Control

Synchronization of access to resources is primarily the responsibility of the application in Vulkan. The order of execution of commands with respect to the host and other commands on the device has few implicit guarantees, and needs to be explicitly specified. Memory caches and other optimizations are also explicitly managed, requiring that the flow of data through the system is largely under application control.

Whilst some implicit guarantees exist between commands, four explicit synchronization primitives are exposed by Vulkan:

Fences

Fences can be used to communicate to the host that execution of some task on the device has completed.

Semaphores

Semaphores can be used to control resource access across multiple queues.

Events

Events provide a fine-grained synchronization primitive which can be signaled either within a command buffer or by the host, and can be waited upon within a command buffer or queried on the host.

Pipeline Barriers

Pipeline barriers also provide synchronization control within a command buffer, but at a single point, rather than with separate signal and wait operations.

In addition to the base primitives provided here, Render Passes provide a useful synchronization framework for most rendering tasks, built upon the concepts in this chapter. Many cases that would otherwise need an application to use synchronization primitives in this chapter can be expressed more efficiently as part of a render pass.

6.1. Execution and Memory Dependencies

An operation is an arbitrary amount of work to be executed on the host, a device, or an external entity such as a presentation engine. Synchronization commands introduce explicit execution dependencies, and memory dependencies between two sets of operations defined by the command’s two synchronization scopes.

The synchronization scopes define which other operations a synchronization command is able to create execution dependencies with. Any type of operation that is not in a synchronization command’s synchronization scopes will not be included in the resulting dependency. For example, for many synchronization commands, the synchronization scopes can be limited to just operations executing in specific pipeline stages, which allows other pipeline stages to be excluded from a dependency. Other scoping options are possible, depending on the particular command.

An execution dependency is a guarantee that for two sets of operations, the first set must happen-before the second set. If an operation happens-before another operation, then the first operation must complete before the second operation is initiated. More precisely:

  • Let A and B be separate sets of operations.

  • Let S be a synchronization command.

  • Let AS and BS be the synchronization scopes of S.

  • Let A' be the intersection of sets A and AS.

  • Let B' be the intersection of sets B and BS.

  • Submitting A, S and B for execution, in that order, will result in execution dependency E between A' and B'.

  • Execution dependency E guarantees that A' happens-before B'.

An execution dependency chain is a sequence of execution dependencies that form a happens-before relation between the first dependency’s A' and the final dependency’s B'. For each consecutive pair of execution dependencies, a chain exists if the intersection of BS in the first dependency and AS in the second dependency is not an empty set. The formation of a single execution dependency from an execution dependency chain can be described by substituting the following in the description of execution dependencies:

  • Let S be a set of synchronization commands that generate an execution dependency chain.

  • Let AS be the first synchronization scope of the first command in S.

  • Let BS be the second synchronization scope of the last command in S.

Note

An execution dependency is inherently also multiple execution dependencies - a dependency exists between each subset of A' and each subset of B', and the same is true for execution dependency chains. For example, a synchronization command with multiple pipeline stages in its stage masks effectively generates one dependency between each source stage and each destination stage. This can be useful to think about when considering how execution chains are formed if they do not involve all parts of a synchronization command’s dependency. Similarly, any set of adjacent dependencies in an execution dependency chain can be considered an execution dependency chain in its own right.

Execution dependencies alone are not sufficient to guarantee that values resulting from writes in one set of operations can be read from another set of operations.

Two additional types of operation are used to control memory access. Availability operations cause the values generated by specified memory write accesses to become available for future access. Any available value remains available until a subsequent write to the same memory location occurs (whether it is made available or not) or the memory is freed. Visibility operations cause any available values to become visible to specified memory accesses.

A memory dependency is an execution dependency which includes availability and visibility operations such that:

  • The first set of operations happens-before the availability operation.

  • The availability operation happens-before the visibility operation.

  • The visibility operation happens-before the second set of operations.

Once written values are made visible to a particular type of memory access, they can be read or written by that type of memory access. Most synchronization commands in Vulkan define a memory dependency.

The specific memory accesses that are made available and visible are defined by the access scopes of a memory dependency. Any type of access that is in a memory dependency’s first access scope and occurs in A' is made available. Any type of access that is in a memory dependency’s second access scope and occurs in B' has any available writes made visible to it. Any type of operation that is not in a synchronization command’s access scopes will not be included in the resulting dependency.

A memory dependency enforces availability and visibility of memory accesses and execution order between two sets of operations. Adding to the description of execution dependency chains:

  • Let a be the set of memory accesses performed by A'.

  • Let b be the set of memory accesses performed by B'.

  • Let aS be the first access scope of the first command in S.

  • Let bS be the second access scope of the last command in S.

  • Let a' be the intersection of sets a and aS.

  • Let b' be the intersection of sets b and bS.

  • Submitting A, S and B for execution, in that order, will result in a memory dependency m between A' and B'.

  • Memory dependency m guarantees that:

    • Memory writes in a' are made available.

    • Available memory writes, including those from a', are made visible to b'.

Note

Execution and memory dependencies are used to solve data hazards, i.e. to ensure that read and write operations occur in a well-defined order. Write-after-read hazards can be solved with just an execution dependency, but read-after-write and write-after-write hazards need appropriate memory dependencies to be included between them. If an application does not include dependencies to solve these hazards, the results and execution orders of memory accesses are undefined.
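For illustration, the following non-normative sketch resolves a read-after-write hazard with a single synchronization command, vkCmdPipelineBarrier (described later in this chapter). The first synchronization and access scopes are limited to compute shader writes, and the second scopes to transfer reads; cmdBuf is assumed to be a command buffer in the recording state.

VkMemoryBarrier memoryBarrier = {
    .sType = VK_STRUCTURE_TYPE_MEMORY_BARRIER,
    .pNext = NULL,
    .srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT,    /* writes made available */
    .dstAccessMask = VK_ACCESS_TRANSFER_READ_BIT,   /* made visible to these reads */
};
vkCmdPipelineBarrier(cmdBuf,
    VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,           /* first synchronization scope */
    VK_PIPELINE_STAGE_TRANSFER_BIT,                 /* second synchronization scope */
    0, 1, &memoryBarrier, 0, NULL, 0, NULL);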

6.1.1. Image Layout Transitions

Image subresources can be transitioned from one layout to another as part of a memory dependency (e.g. by using an image memory barrier). When a layout transition is specified in a memory dependency, it happens-after the availability operations in the memory dependency, and happens-before the visibility operations. Image layout transitions may perform read and write accesses on all memory bound to the image subresource range, so applications must ensure that all memory writes have been made available before a layout transition is executed. Available memory is automatically made visible to a layout transition, and writes performed by a layout transition are automatically made available.

Layout transitions always apply to a particular image subresource range, and specify both an old layout and new layout. If the old layout does not match the new layout, a transition occurs. The old layout must match the current layout of the image subresource range, with one exception. The old layout can always be specified as VK_IMAGE_LAYOUT_UNDEFINED, though doing so invalidates the contents of the image subresource range.

Note

Setting the old layout to VK_IMAGE_LAYOUT_UNDEFINED implies that the contents of the image subresource need not be preserved. Implementations may use this information to avoid performing expensive data transition operations.

Note

Applications must ensure that layout transitions happen-after all operations accessing the image with the old layout, and happen-before any operations that will access the image with the new layout. Layout transitions are potentially read/write operations, so not defining appropriate memory dependencies to guarantee this will result in a data race.

The contents of any portion of another resource which aliases memory that is bound to the transitioned image subresource range are undefined after an image layout transition.
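For illustration, the following non-normative sketch transitions a newly created image from VK_IMAGE_LAYOUT_UNDEFINED (so its contents need not be preserved) to VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL before it is filled by a transfer operation. The handles image and cmdBuf are assumptions for the example.

VkImageMemoryBarrier imageBarrier = {
    .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
    .pNext = NULL,
    .srcAccessMask = 0,                              /* no prior writes to make available */
    .dstAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
    .oldLayout = VK_IMAGE_LAYOUT_UNDEFINED,
    .newLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
    .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .image = image,
    .subresourceRange = {
        .aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
        .baseMipLevel = 0, .levelCount = 1,
        .baseArrayLayer = 0, .layerCount = 1,
    },
};
vkCmdPipelineBarrier(cmdBuf,
    VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,               /* nothing to wait for */
    VK_PIPELINE_STAGE_TRANSFER_BIT,
    0, 0, NULL, 0, NULL, 1, &imageBarrier);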

6.1.2. Pipeline Stages

The work performed by an action command consists of multiple operations, which are performed by a sequence of logically independent execution units known as pipeline stages. The exact pipeline stages executed depend on the particular action command that is used, and current command buffer state when the action command was recorded. Drawing commands, dispatching commands, copy commands, and clear commands all execute in different sets of pipeline stages.

Execution of operations across pipeline stages must adhere to implicit ordering guarantees, particularly including pipeline stage order. Otherwise, execution across pipeline stages may overlap or execute out of order with regards to other stages, unless otherwise enforced by an execution dependency.

Several of the synchronization commands include pipeline stage parameters, restricting the synchronization scopes for that command to just those stages. This allows fine grained control over the exact execution dependencies and accesses performed by action commands. Implementations should use these pipeline stages to avoid unnecessary stalls or cache flushing.

Pipeline stages are specified using a bitmask:

typedef enum VkPipelineStageFlagBits {
    VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT = 0x00000001,
    VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT = 0x00000002,
    VK_PIPELINE_STAGE_VERTEX_INPUT_BIT = 0x00000004,
    VK_PIPELINE_STAGE_VERTEX_SHADER_BIT = 0x00000008,
    VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT = 0x00000010,
    VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT = 0x00000020,
    VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT = 0x00000040,
    VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT = 0x00000080,
    VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT = 0x00000100,
    VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT = 0x00000200,
    VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT = 0x00000400,
    VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT = 0x00000800,
    VK_PIPELINE_STAGE_TRANSFER_BIT = 0x00001000,
    VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT = 0x00002000,
    VK_PIPELINE_STAGE_HOST_BIT = 0x00004000,
    VK_PIPELINE_STAGE_ALL_GRAPHICS_BIT = 0x00008000,
    VK_PIPELINE_STAGE_ALL_COMMANDS_BIT = 0x00010000,
    VK_PIPELINE_STAGE_COMMAND_PROCESS_BIT_NVX = 0x00020000,
} VkPipelineStageFlagBits;

The meaning of each bit is:

  • VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT: Stage of the pipeline where any commands are initially received by the queue.

  • VK_PIPELINE_STAGE_COMMAND_PROCESS_BIT_NVX: Stage of the pipeline where device-side generation of commands via vkCmdProcessCommandsNVX is handled.

  • VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT: Stage of the pipeline where Draw/DispatchIndirect data structures are consumed. This stage also includes reading commands written by vkCmdProcessCommandsNVX.

  • VK_PIPELINE_STAGE_VERTEX_INPUT_BIT: Stage of the pipeline where vertex and index buffers are consumed.

  • VK_PIPELINE_STAGE_VERTEX_SHADER_BIT: Vertex shader stage.

  • VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT: Tessellation control shader stage.

  • VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT: Tessellation evaluation shader stage.

  • VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT: Geometry shader stage.

  • VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT: Fragment shader stage.

  • VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT: Stage of the pipeline where early fragment tests (depth and stencil tests before fragment shading) are performed. This stage also includes subpass load operations for framebuffer attachments with a depth/stencil format.

  • VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT: Stage of the pipeline where late fragment tests (depth and stencil tests after fragment shading) are performed. This stage also includes subpass store operations for framebuffer attachments with a depth/stencil format.

  • VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT: Stage of the pipeline after blending where the final color values are output from the pipeline. This stage also includes subpass load and store operations and multisample resolve operations for framebuffer attachments with a color format.

  • VK_PIPELINE_STAGE_TRANSFER_BIT: Execution of copy commands. This includes the operations resulting from all copy commands, clear commands (with the exception of vkCmdClearAttachments), and vkCmdCopyQueryPoolResults.

  • VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT: Execution of a compute shader.

  • VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT: Final stage in the pipeline where operations generated by all commands complete execution.

  • VK_PIPELINE_STAGE_HOST_BIT: A pseudo-stage indicating execution on the host of reads/writes of device memory. This stage is not invoked by any commands recorded in a command buffer.

  • VK_PIPELINE_STAGE_ALL_GRAPHICS_BIT: Execution of all graphics pipeline stages. Equivalent to the logical or of:

    • VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT

    • VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT

    • VK_PIPELINE_STAGE_VERTEX_INPUT_BIT

    • VK_PIPELINE_STAGE_VERTEX_SHADER_BIT

    • VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT

    • VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT

    • VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT

    • VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT

    • VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT

    • VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT

    • VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT

    • VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT

  • VK_PIPELINE_STAGE_ALL_COMMANDS_BIT: Equivalent to the logical or of every other pipeline stage flag that is supported on the queue it is used with.

Note

An execution dependency with only VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT in the destination stage mask will only prevent that stage from executing in subsequently submitted commands. As this stage does not perform any actual execution, this is not observable - in effect, it does not delay processing of subsequent commands. Similarly an execution dependency with only VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT in the source stage mask will effectively not wait for any prior commands to complete.

When defining a memory dependency, using only VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT or VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT would never make any accesses available and/or visible because these stages do not access memory.

VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT and VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT are useful for accomplishing layout transitions and queue ownership operations when the required execution dependency is satisfied by other means - for example, semaphore operations between queues.

If a synchronization command includes a source stage mask, its first synchronization scope only includes execution of the pipeline stages specified in that mask, as well as any logically earlier stages. If a synchronization command includes a destination stage mask, its second synchronization scope only includes execution of the pipeline stages specified in that mask, as well as any logically later stages.

Access scopes are affected in a similar way. If a synchronization command includes a source stage mask, its first access scope only includes memory access performed by pipeline stages specified in that mask. If a synchronization command includes a destination stage mask, its second access scope only includes memory access performed by pipeline stages specified in that mask.

Note

Implementations may not support synchronization at every pipeline stage for every synchronization operation. If a pipeline stage that an implementation does not support synchronization for appears in a source stage mask, it may substitute any logically later stage in its place. If a pipeline stage that an implementation does not support synchronization for appears in a destination stage mask, it may substitute any logically earlier stage in its place.

For example, if an implementation is unable to signal an event immediately after vertex shader execution is complete, it may instead signal the event after color attachment output has completed.

If an implementation makes such a substitution, it must not affect the semantics of execution or memory dependencies or image and buffer memory barriers.

Certain pipeline stages are only available on queues that support a particular set of operations. The following table lists, for each pipeline stage flag, which queue capability flag must be supported by the queue. When multiple capability flags are listed for a pipeline stage flag, the pipeline stage is supported on a queue if the queue supports any of the listed capability flags. For further details on queue capabilities see Physical Device Enumeration and Queues.

Table 3. Supported pipeline stage flags (pipeline stage flag: required queue capability flag)
  • VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT: None required
  • VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT: VK_QUEUE_GRAPHICS_BIT or VK_QUEUE_COMPUTE_BIT
  • VK_PIPELINE_STAGE_VERTEX_INPUT_BIT: VK_QUEUE_GRAPHICS_BIT
  • VK_PIPELINE_STAGE_VERTEX_SHADER_BIT: VK_QUEUE_GRAPHICS_BIT
  • VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT: VK_QUEUE_GRAPHICS_BIT
  • VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT: VK_QUEUE_GRAPHICS_BIT
  • VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT: VK_QUEUE_GRAPHICS_BIT
  • VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT: VK_QUEUE_GRAPHICS_BIT
  • VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT: VK_QUEUE_GRAPHICS_BIT
  • VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT: VK_QUEUE_GRAPHICS_BIT
  • VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT: VK_QUEUE_GRAPHICS_BIT
  • VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT: VK_QUEUE_COMPUTE_BIT
  • VK_PIPELINE_STAGE_TRANSFER_BIT: VK_QUEUE_GRAPHICS_BIT, VK_QUEUE_COMPUTE_BIT or VK_QUEUE_TRANSFER_BIT
  • VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT: None required
  • VK_PIPELINE_STAGE_HOST_BIT: None required
  • VK_PIPELINE_STAGE_ALL_GRAPHICS_BIT: VK_QUEUE_GRAPHICS_BIT
  • VK_PIPELINE_STAGE_ALL_COMMANDS_BIT: None required
  • VK_PIPELINE_STAGE_COMMAND_PROCESS_BIT_NVX: VK_QUEUE_GRAPHICS_BIT or VK_QUEUE_COMPUTE_BIT

Pipeline stages that execute as a result of a command logically complete execution in a specific order, such that completion of a logically later pipeline stage must not happen-before completion of a logically earlier stage. This means that including any given stage in the source stage mask for a particular synchronization command also implies that any logically earlier stages are included in AS for that command.

Similarly, initiation of a logically earlier pipeline stage must not happen-after initiation of a logically later pipeline stage. Including any given stage in the destination stage mask for a particular synchronization command also implies that any logically later stages are included in BS for that command.

Note

Logically earlier/later stages are not included when defining the access scopes of a memory barrier.

The order of pipeline stages depends on the particular pipeline: graphics, compute, transfer, or host.

For the graphics pipeline, the following stages occur in this order:

  • VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT

  • VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT

  • VK_PIPELINE_STAGE_VERTEX_INPUT_BIT

  • VK_PIPELINE_STAGE_VERTEX_SHADER_BIT

  • VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT

  • VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT

  • VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT

  • VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT

  • VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT

  • VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT

  • VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT

  • VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT

For the compute pipeline, the following stages occur in this order:

  • VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT

  • VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT

  • VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT

  • VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT

For the transfer pipeline, the following stages occur in this order:

  • VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT

  • VK_PIPELINE_STAGE_TRANSFER_BIT

  • VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT

For host operations, only one pipeline stage occurs, so no order is guaranteed:

  • VK_PIPELINE_STAGE_HOST_BIT

For the command processing pipeline, the following stages occur in this order:

  • VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT

  • VK_PIPELINE_STAGE_COMMAND_PROCESS_BIT_NVX

  • VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT

6.1.3. Access Types

Memory in Vulkan can be accessed from within shader invocations and via some fixed-function stages of the pipeline. The access type is a function of the descriptor type used, or how a fixed-function stage accesses memory. Each access type corresponds to a bit flag in VkAccessFlagBits.

Some synchronization commands take sets of access types as parameters to define the access scopes of a memory dependency. If a synchronization command includes a source access mask, its first access scope only includes accesses via the access types specified in that mask. Similarly, if a synchronization command includes a destination access mask, its second access scope only includes accesses via the access types specified in that mask.

Access types that can be set in an access mask include:

typedef enum VkAccessFlagBits {
    VK_ACCESS_INDIRECT_COMMAND_READ_BIT = 0x00000001,
    VK_ACCESS_INDEX_READ_BIT = 0x00000002,
    VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT = 0x00000004,
    VK_ACCESS_UNIFORM_READ_BIT = 0x00000008,
    VK_ACCESS_INPUT_ATTACHMENT_READ_BIT = 0x00000010,
    VK_ACCESS_SHADER_READ_BIT = 0x00000020,
    VK_ACCESS_SHADER_WRITE_BIT = 0x00000040,
    VK_ACCESS_COLOR_ATTACHMENT_READ_BIT = 0x00000080,
    VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT = 0x00000100,
    VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT = 0x00000200,
    VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT = 0x00000400,
    VK_ACCESS_TRANSFER_READ_BIT = 0x00000800,
    VK_ACCESS_TRANSFER_WRITE_BIT = 0x00001000,
    VK_ACCESS_HOST_READ_BIT = 0x00002000,
    VK_ACCESS_HOST_WRITE_BIT = 0x00004000,
    VK_ACCESS_MEMORY_READ_BIT = 0x00008000,
    VK_ACCESS_MEMORY_WRITE_BIT = 0x00010000,
    VK_ACCESS_COMMAND_PROCESS_READ_BIT_NVX = 0x00020000,
    VK_ACCESS_COMMAND_PROCESS_WRITE_BIT_NVX = 0x00040000,
} VkAccessFlagBits;
  • VK_ACCESS_INDIRECT_COMMAND_READ_BIT: Read access to an indirect command structure read as part of an indirect drawing or dispatch command.

  • VK_ACCESS_INDEX_READ_BIT: Read access to an index buffer as part of an indexed drawing command, bound by vkCmdBindIndexBuffer.

  • VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT: Read access to a vertex buffer as part of a drawing command, bound by vkCmdBindVertexBuffers.

  • VK_ACCESS_UNIFORM_READ_BIT: Read access to a uniform buffer.

  • VK_ACCESS_INPUT_ATTACHMENT_READ_BIT: Read access to an input attachment within a renderpass during fragment shading.

  • VK_ACCESS_SHADER_READ_BIT: Read access to a storage buffer, uniform texel buffer, storage texel buffer, sampled image, or storage image.

  • VK_ACCESS_SHADER_WRITE_BIT: Write access to a storage buffer, storage texel buffer, or storage image.

  • VK_ACCESS_COLOR_ATTACHMENT_READ_BIT: Read access to a color attachment, such as via blending, logic operations, or via certain subpass load operations.

  • VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT: Write access to a color or resolve attachment during a render pass or via certain subpass load and store operations.

  • VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT: Read access to a depth/stencil attachment, via depth or stencil operations or via certain subpass load operations.

  • VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT: Write access to a depth/stencil attachment, via depth or stencil operations or via certain subpass load and store operations.

  • VK_ACCESS_TRANSFER_READ_BIT: Read access to an image or buffer in a copy operation.

  • VK_ACCESS_TRANSFER_WRITE_BIT: Write access to an image or buffer in a clear or copy operation.

  • VK_ACCESS_HOST_READ_BIT: Read access by a host operation. Accesses of this type are not performed through a resource, but directly on memory.

  • VK_ACCESS_HOST_WRITE_BIT: Write access by a host operation. Accesses of this type are not performed through a resource, but directly on memory.

  • VK_ACCESS_MEMORY_READ_BIT: Read access via non-specific entities. These entities include the Vulkan device and host, but may also include entities external to the Vulkan device or otherwise not part of the core Vulkan pipeline. When included in a destination access mask, makes all available writes visible to all future read accesses on entities known to the Vulkan device.

  • VK_ACCESS_MEMORY_WRITE_BIT: Write access via non-specific entities. These entities include the Vulkan device and host, but may also include entities external to the Vulkan device or otherwise not part of the core Vulkan pipeline. When included in a source access mask, all writes that are performed by entities known to the Vulkan device are made available. When included in a destination access mask, makes all available writes visible to all future write accesses on entities known to the Vulkan device.

  • VK_ACCESS_COMMAND_PROCESS_READ_BIT_NVX: Reads from VkBuffer inputs to vkCmdProcessCommandsNVX.

  • VK_ACCESS_COMMAND_PROCESS_WRITE_BIT_NVX: Writes to the target command buffer in vkCmdProcessCommandsNVX.

Certain access types are only performed by a subset of pipeline stages. Any synchronization command that takes both stage masks and access masks uses both to define the access scopes - only the specified access types performed by the specified stages are included in the access scope. An application must not specify an access flag in a synchronization command if it does not include a pipeline stage in the corresponding stage mask that is able to perform accesses of that type. The following table lists, for each access flag, which pipeline stages can perform that type of access.

Table 4. Supported access types (access flag: supported pipeline stages)
  • VK_ACCESS_INDIRECT_COMMAND_READ_BIT: VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT
  • VK_ACCESS_INDEX_READ_BIT: VK_PIPELINE_STAGE_VERTEX_INPUT_BIT
  • VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT: VK_PIPELINE_STAGE_VERTEX_INPUT_BIT
  • VK_ACCESS_UNIFORM_READ_BIT: VK_PIPELINE_STAGE_VERTEX_SHADER_BIT, VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT, VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT, VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT, VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT, or VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT
  • VK_ACCESS_INPUT_ATTACHMENT_READ_BIT: VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT
  • VK_ACCESS_SHADER_READ_BIT: VK_PIPELINE_STAGE_VERTEX_SHADER_BIT, VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT, VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT, VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT, VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT, or VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT
  • VK_ACCESS_SHADER_WRITE_BIT: VK_PIPELINE_STAGE_VERTEX_SHADER_BIT, VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT, VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT, VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT, VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT, or VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT
  • VK_ACCESS_COLOR_ATTACHMENT_READ_BIT: VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT
  • VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT: VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT
  • VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT: VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT, or VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT
  • VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT: VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT, or VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT
  • VK_ACCESS_TRANSFER_READ_BIT: VK_PIPELINE_STAGE_TRANSFER_BIT
  • VK_ACCESS_TRANSFER_WRITE_BIT: VK_PIPELINE_STAGE_TRANSFER_BIT
  • VK_ACCESS_HOST_READ_BIT: VK_PIPELINE_STAGE_HOST_BIT
  • VK_ACCESS_HOST_WRITE_BIT: VK_PIPELINE_STAGE_HOST_BIT
  • VK_ACCESS_MEMORY_READ_BIT: N/A
  • VK_ACCESS_MEMORY_WRITE_BIT: N/A
  • VK_ACCESS_COMMAND_PROCESS_READ_BIT_NVX: VK_PIPELINE_STAGE_COMMAND_PROCESS_BIT_NVX
  • VK_ACCESS_COMMAND_PROCESS_WRITE_BIT_NVX: VK_PIPELINE_STAGE_COMMAND_PROCESS_BIT_NVX

If a memory object does not have the VK_MEMORY_PROPERTY_HOST_COHERENT_BIT property, then vkFlushMappedMemoryRanges must be called in order to guarantee that writes to the memory object from the host are made visible to the VK_ACCESS_HOST_WRITE_BIT access type, where they can be further made available to the device by synchronization commands. Similarly, vkInvalidateMappedMemoryRanges must be called to guarantee that writes which are visible to the VK_ACCESS_HOST_READ_BIT access type are made visible to host operations.

If the memory object does have the VK_MEMORY_PROPERTY_HOST_COHERENT_BIT property flag, writes to the memory object from the host are automatically made visible to the VK_ACCESS_HOST_WRITE_BIT access type. Similarly, writes made visible to the VK_ACCESS_HOST_READ_BIT access type are automatically made visible to the host.
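For illustration, the following non-normative sketch flushes a host-mapped range of non-host-coherent memory after the host has written through the mapping, so that the writes become visible to the VK_ACCESS_HOST_WRITE_BIT access type. The handles device and memory are assumptions for the example.

VkMappedMemoryRange range = {
    .sType = VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE,
    .pNext = NULL,
    .memory = memory,
    .offset = 0,
    .size = VK_WHOLE_SIZE,                  /* flush the entire mapped allocation */
};
vkFlushMappedMemoryRanges(device, 1, &range);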

Note

The vkQueueSubmit command automatically guarantees that host writes flushed to VK_ACCESS_HOST_WRITE_BIT are made available if they were flushed before the command executed, so in most cases an explicit memory barrier is not needed for this case. In the few circumstances where a submit does not occur between the host write and the device read access, writes can be made available by using an explicit memory barrier.

6.1.4. Framebuffer Region Dependencies

Pipeline stages that operate on, or with respect to, the framebuffer are collectively the framebuffer-space pipeline stages. These stages are:

  • VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT

  • VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT

  • VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT

  • VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT

For these pipeline stages, an execution or memory dependency from the first set of operations to the second set can either be a single framebuffer-global dependency, or split into multiple framebuffer-local dependencies. A dependency with non-framebuffer-space pipeline stages is neither framebuffer-global nor framebuffer-local.

A framebuffer region is a set of sample (x, y, layer, sample) coordinates that is a subset of the entire framebuffer.

A single framebuffer-local dependency guarantees that only for a single framebuffer region, the first set of operations and availability operations happen-before visibility operations and the second set of operations. No ordering guarantees are made between framebuffer regions for a framebuffer-local dependency.

A framebuffer-global dependency guarantees that the first set of operations for all framebuffer regions happens-before the second set of operations for any framebuffer region.

Note

Since fragment invocations are not specified to run in any particular groupings, the size of a framebuffer region is implementation-dependent, not known to the application, and must be assumed to be no larger than a single sample.

If a synchronization command includes a dependencyFlags parameter, and specifies the VK_DEPENDENCY_BY_REGION_BIT flag, then it defines framebuffer-local dependencies for the framebuffer-space pipeline stages in that synchronization command, for all framebuffer regions. If no dependencyFlags parameter is included, or the VK_DEPENDENCY_BY_REGION_BIT flag is not specified, then a framebuffer-global dependency is specified for those stages. The VK_DEPENDENCY_BY_REGION_BIT flag does not affect the dependencies between non-framebuffer-space pipeline stages, nor does it affect the dependencies between framebuffer-space and non-framebuffer-space pipeline stages.
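For illustration, the following non-normative sketch defines a framebuffer-local subpass dependency, using the VkSubpassDependency structure described in the Render Pass chapter, between a subpass that writes a color attachment and a later subpass that reads it as an input attachment in the fragment shader. The subpass indices are assumptions for the example.

VkSubpassDependency dependency = {
    .srcSubpass = 0,
    .dstSubpass = 1,
    .srcStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
    .dstStageMask = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
    .srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
    .dstAccessMask = VK_ACCESS_INPUT_ATTACHMENT_READ_BIT,
    .dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT,  /* framebuffer-local */
};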

Note

Framebuffer-local dependencies are more efficient on most architectures, particularly tile-based architectures, which can keep framebuffer regions entirely in on-chip registers and thus avoid external bandwidth across such a dependency. Including a framebuffer-global dependency in your rendering will usually force all implementations to flush data to memory, or to a higher level cache, breaking any potential locality optimizations.

6.1.5. View-Local Dependencies

In a render pass instance that has multiview enabled, dependencies can be either view-local or view-global.

A view-local dependency only includes operations from a single source view from the source subpass in the first synchronization scope, and only includes operations from a single destination view from the destination subpass in the second synchronization scope. A view-global dependency includes all views in the view mask of the source and destination subpasses in the corresponding synchronization scopes.

If a synchronization command includes a dependencyFlags parameter and specifies the VK_DEPENDENCY_VIEW_LOCAL_BIT_KHX flag, then it defines view-local dependencies for that synchronization command, for all views. If no dependencyFlags parameter is included or the VK_DEPENDENCY_VIEW_LOCAL_BIT_KHX flag is not specified, then a view-global dependency is specified.

6.1.6. Device-Local Dependencies

Dependencies can be either device-local or non-device-local. A device-local dependency acts as multiple separate dependencies, one for each physical device that executes the synchronization command, where each dependency only includes operations from that physical device in both synchronization scopes. A non-device-local dependency is a single dependency where both synchronization scopes include operations from all physical devices that participate in the synchronization command. For subpass dependencies, all physical devices in the VkDeviceGroupRenderPassBeginInfoKHX::deviceMask participate in the dependency, and for pipeline barriers all physical devices that are set in the command buffer’s current device mask participate in the dependency.

If a synchronization command includes a dependencyFlags parameter and specifies the VK_DEPENDENCY_DEVICE_GROUP_BIT_KHX flag, then it defines a non-device-local dependency for that synchronization command. If no dependencyFlags parameter is included or the VK_DEPENDENCY_DEVICE_GROUP_BIT_KHX flag is not specified, then it defines device-local dependencies for that synchronization command, for all participating physical devices.

Semaphore and event dependencies are device-local and only execute on the one physical device that performs the dependency.

6.2. Implicit Synchronization Guarantees

A small number of implicit ordering guarantees are provided by Vulkan, ensuring that the order in which commands are submitted is meaningful, and avoiding unnecessary complexity in common operations.

Submission order is a fundamental ordering in Vulkan, giving meaning to the order in which action and synchronization commands are recorded and submitted to a single queue. Explicit and implicit ordering guarantees between commands in Vulkan all work on the premise that this ordering is meaningful.

Submission order for any given set of commands is based on the order in which they were recorded to command buffers and then submitted. This order is determined as follows:

  1. The initial order is determined by the order in which vkQueueSubmit commands are executed on the host, for a single queue, from first to last.

  2. The order in which VkSubmitInfo structures are specified in the pSubmits parameter of vkQueueSubmit, from lowest index to highest.

  3. The order in which command buffers are specified in the pCommandBuffers member of VkSubmitInfo, from lowest index to highest.

  4. The order in which commands were recorded to a command buffer on the host, from first to last:

    • For commands recorded outside a render pass, this includes all other commands recorded outside a render pass, including vkCmdBeginRenderPass and vkCmdEndRenderPass commands; it does not directly include commands inside a render pass.

    • For commands recorded inside a render pass, this includes all other commands recorded inside the same subpass, including the vkCmdBeginRenderPass and vkCmdEndRenderPass commands that delimit the same render pass instance; it does not include commands recorded to other subpasses.

Action and synchronization commands recorded to a command buffer execute the VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT pipeline stage in submission order - forming an implicit execution dependency between this stage in each command.

State commands do not execute any operations on the device, instead they set the state of the command buffer when they execute on the host, in the order that they are recorded. Action commands consume the current state of the command buffer when they are recorded, and will execute state changes on the device as required to match the recorded state.

Execution of pipeline stages within a given command also has a loose ordering, dependent only on a single command.

6.3. Fences

Fences are a synchronization primitive that can be used to insert a dependency from a queue to the host. Fences have two states - signaled and unsignaled. A fence can be signaled as part of the execution of a queue submission command. Fences can be unsignaled on the host with vkResetFences. Fences can be waited on by the host with the vkWaitForFences command, and the current state can be queried with vkGetFenceStatus.

Fences are represented by VkFence handles:

VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkFence)

To create a fence, call:

VkResult vkCreateFence(
    VkDevice                                    device,
    const VkFenceCreateInfo*                    pCreateInfo,
    const VkAllocationCallbacks*                pAllocator,
    VkFence*                                    pFence);
  • device is the logical device that creates the fence.

  • pCreateInfo is a pointer to an instance of the VkFenceCreateInfo structure which contains information about how the fence is to be created.

  • pAllocator controls host memory allocation as described in the Memory Allocation chapter.

  • pFence points to a handle in which the resulting fence object is returned.

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • pCreateInfo must be a pointer to a valid VkFenceCreateInfo structure

  • If pAllocator is not NULL, pAllocator must be a pointer to a valid VkAllocationCallbacks structure

  • pFence must be a pointer to a VkFence handle

Return Codes
Success
  • VK_SUCCESS

Failure
  • VK_ERROR_OUT_OF_HOST_MEMORY

  • VK_ERROR_OUT_OF_DEVICE_MEMORY

The VkFenceCreateInfo structure is defined as:

typedef struct VkFenceCreateInfo {
    VkStructureType       sType;
    const void*           pNext;
    VkFenceCreateFlags    flags;
} VkFenceCreateInfo;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • flags defines the initial state and behavior of the fence. Bits which can be set include:

    typedef enum VkFenceCreateFlagBits {
        VK_FENCE_CREATE_SIGNALED_BIT = 0x00000001,
    } VkFenceCreateFlagBits;

    If flags contains VK_FENCE_CREATE_SIGNALED_BIT then the fence object is created in the signaled state; otherwise it is created in the unsignaled state.

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_FENCE_CREATE_INFO

  • pNext must be NULL

  • flags must be a valid combination of VkFenceCreateFlagBits values
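For illustration, the following non-normative sketch creates a fence in the signaled state, which can simplify a wait-then-reset loop on its first iteration. The handle device is assumed to be a valid VkDevice.

VkFenceCreateInfo fenceCreateInfo = {
    .sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO,
    .pNext = NULL,
    .flags = VK_FENCE_CREATE_SIGNALED_BIT,  /* created already signaled */
};
VkFence fence;
VkResult result = vkCreateFence(device, &fenceCreateInfo, NULL, &fence);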

To destroy a fence, call:

void vkDestroyFence(
    VkDevice                                    device,
    VkFence                                     fence,
    const VkAllocationCallbacks*                pAllocator);
  • device is the logical device that destroys the fence.

  • fence is the handle of the fence to destroy.

  • pAllocator controls host memory allocation as described in the Memory Allocation chapter.

Valid Usage
  • All queue submission commands that refer to fence must have completed execution

  • If VkAllocationCallbacks were provided when fence was created, a compatible set of callbacks must be provided here

  • If no VkAllocationCallbacks were provided when fence was created, pAllocator must be NULL

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • If fence is not VK_NULL_HANDLE, fence must be a valid VkFence handle

  • If pAllocator is not NULL, pAllocator must be a pointer to a valid VkAllocationCallbacks structure

  • If fence is a valid handle, it must have been created, allocated, or retrieved from device

Host Synchronization
  • Host access to fence must be externally synchronized

To query the status of a fence from the host, call:

VkResult vkGetFenceStatus(
    VkDevice                                    device,
    VkFence                                     fence);
  • device is the logical device that owns the fence.

  • fence is the handle of the fence to query.

Upon success, vkGetFenceStatus returns the status of the fence object, with the following return codes:

Table 5. Fence Object Status Codes (status: meaning)
  • VK_SUCCESS: The fence specified by fence is signaled.
  • VK_NOT_READY: The fence specified by fence is unsignaled.
  • VK_DEVICE_LOST: The device has been lost. See Lost Device.

If a queue submission command is pending execution, then the value returned by this command may immediately be out of date.

If the device has been lost (see Lost Device), vkGetFenceStatus may return any of the above status codes. If the device has been lost and vkGetFenceStatus is called repeatedly, it will eventually return either VK_SUCCESS or VK_DEVICE_LOST.

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • fence must be a valid VkFence handle

  • fence must have been created, allocated, or retrieved from device

Return Codes
Success
  • VK_SUCCESS

  • VK_NOT_READY

Failure
  • VK_ERROR_OUT_OF_HOST_MEMORY

  • VK_ERROR_OUT_OF_DEVICE_MEMORY

  • VK_ERROR_DEVICE_LOST

To set the state of fences to unsignaled from the host, call:

VkResult vkResetFences(
    VkDevice                                    device,
    uint32_t                                    fenceCount,
    const VkFence*                              pFences);
  • device is the logical device that owns the fences.

  • fenceCount is the number of fences to reset.

  • pFences is a pointer to an array of fence handles to reset.

When vkResetFences is executed on the host, it defines a fence unsignal operation for each fence, which resets the fence to the unsignaled state.

If any member of pFences is already in the unsignaled state when vkResetFences is executed, then vkResetFences has no effect on that fence.

Valid Usage
  • Any given element of pFences must not currently be associated with any queue command that has not yet completed execution on that queue

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • pFences must be a pointer to an array of fenceCount valid VkFence handles

  • fenceCount must be greater than 0

  • Each element of pFences must have been created, allocated, or retrieved from device

Host Synchronization
  • Host access to each member of pFences must be externally synchronized

Return Codes
Success
  • VK_SUCCESS

Failure
  • VK_ERROR_OUT_OF_HOST_MEMORY

  • VK_ERROR_OUT_OF_DEVICE_MEMORY
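
Note

For example, two reusable fences can be returned to the unsignaled state with a single call, as in the following sketch (assuming a valid device and two fences that are not associated with pending queue commands; error handling is omitted):

#include <vulkan/vulkan.h>

/* Return two reusable fences to the unsignaled state with one call.
 * Fences that are already unsignaled are unaffected. */
void resetFramePairFences(VkDevice device, VkFence fenceA, VkFence fenceB)
{
    const VkFence fences[2] = { fenceA, fenceB };
    /* VK_SUCCESS is expected unless host or device memory is exhausted. */
    (void)vkResetFences(device, 2, fences);
}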

When a fence is submitted to a queue as part of a queue submission command, it defines a memory dependency on the batches that were submitted as part of that command, and defines a fence signal operation which sets the fence to the signaled state.

The first synchronization scope includes every batch submitted in the same queue submission command. Fence signal operations that are defined by vkQueueSubmit additionally include in the first synchronization scope all previous queue submissions to the same queue via vkQueueSubmit.

The second synchronization scope only includes the fence signal operation.

The first access scope includes all memory access performed by the device.

The second access scope is empty.

To wait for one or more fences to enter the signaled state on the host, call:

VkResult vkWaitForFences(
    VkDevice                                    device,
    uint32_t                                    fenceCount,
    const VkFence*                              pFences,
    VkBool32                                    waitAll,
    uint64_t                                    timeout);
  • device is the logical device that owns the fences.

  • fenceCount is the number of fences to wait on.

  • pFences is a pointer to an array of fenceCount fence handles.

  • waitAll is the condition that must be satisfied to successfully unblock the wait. If waitAll is VK_TRUE, then the condition is that all fences in pFences are signaled. Otherwise, the condition is that at least one fence in pFences is signaled.

  • timeout is the timeout period in units of nanoseconds. timeout is adjusted to the closest value allowed by the implementation-dependent timeout accuracy, which may be substantially longer than one nanosecond, and may be longer than the requested period.

If the condition is satisfied when vkWaitForFences is called, then vkWaitForFences returns immediately. If the condition is not satisfied at the time vkWaitForFences is called, then vkWaitForFences will block and wait up to timeout nanoseconds for the condition to become satisfied.

If timeout is zero, then vkWaitForFences does not wait, but simply returns the current state of the fences. VK_TIMEOUT will be returned in this case if the condition is not satisfied, even though no actual wait was performed.

If the specified timeout period expires before the condition is satisfied, vkWaitForFences returns VK_TIMEOUT. If the condition is satisfied before timeout nanoseconds has expired, vkWaitForFences returns VK_SUCCESS.

If device loss occurs (see Lost Device) before the timeout has expired, vkWaitForFences must return in finite time with either VK_SUCCESS or VK_DEVICE_LOST.

Note

While we guarantee that vkWaitForFences must return in finite time, no guarantees are made that it returns immediately upon device loss. However, the client can reasonably expect that the delay will be on the order of seconds and that calling vkWaitForFences will not result in a permanently (or seemingly permanently) dead process.

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • pFences must be a pointer to an array of fenceCount valid VkFence handles

  • fenceCount must be greater than 0

  • Each element of pFences must have been created, allocated, or retrieved from device

Return Codes
Success
  • VK_SUCCESS

  • VK_TIMEOUT

Failure
  • VK_ERROR_OUT_OF_HOST_MEMORY

  • VK_ERROR_OUT_OF_DEVICE_MEMORY

  • VK_ERROR_DEVICE_LOST
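
Note

The commands above combine into the common per-frame pattern of submitting a batch guarded by a fence and then waiting for that fence before reusing the frame's resources. A minimal sketch, assuming a valid device, queue, populated VkSubmitInfo and fence, and treating any result other than VK_SUCCESS or VK_TIMEOUT as an error:

#include <vulkan/vulkan.h>

/* Submit one batch guarded by a fence, then block until the fence signals.
 * Retries the wait while vkWaitForFences reports VK_TIMEOUT. */
VkResult submitAndWaitForFence(VkDevice device, VkQueue queue,
                               const VkSubmitInfo* pSubmit, VkFence fence)
{
    VkResult result = vkQueueSubmit(queue, 1, pSubmit, fence);
    if (result != VK_SUCCESS)
        return result;

    const uint64_t oneSecondInNs = 1000000000ull;
    do {
        /* waitAll is VK_TRUE; with a single fence the value is irrelevant. */
        result = vkWaitForFences(device, 1, &fence, VK_TRUE, oneSecondInNs);
    } while (result == VK_TIMEOUT);

    if (result == VK_SUCCESS)
        result = vkResetFences(device, 1, &fence); /* make the fence reusable */
    return result;
}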

An execution dependency is defined by waiting for a fence to become signaled, either via vkWaitForFences or by polling on vkGetFenceStatus.

The first synchronization scope includes only the fence signal operation.

The second synchronization scope includes the host operations of vkWaitForFences or vkGetFenceStatus indicating that the fence has become signaled.

Note

Signaling a fence and waiting on the host does not guarantee that the results of memory accesses will be visible to the host, as the access scope of a memory dependency defined by a fence only includes device access. A memory barrier or other memory dependency must be used to guarantee this. See the description of host access types for more information.

6.3.1. Alternate Methods to Signal Fences

Besides submitting a fence to a queue as part of a queue submission command, a fence may also be signaled when a particular event occurs on a device or display.

To create a fence that will be signaled when an event occurs on a device, call:

VkResult vkRegisterDeviceEventEXT(
    VkDevice                                    device,
    const VkDeviceEventInfoEXT*                 pDeviceEventInfo,
    const VkAllocationCallbacks*                pAllocator,
    VkFence*                                    pFence);
  • device is a logical device on which the event may occur.

  • pDeviceEventInfo is a pointer to an instance of the VkDeviceEventInfoEXT structure describing the event of interest to the application.

  • pAllocator controls host memory allocation as described in the Memory Allocation chapter.

  • pFence points to a handle in which the resulting fence object is returned.

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • pDeviceEventInfo must be a pointer to a valid VkDeviceEventInfoEXT structure

  • If pAllocator is not NULL, pAllocator must be a pointer to a valid VkAllocationCallbacks structure

  • pFence must be a pointer to a VkFence handle

Return Codes
Success
  • VK_SUCCESS

The VkDeviceEventInfoEXT structure is defined as:

typedef struct VkDeviceEventInfoEXT {
    VkStructureType         sType;
    const void*             pNext;
    VkDeviceEventTypeEXT    deviceEvent;
} VkDeviceEventInfoEXT;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • deviceEvent specifies when the fence will be signaled. Possible values are:

    typedef enum VkDeviceEventTypeEXT {
        VK_DEVICE_EVENT_TYPE_DISPLAY_HOTPLUG_EXT = 0,
    } VkDeviceEventTypeEXT;
    • VK_DEVICE_EVENT_TYPE_DISPLAY_HOTPLUG_EXT occurs when a display is plugged into or unplugged from the specified device. Applications can use this notification to determine when they need to re-enumerate the available displays on a device.

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_DEVICE_EVENT_INFO_EXT

  • pNext must be NULL

  • deviceEvent must be a valid VkDeviceEventTypeEXT value
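
Note

For instance, an application using the VK_EXT_display_control extension could create a fence that signals on display hotplug as sketched below. The extension command would normally be obtained through vkGetDeviceProcAddr; passing NULL for pAllocator is assumed here to select the implementation's default allocation behavior.

#include <vulkan/vulkan.h>

/* Create a fence that is signaled when a display is plugged into, or
 * unplugged from, the given device. */
VkResult createHotplugFence(VkDevice device, VkFence* pFence)
{
    VkDeviceEventInfoEXT eventInfo = {
        .sType = VK_STRUCTURE_TYPE_DEVICE_EVENT_INFO_EXT,
        .pNext = NULL,
        .deviceEvent = VK_DEVICE_EVENT_TYPE_DISPLAY_HOTPLUG_EXT,
    };
    /* NULL pAllocator: default allocator (assumption for this sketch). */
    return vkRegisterDeviceEventEXT(device, &eventInfo, NULL, pFence);
}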

To create a fence that will be signaled when an event occurs on a VkDisplayKHR object, call:

VkResult vkRegisterDisplayEventEXT(
    VkDevice                                    device,
    VkDisplayKHR                                display,
    const VkDisplayEventInfoEXT*                pDisplayEventInfo,
    const VkAllocationCallbacks*                pAllocator,
    VkFence*                                    pFence);
  • device is a logical device associated with display

  • display is the display on which the event may occur.

  • pDisplayEventInfo is a pointer to an instance of the VkDisplayEventInfoEXT structure describing the event of interest to the application.

  • pAllocator controls host memory allocation as described in the Memory Allocation chapter.

  • pFence points to a handle in which the resulting fence object is returned.

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • display must be a valid VkDisplayKHR handle

  • pDisplayEventInfo must be a pointer to a valid VkDisplayEventInfoEXT structure

  • If pAllocator is not NULL, pAllocator must be a pointer to a valid VkAllocationCallbacks structure

  • pFence must be a pointer to a VkFence handle

Return Codes
Success
  • VK_SUCCESS

The VkDisplayEventInfoEXT structure is defined as:

typedef struct VkDisplayEventInfoEXT {
    VkStructureType          sType;
    const void*              pNext;
    VkDisplayEventTypeEXT    displayEvent;
} VkDisplayEventInfoEXT;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • displayEvent specifies when the fence will be signaled. Possible values are:

    typedef enum VkDisplayEventTypeEXT {
        VK_DISPLAY_EVENT_TYPE_FIRST_PIXEL_OUT_EXT = 0,
    } VkDisplayEventTypeEXT;
    • VK_DISPLAY_EVENT_TYPE_FIRST_PIXEL_OUT_EXT occurs when the first pixel of the next display refresh cycle leaves the display engine for the display.

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_DISPLAY_EVENT_INFO_EXT

  • pNext must be NULL

  • displayEvent must be a valid VkDisplayEventTypeEXT value
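
Note

Similarly, a fence that signals at the start of the next refresh cycle of a particular display (useful, for example, for frame pacing) could be created as follows. This is a sketch assuming VK_EXT_display_control is enabled, a VkDisplayKHR handle has already been enumerated, and the extension command has been loaded via vkGetDeviceProcAddr.

#include <vulkan/vulkan.h>

/* Create a fence that signals when the first pixel of the next refresh
 * cycle leaves the display engine for the given display. */
VkResult createFirstPixelOutFence(VkDevice device, VkDisplayKHR display,
                                  VkFence* pFence)
{
    VkDisplayEventInfoEXT eventInfo = {
        .sType = VK_STRUCTURE_TYPE_DISPLAY_EVENT_INFO_EXT,
        .pNext = NULL,
        .displayEvent = VK_DISPLAY_EVENT_TYPE_FIRST_PIXEL_OUT_EXT,
    };
    return vkRegisterDisplayEventEXT(device, display, &eventInfo, NULL, pFence);
}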

6.4. Semaphores

Semaphores are a synchronization primitive that can be used to insert a dependency between batches submitted to queues. Semaphores have two states - signaled and unsignaled. The state of a semaphore can be signaled after execution of a batch of commands is completed. A batch can wait for a semaphore to become signaled before it begins execution, and the semaphore is also unsignaled before the batch begins execution.

Semaphores are represented by VkSemaphore handles:

VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkSemaphore)

To create a semaphore, call:

VkResult vkCreateSemaphore(
    VkDevice                                    device,
    const VkSemaphoreCreateInfo*                pCreateInfo,
    const VkAllocationCallbacks*                pAllocator,
    VkSemaphore*                                pSemaphore);
  • device is the logical device that creates the semaphore.

  • pCreateInfo is a pointer to an instance of the VkSemaphoreCreateInfo structure which contains information about how the semaphore is to be created.

  • pAllocator controls host memory allocation as described in the Memory Allocation chapter.

  • pSemaphore points to a handle in which the resulting semaphore object is returned.

When created, the semaphore is in the unsignaled state.

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • pCreateInfo must be a pointer to a valid VkSemaphoreCreateInfo structure

  • If pAllocator is not NULL, pAllocator must be a pointer to a valid VkAllocationCallbacks structure

  • pSemaphore must be a pointer to a VkSemaphore handle

Return Codes
Success
  • VK_SUCCESS

Failure
  • VK_ERROR_OUT_OF_HOST_MEMORY

  • VK_ERROR_OUT_OF_DEVICE_MEMORY

The VkSemaphoreCreateInfo structure is defined as:

typedef struct VkSemaphoreCreateInfo {
    VkStructureType           sType;
    const void*               pNext;
    VkSemaphoreCreateFlags    flags;
} VkSemaphoreCreateInfo;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • flags is reserved for future use.

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO

  • pNext must be NULL

  • flags must be 0
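
Note

A minimal sketch of semaphore creation with the default parameters described above (assuming a valid device; error handling omitted):

#include <vulkan/vulkan.h>

/* Create a semaphore; it starts in the unsignaled state. */
VkResult createSemaphore(VkDevice device, VkSemaphore* pSemaphore)
{
    VkSemaphoreCreateInfo createInfo = {
        .sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO,
        .pNext = NULL,
        .flags = 0,          /* reserved for future use, must be 0 */
    };
    return vkCreateSemaphore(device, &createInfo, NULL, pSemaphore);
}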

To create a semaphore object that can be exported to external handles, add the VkExportSemaphoreCreateInfoKHX structure to the pNext chain of the VkSemaphoreCreateInfo structure. The VkExportSemaphoreCreateInfoKHX structure is defined as:

typedef struct VkExportSemaphoreCreateInfoKHX {
    VkStructureType                          sType;
    const void*                              pNext;
    VkExternalSemaphoreHandleTypeFlagsKHX    handleTypes;
} VkExportSemaphoreCreateInfoKHX;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • handleTypes is a bitmask of VkExternalSemaphoreHandleTypeFlagBitsKHX specifying one or more semaphore handle types the application can export from the resulting semaphore. The application can request multiple handle types for the same semaphore.
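
Note

For example, a semaphore intended to be exported later as an opaque POSIX file descriptor could be created by chaining this structure as sketched below. The sType token is assumed to follow the usual VK_STRUCTURE_TYPE_* naming for this structure, and the relevant KHX external-semaphore extensions are assumed to be enabled.

#include <vulkan/vulkan.h>

/* Create a semaphore whose state can later be exported as an opaque fd. */
VkResult createExportableSemaphore(VkDevice device, VkSemaphore* pSemaphore)
{
    VkExportSemaphoreCreateInfoKHX exportInfo = {
        /* assumed sType token for VkExportSemaphoreCreateInfoKHX */
        .sType = VK_STRUCTURE_TYPE_EXPORT_SEMAPHORE_CREATE_INFO_KHX,
        .pNext = NULL,
        .handleTypes = VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_FD_BIT_KHX,
    };
    VkSemaphoreCreateInfo createInfo = {
        .sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO,
        .pNext = &exportInfo,     /* chain the export information */
        .flags = 0,
    };
    return vkCreateSemaphore(device, &createInfo, NULL, pSemaphore);
}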

Valid Usage
Valid Usage (Implicit)

To specify additional attributes of NT handles exported from a semaphore, add the VkExportSemaphoreWin32HandleInfoKHX structure to the pNext chain of the VkSemaphoreCreateInfo structure. The VkExportSemaphoreWin32HandleInfoKHX structure is defined as:

typedef struct VkExportSemaphoreWin32HandleInfoKHX {
    VkStructureType               sType;
    const void*                   pNext;
    const SECURITY_ATTRIBUTES*    pAttributes;
    DWORD                         dwAccess;
    LPCWSTR                       name;
} VkExportSemaphoreWin32HandleInfoKHX;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • pAttributes is a pointer to a Windows SECURITY_ATTRIBUTES structure specifying security attributes of the handle.

  • dwAccess is a DWORD specifying access rights of the handle.

  • name is a NULL-terminated UNICODE string to associate with the underlying synchronization primitive referenced by NT handles exported from the created semaphore.

If this structure is not present, or if pAttributes is set to NULL, default security descriptor values will be used, and child processes created by the application will not inherit the handle, as described in the MSDN documentation for “Synchronization Object Security and Access Rights”. Further, if the structure is not present, the access rights will be

DXGI_SHARED_RESOURCE_READ | DXGI_SHARED_RESOURCE_WRITE

for handles of the following types:

VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_WIN32_BIT_KHX

And

GENERIC_ALL

for handles of the following types:

VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_D3D12_FENCE_BIT_KHX

Valid Usage
  • If VkExportSemaphoreCreateInfoKHX::handleTypes does not include VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_WIN32_BIT_KHX or VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_D3D12_FENCE_BIT_KHX, VkExportSemaphoreWin32HandleInfoKHX must not be in the pNext chain of VkSemaphoreCreateInfo.

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_EXPORT_SEMAPHORE_WIN32_HANDLE_INFO_KHX

  • pNext must be NULL

  • If pAttributes is not NULL, pAttributes must be a pointer to a valid SECURITY_ATTRIBUTES value

To export a Windows handle representing the state of a semaphore, call:

VkResult vkGetSemaphoreWin32HandleKHX(
    VkDevice                                    device,
    VkSemaphore                                 semaphore,
    VkExternalSemaphoreHandleTypeFlagBitsKHX    handleType,
    HANDLE*                                     pHandle);
  • device is the logical device that created semaphore.

  • semaphore is the semaphore from which state will be exported.

  • handleType is the type of handle requested.

  • pHandle will return the Windows handle representing the semaphore state.

The properties of the handle returned depend on the value of handleType. See VkExternalSemaphoreHandleTypeFlagBitsKHX for a description of the properties of the defined external semaphore handle types.

For handle types defined as NT handles, the handles returned by vkGetSemaphoreWin32HandleKHX are owned by the application. To avoid leaking resources, the application must release ownership of them using the CloseHandle system call when they are no longer needed.

Valid Usage
  • handleType must have been included in VkExportSemaphoreCreateInfoKHX::handleTypes when semaphore’s current state was created.

  • If handleType is defined as an NT handle, vkGetSemaphoreWin32HandleKHX must be called no more than once for each valid unique combination of semaphore and handleType.

  • semaphore must not currently have its state replaced by imported semaphore state as described below in Importing Semaphore State unless that imported semaphore state’s handle type was included in VkExternalSemaphorePropertiesKHX::exportFromImportedHandleTypes.

  • If handleType refers to a handle type with temporary import semantics, as defined below in Importing Semaphore State, there must be no queue waiting on semaphore.

  • If handleType refers to a handle type with temporary import semantics, semaphore must be signaled, or have an associated semaphore signal operation pending execution.

  • handleType must be defined as an NT handle or a global share handle.

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • semaphore must be a valid VkSemaphore handle

  • handleType must be a valid VkExternalSemaphoreHandleTypeFlagBitsKHX value

  • pHandle must be a pointer to a HANDLE value

  • semaphore must have been created, allocated, or retrieved from device

Return Codes
Success
  • VK_SUCCESS

Failure
  • VK_ERROR_TOO_MANY_OBJECTS

  • VK_ERROR_OUT_OF_HOST_MEMORY

To export a POSIX file descriptor representing the state of a semaphore, call:

VkResult vkGetSemaphoreFdKHX(
    VkDevice                                    device,
    VkSemaphore                                 semaphore,
    VkExternalSemaphoreHandleTypeFlagBitsKHX    handleType,
    int*                                        pFd);
  • device is the logical device that created semaphore.

  • semaphore is the semaphore from which state will be exported.

  • handleType is the type of handle requested.

  • pFd will return the file descriptor representing the semaphore state.

The properties of the file descriptor returned depend on the value of handleType. See VkExternalSemaphoreHandleTypeFlagBitsKHX for a description of the properties of the defined external semaphore handle types.

Each call to vkGetSemaphoreFdKHX must create a new file descriptor and transfer ownership of it to the application. To avoid leaking resources, the application must release ownership of the file descriptor using the close system call when it is no longer needed, or by importing Vulkan semaphore state from it.

Valid Usage
  • handleType must have been included in VkExportSemaphoreCreateInfoKHX::handleTypes when semaphore’s current state was created.

  • semaphore must not currently have its state replaced by imported semaphore state as described below in Importing Semaphore State unless that imported semaphore state’s handle type was included in VkExternalSemaphorePropertiesKHX::exportFromImportedHandleTypes.

  • If handleType refers to a handle type with temporary import semantics, as defined below in Importing Semaphore State, there must be no queue waiting on semaphore.

  • If handleType refers to a handle type with temporary import semantics, semaphore must be signaled, or have an associated semaphore signal operation pending execution.

  • handleType must be defined as a POSIX file descriptor handle.

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • semaphore must be a valid VkSemaphore handle

  • handleType must be a valid VkExternalSemaphoreHandleTypeFlagBitsKHX value

  • pFd must be a pointer to an int value

  • semaphore must have been created, allocated, or retrieved from device

Return Codes
Success
  • VK_SUCCESS

Failure
  • VK_ERROR_TOO_MANY_OBJECTS

  • VK_ERROR_OUT_OF_HOST_MEMORY
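
Note

Building on the exportable semaphore created earlier, exporting its state as an opaque file descriptor is a single call, sketched here under the assumption that the KHX command has been loaded via vkGetDeviceProcAddr and that the returned fd will eventually be closed or consumed by an import:

#include <vulkan/vulkan.h>

/* Export the semaphore's state as an opaque POSIX file descriptor.
 * Ownership of the fd is transferred to the application. */
VkResult exportSemaphoreFd(VkDevice device, VkSemaphore semaphore, int* pFd)
{
    return vkGetSemaphoreFdKHX(device, semaphore,
                               VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_FD_BIT_KHX,
                               pFd);
}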

To destroy a semaphore, call:

void vkDestroySemaphore(
    VkDevice                                    device,
    VkSemaphore                                 semaphore,
    const VkAllocationCallbacks*                pAllocator);
  • device is the logical device that destroys the semaphore.

  • semaphore is the handle of the semaphore to destroy.

  • pAllocator controls host memory allocation as described in the Memory Allocation chapter.

Valid Usage
  • All submitted batches that refer to semaphore must have completed execution

  • If VkAllocationCallbacks were provided when semaphore was created, a compatible set of callbacks must be provided here

  • If no VkAllocationCallbacks were provided when semaphore was created, pAllocator must be NULL

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • If semaphore is not VK_NULL_HANDLE, semaphore must be a valid VkSemaphore handle

  • If pAllocator is not NULL, pAllocator must be a pointer to a valid VkAllocationCallbacks structure

  • If semaphore is a valid handle, it must have been created, allocated, or retrieved from device

Host Synchronization
  • Host access to semaphore must be externally synchronized

6.4.1. Semaphore Signaling

When a batch is submitted to a queue via a queue submission, and it includes semaphores to be signaled, it defines a memory dependency on the batch, and defines semaphore signal operations which set the semaphores to the signaled state.

The first synchronization scope includes every command submitted in the same batch. Semaphore signal operations that are defined by vkQueueSubmit additionally include all batches previously submitted to the same queue via vkQueueSubmit, including batches that are submitted in the same queue submission command, but at a lower index within the array of batches.

The second synchronization scope includes only the semaphore signal operation.

The first access scope includes all memory access performed by the device.

The second access scope is empty.

6.4.2. Semaphore Waiting & Unsignaling

When a batch is submitted to a queue via a queue submission, and it includes semaphores to be waited on, it defines a memory dependency between prior semaphore signal operations and the batch, and defines semaphore unsignal operations which set the semaphores to the unsignaled state.

The first synchronization scope includes all semaphore signal operations that operate on semaphores waited on in the same batch, and that happen-before the wait completes.

The second synchronization scope includes every command submitted in the same batch. In the case of vkQueueSubmit, the second synchronization scope is limited to operations on the pipeline stages determined by the destination stage mask specified by the corresponding element of pWaitDstStageMask. Also, in the case of vkQueueSubmit, the second synchronization scope additionally includes all batches subsequently submitted to the same queue via vkQueueSubmit, including batches that are submitted in the same queue submission command, but at a higher index within the array of batches.

The first access scope is empty.

The second access scope includes all memory access performed by the device.

The semaphore unsignal operation happens-after the first set of operations in the execution dependency, and happens-before the second set of operations in the execution dependency.

Note

Unlike fences or events, the act of waiting for a semaphore also unsignals that semaphore. If two operations are separately specified to wait for the same semaphore, and there are no other execution dependencies between those operations, behavior is undefined. An execution dependency must be present that guarantees that the semaphore unsignal operation for the first of those waits happens-before the semaphore is signaled again, and before the second unsignal operation. Semaphore waits and signals should thus occur in discrete 1:1 pairs.

Note

A common scenario for using pWaitDstStageMask with values other than VK_PIPELINE_STAGE_ALL_COMMANDS_BIT is when synchronizing a window system presentation operation against subsequent command buffers which render the next frame. In this case, a presentation image must not be overwritten until the presentation operation completes, but other pipeline stages can execute without waiting. A mask of VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT prevents subsequent color attachment writes from executing until the semaphore signals. Some implementations may be able to execute transfer operations and/or vertex processing work before the semaphore is signaled.

If an image layout transition needs to be performed on a presentable image before it is used in a framebuffer, that can be performed as the first operation submitted to the queue after acquiring the image, and should not prevent other work from overlapping with the presentation operation. For example, a VkImageMemoryBarrier could use:

  • srcStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT

  • srcAccessMask = 0

  • dstStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT

  • dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_READ_BIT | VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT.

  • oldLayout = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR

  • newLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL

Alternatively, oldLayout can be VK_IMAGE_LAYOUT_UNDEFINED, if the image’s contents need not be preserved.

This barrier accomplishes a dependency chain between previous presentation operations and subsequent color attachment output operations, with the layout transition performed in between, and does not introduce a dependency between previous work and any vertex processing stages. More precisely, the semaphore signals after the presentation operation completes, the semaphore wait stalls the VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT stage, and there is a dependency from that same stage to itself with the layout transition performed in between.
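
Note

A sketch of recording the barrier described above, assuming a command buffer in the recording state and the acquired presentable image; no queue family ownership transfer is performed, so the queue family indices are ignored:

#include <vulkan/vulkan.h>

/* Transition an acquired presentable image to COLOR_ATTACHMENT_OPTIMAL
 * without stalling stages other than color attachment output. */
void transitionAcquiredImage(VkCommandBuffer commandBuffer, VkImage image)
{
    VkImageMemoryBarrier barrier = {
        .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
        .pNext = NULL,
        .srcAccessMask = 0,
        .dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_READ_BIT |
                         VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
        /* Use VK_IMAGE_LAYOUT_UNDEFINED instead if the contents may be discarded. */
        .oldLayout = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR,
        .newLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
        .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .image = image,
        .subresourceRange = {
            .aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
            .baseMipLevel = 0,
            .levelCount = 1,
            .baseArrayLayer = 0,
            .layerCount = 1,
        },
    };

    vkCmdPipelineBarrier(commandBuffer,
                         VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT, /* srcStageMask */
                         VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT, /* dstStageMask */
                         0, 0, NULL, 0, NULL, 1, &barrier);
}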

6.4.3. Semaphore State Requirements For Wait Operations

Before waiting on a semaphore, the application must ensure the semaphore is in a valid state for a wait operation. Specifically, when a semaphore wait and unsignal operation is submitted to a queue:

  • The semaphore must be signaled, or have an associated semaphore signal operation that is pending execution.

  • There must be no other queue waiting on the same semaphore when the operation executes.

Generally, the implementation behavior will be undefined when applications fail to ensure these requirements are met. However, to avoid requiring Vulkan instances that import semaphore state to trust the source of that state, the side effects of violating the requirements are better defined in certain cases. When the semaphore that violates the requirements is using imported state at the time of the violation, the implementation must ensure side effects are limited to one or more of the following:

  • Returning the error code VK_ERROR_INITIALIZATION_FAILED from the command which resulted in the violation.

  • Allowing the logical device on which the violation occurred to enter an undefined state immediately or in the future and returning the error code VK_ERROR_DEVICE_LOST from any subsequent command, including the one causing the violation, after the device enters such a state.

  • Continuing execution of the violating command or operation as if the semaphore wait completed successfully after an implementation-dependent timeout. In this case, the state of the semaphore becomes undefined. The semaphore must be destroyed or have its state replaced by an import operation to again meet valid usage requirements.

6.4.4. Importing Semaphore State

Applications may import semaphore state into an existing semaphore using an external semaphore handle. Doing so releases any existing state as if vkDestroySemaphore had been called on the target semaphore. As such, similar usage restrictions to those applied to vkDestroySemaphore are applied to any command that imports semaphore state.

The effects of the import operation will be either temporary or permanent, depending on the type of handle used to perform the import. If the import is temporary, the implementation must restore the semaphore to its prior permanent state when executing the next semaphore signal operation. If the import is permanent, the signaled status of all semaphores sharing the same semaphore state must be equivalent. Semaphore signaling and waiting operations performed on any semaphore in the set of semaphores sharing semaphore state must behave the same as if the set were a single semaphore.

Passing a semaphore to vkAcquireNextImageKHR is equivalent to temporarily importing semaphore state to that semaphore. When a semaphore is using imported state, its VkExportSemaphoreCreateInfoKHX::handleTypes value is that specified when creating the semaphore from which the state was exported, rather than that specified when creating the semaphore.

Note

Because the exportable handle types of an imported semaphore correspond to its current imported state, and vkAcquireNextImageKHR behaves the same as a temporary import operation for which the source semaphore is opaque to the application, applications have no way of determining whether any external handle types can be exported from a semaphore in this state. Therefore, applications must not attempt to export external handles from semaphores using temporarily imported state from vkAcquireNextImageKHR.

When importing semaphore state, it is the responsibility of the application to ensure the external handles meet all valid usage requirements. However, implementations must perform sufficient validation of external handles to ensure that the operation results in valid semaphore state which will not cause program termination, device loss, queue stalls, or corruption of other resources when used as allowed according to its import parameters, and excepting those side effects allowed for violations of the valid semaphore state for wait operations rules. If the external handle provided does not meet these requirements, the implementation must fail the semaphore state import operation with the error code VK_ERROR_INVALID_EXTERNAL_HANDLE_KHX.

To import semaphore state from a Windows handle, call:

VkResult vkImportSemaphoreWin32HandleKHX(
    VkDevice                                    device,
    const VkImportSemaphoreWin32HandleInfoKHX*  pImportSemaphoreWin32HandleInfo);
  • device is the logical device that created the semaphore.

  • pImportSemaphoreWin32HandleInfo points to a VkImportSemaphoreWin32HandleInfoKHX structure specifying the semaphore and import parameters.

Importing semaphore state from Windows handles does not transfer ownership of the handle to the Vulkan implementation. For handle types defined as NT handles, the application must release ownership using the CloseHandle system call when the handle is no longer needed.

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • pImportSemaphoreWin32HandleInfo must be a pointer to a valid VkImportSemaphoreWin32HandleInfoKHX structure

Return Codes
Success
  • VK_SUCCESS

Failure
  • VK_ERROR_OUT_OF_HOST_MEMORY

  • VK_ERROR_INVALID_EXTERNAL_HANDLE_KHX

The VkImportSemaphoreWin32HandleInfoKHX structure is defined as:

typedef struct VkImportSemaphoreWin32HandleInfoKHX {
    VkStructureType                          sType;
    const void*                              pNext;
    VkSemaphore                              semaphore;
    VkExternalSemaphoreHandleTypeFlagsKHX    handleType;
    HANDLE                                   handle;
} VkImportSemaphoreWin32HandleInfoKHX;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • semaphore is the semaphore into which the state will be imported.

  • handleType specifies the type of handle.

  • handle is the external handle to import.

The handle types supported by handleType are:

Table 6. Handle Type Permanence for VkImportSemaphoreWin32HandleInfoKHX
Handle Type                                                    Permanence
VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_WIN32_BIT_KHX         Permanent
VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_WIN32_KMT_BIT_KHX     Permanent
VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_D3D12_FENCE_BIT_KHX          Permanent

Valid Usage
Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_IMPORT_SEMAPHORE_WIN32_HANDLE_INFO_KHX

  • pNext must be NULL

  • semaphore must be a valid VkSemaphore handle

  • handleType must be a valid combination of VkExternalSemaphoreHandleTypeFlagBitsKHX values

  • handleType must not be 0

To import semaphore state from a POSIX file descriptor, call:

VkResult vkImportSemaphoreFdKHX(
    VkDevice                                    device,
    const VkImportSemaphoreFdInfoKHX*           pImportSemaphoreFdInfo);
  • device is the logical device that created the semaphore.

  • pImportSemaphoreFdInfo points to a VkImportSemaphoreFdInfoKHX structure specifying the semaphore and import parameters.

Importing semaphore state from a file descriptor transfers ownership of the file descriptor from the application to the Vulkan implementation. The application must not perform any operations on the file descriptor after a successful import.

Valid Usage
  • semaphore must not be associated with any queue command that has not yet completed execution on that queue

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • pImportSemaphoreFdInfo must be a pointer to a valid VkImportSemaphoreFdInfoKHX structure

Return Codes
Success
  • VK_SUCCESS

Failure
  • VK_ERROR_OUT_OF_HOST_MEMORY

  • VK_ERROR_INVALID_EXTERNAL_HANDLE_KHX

The VkImportSemaphoreFdInfoKHX structure is defined as:

typedef struct VkImportSemaphoreFdInfoKHX {
    VkStructureType                             sType;
    const void*                                 pNext;
    VkSemaphore                                 semaphore;
    VkExternalSemaphoreHandleTypeFlagBitsKHX    handleType;
    int                                         fd;
} VkImportSemaphoreFdInfoKHX;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • semaphore is the semaphore into which the state will be imported.

  • handleType specifies the type of fd.

  • fd is the external handle to import.

The handle types supported by handleType are:

Table 7. Handle Type Permanence for VkImportSemaphoreFdInfoKHX
Handle Type                                             Permanence
VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_FD_BIT_KHX     Permanent
VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_FENCE_FD_BIT_KHX      Temporary

Valid Usage
Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_IMPORT_SEMAPHORE_FD_INFO_KHX

  • pNext must be NULL

  • semaphore must be a valid VkSemaphore handle

  • handleType must be a valid VkExternalSemaphoreHandleTypeFlagBitsKHX value
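
Note

The consuming side of the opaque file descriptor export shown earlier might import it as sketched below, assuming the fd was obtained from vkGetSemaphoreFdKHX, possibly in another process; on success the implementation takes ownership of the fd.

#include <vulkan/vulkan.h>

/* Replace the semaphore's state with state imported from an opaque fd.
 * On success the implementation owns fd; do not close it afterwards. */
VkResult importSemaphoreFd(VkDevice device, VkSemaphore semaphore, int fd)
{
    VkImportSemaphoreFdInfoKHX importInfo = {
        .sType = VK_STRUCTURE_TYPE_IMPORT_SEMAPHORE_FD_INFO_KHX,
        .pNext = NULL,
        .semaphore = semaphore,
        .handleType = VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_FD_BIT_KHX,
        .fd = fd,
    };
    return vkImportSemaphoreFdKHX(device, &importInfo);
}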

6.5. Events

Events are a synchronization primitive that can be used to insert a fine-grained dependency between commands submitted to the same queue, or between the host and a queue. Events have two states - signaled and unsignaled. An application can signal an event, or unsignal it, on either the host or the device. A device can wait for an event to become signaled before executing further operations. No command exists to wait for an event to become signaled on the host, but the current state of an event can be queried.

Events are represented by VkEvent handles:

VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkEvent)

To create an event, call:

VkResult vkCreateEvent(
    VkDevice                                    device,
    const VkEventCreateInfo*                    pCreateInfo,
    const VkAllocationCallbacks*                pAllocator,
    VkEvent*                                    pEvent);
  • device is the logical device that creates the event.

  • pCreateInfo is a pointer to an instance of the VkEventCreateInfo structure which contains information about how the event is to be created.

  • pAllocator controls host memory allocation as described in the Memory Allocation chapter.

  • pEvent points to a handle in which the resulting event object is returned.

When created, the event object is in the unsignaled state.

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • pCreateInfo must be a pointer to a valid VkEventCreateInfo structure

  • If pAllocator is not NULL, pAllocator must be a pointer to a valid VkAllocationCallbacks structure

  • pEvent must be a pointer to a VkEvent handle

Return Codes
Success
  • VK_SUCCESS

Failure
  • VK_ERROR_OUT_OF_HOST_MEMORY

  • VK_ERROR_OUT_OF_DEVICE_MEMORY

The VkEventCreateInfo structure is defined as:

typedef struct VkEventCreateInfo {
    VkStructureType       sType;
    const void*           pNext;
    VkEventCreateFlags    flags;
} VkEventCreateInfo;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • flags is reserved for future use.

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_EVENT_CREATE_INFO

  • pNext must be NULL

  • flags must be 0

To destroy an event, call:

void vkDestroyEvent(
    VkDevice                                    device,
    VkEvent                                     event,
    const VkAllocationCallbacks*                pAllocator);
  • device is the logical device that destroys the event.

  • event is the handle of the event to destroy.

  • pAllocator controls host memory allocation as described in the Memory Allocation chapter.

Valid Usage
  • All submitted commands that refer to event must have completed execution

  • If VkAllocationCallbacks were provided when event was created, a compatible set of callbacks must be provided here

  • If no VkAllocationCallbacks were provided when event was created, pAllocator must be NULL

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • If event is not VK_NULL_HANDLE, event must be a valid VkEvent handle

  • If pAllocator is not NULL, pAllocator must be a pointer to a valid VkAllocationCallbacks structure

  • If event is a valid handle, it must have been created, allocated, or retrieved from device

Host Synchronization
  • Host access to event must be externally synchronized

To query the state of an event from the host, call:

VkResult vkGetEventStatus(
    VkDevice                                    device,
    VkEvent                                     event);
  • device is the logical device that owns the event.

  • event is the handle of the event to query.

Upon success, vkGetEventStatus returns the state of the event object with the following return codes:

Table 8. Event Object Status Codes
Status            Meaning
VK_EVENT_SET      The event specified by event is signaled.
VK_EVENT_RESET    The event specified by event is unsignaled.

If a vkCmdSetEvent or vkCmdResetEvent command is in a command buffer that is in the pending state, then the value returned by this command may immediately be out of date.

The state of an event can be updated by the host. The state of the event is immediately changed, and subsequent calls to vkGetEventStatus will return the new state. If an event is already in the requested state, then updating it to the same state has no effect.

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • event must be a valid VkEvent handle

  • event must have been created, allocated, or retrieved from device

Return Codes
Success
  • VK_EVENT_SET

  • VK_EVENT_RESET

Failure
  • VK_ERROR_OUT_OF_HOST_MEMORY

  • VK_ERROR_OUT_OF_DEVICE_MEMORY

  • VK_ERROR_DEVICE_LOST

To set the state of an event to signaled from the host, call:

VkResult vkSetEvent(
    VkDevice                                    device,
    VkEvent                                     event);
  • device is the logical device that owns the event.

  • event is the event to set.

When vkSetEvent is executed on the host, it defines an event signal operation which sets the event to the signaled state.

If event is already in the signaled state when vkSetEvent is executed, then vkSetEvent has no effect, and no event signal operation occurs.

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • event must be a valid VkEvent handle

  • event must have been created, allocated, or retrieved from device

Host Synchronization
  • Host access to event must be externally synchronized

Return Codes
Success
  • VK_SUCCESS

Failure
  • VK_ERROR_OUT_OF_HOST_MEMORY

  • VK_ERROR_OUT_OF_DEVICE_MEMORY

To set the state of an event to unsignaled from the host, call:

VkResult vkResetEvent(
    VkDevice                                    device,
    VkEvent                                     event);
  • device is the logical device that owns the event.

  • event is the event to reset.

When vkResetEvent is executed on the host, it defines an event unsignal operation which resets the event to the unsignaled state.

If event is already in the unsignaled state when vkResetEvent is executed, then vkResetEvent has no effect, and no event unsignal operation occurs.

Valid Usage
  • event must not be waited on by a vkCmdWaitEvents command that is currently executing

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • event must be a valid VkEvent handle

  • event must have been created, allocated, or retrieved from device

Host Synchronization
  • Host access to event must be externally synchronized

Return Codes
Success
  • VK_SUCCESS

Failure
  • VK_ERROR_OUT_OF_HOST_MEMORY

  • VK_ERROR_OUT_OF_DEVICE_MEMORY
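
Note

The host-side event commands compose into simple handshakes. The following sketch (assuming a valid device and event) signals an event from the host, observes the new state immediately, and then returns the event to the unsignaled state:

#include <vulkan/vulkan.h>

/* Signal an event from the host, observe its state, then unsignal it. */
void toggleEventOnHost(VkDevice device, VkEvent event)
{
    (void)vkSetEvent(device, event);          /* event becomes signaled */

    VkResult status = vkGetEventStatus(device, event);
    /* status is now VK_EVENT_SET; host-side changes are visible immediately. */
    (void)status;

    (void)vkResetEvent(device, event);        /* event becomes unsignaled */
}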

The state of an event can also be updated on the device by commands inserted in command buffers.

To set the state of an event to signaled from a device, call:

void vkCmdSetEvent(
    VkCommandBuffer                             commandBuffer,
    VkEvent                                     event,
    VkPipelineStageFlags                        stageMask);
  • commandBuffer is the command buffer into which the command is recorded.

  • event is the event that will be signaled.

  • stageMask specifies the source stage mask used to determine when the event is signaled.

When vkCmdSetEvent is submitted to a queue, it defines an execution dependency on commands that were submitted before it, and defines an event signal operation which sets the event to the signaled state.

The first synchronization scope includes every command previously submitted to the same queue, including those in the same command buffer and batch. The first synchronization scope is limited to operations on the pipeline stages determined by the source stage mask specified by stageMask.

The second synchronization scope includes only the event signal operation.

If event is already in the signaled state when vkCmdSetEvent is executed on the device, then vkCmdSetEvent has no effect, no event signal operation occurs, and no execution dependency is generated.

Valid Usage
  • stageMask must not include VK_PIPELINE_STAGE_HOST_BIT

  • If the geometry shaders feature is not enabled, stageMask must not contain VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT

  • If the tessellation shaders feature is not enabled, stageMask must not contain VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT or VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT

  • commandBuffer’s current device mask must include exactly one physical device.

Valid Usage (Implicit)
  • commandBuffer must be a valid VkCommandBuffer handle

  • event must be a valid VkEvent handle

  • stageMask must be a valid combination of VkPipelineStageFlagBits values

  • stageMask must not be 0

  • commandBuffer must be in the recording state

  • The VkCommandPool that commandBuffer was allocated from must support graphics, or compute operations

  • This command must only be called outside of a render pass instance

  • Both of commandBuffer, and event must have been created, allocated, or retrieved from the same VkDevice

Host Synchronization
  • Host access to commandBuffer must be externally synchronized

  • Host access to the VkCommandPool that commandBuffer was allocated from must be externally synchronized

Command Properties
Command Buffer Levels    Render Pass Scope    Supported Queue Types    Pipeline Type
Primary / Secondary      Outside              Graphics / Compute

To set the state of an event to unsignaled from a device, call:

void vkCmdResetEvent(
    VkCommandBuffer                             commandBuffer,
    VkEvent                                     event,
    VkPipelineStageFlags                        stageMask);
  • commandBuffer is the command buffer into which the command is recorded.

  • event is the event that will be unsignaled.

  • stageMask specifies the source stage mask used to determine when the event is unsignaled.

When vkCmdResetEvent is submitted to a queue, it defines an execution dependency on commands that were submitted before it, and defines an event unsignal operation which resets the event to the unsignaled state.

The first synchronization scope includes every command previously submitted to the same queue, including those in the same command buffer and batch. The first synchronization scope is limited to operations on the pipeline stages determined by the source stage mask specified by stageMask.

The second synchronization scope includes only the event unsignal operation.

If event is already in the unsignaled state when vkCmdResetEvent is executed on the device, then vkCmdResetEvent has no effect, no event unsignal operation occurs, and no execution dependency is generated.

Valid Usage
  • stageMask must not include VK_PIPELINE_STAGE_HOST_BIT

  • If the geometry shaders feature is not enabled, stageMask must not contain VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT

  • If the tessellation shaders feature is not enabled, stageMask must not contain VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT or VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT

  • When this command executes, event must not be waited on by a vkCmdWaitEvents command that is currently executing

  • commandBuffer’s current device mask must include exactly one physical device.

Valid Usage (Implicit)
  • commandBuffer must be a valid VkCommandBuffer handle

  • event must be a valid VkEvent handle

  • stageMask must be a valid combination of VkPipelineStageFlagBits values

  • stageMask must not be 0

  • commandBuffer must be in the recording state

  • The VkCommandPool that commandBuffer was allocated from must support graphics, or compute operations

  • This command must only be called outside of a render pass instance

  • Both of commandBuffer, and event must have been created, allocated, or retrieved from the same VkDevice

Host Synchronization
  • Host access to commandBuffer must be externally synchronized

  • Host access to the VkCommandPool that commandBuffer was allocated from must be externally synchronized

Command Properties
Command Buffer Levels    Render Pass Scope    Supported Queue Types    Pipeline Type
Primary / Secondary      Outside              Graphics / Compute

To wait for one or more events to enter the signaled state on a device, call:

void vkCmdWaitEvents(
    VkCommandBuffer                             commandBuffer,
    uint32_t                                    eventCount,
    const VkEvent*                              pEvents,
    VkPipelineStageFlags                        srcStageMask,
    VkPipelineStageFlags                        dstStageMask,
    uint32_t                                    memoryBarrierCount,
    const VkMemoryBarrier*                      pMemoryBarriers,
    uint32_t                                    bufferMemoryBarrierCount,
    const VkBufferMemoryBarrier*                pBufferMemoryBarriers,
    uint32_t                                    imageMemoryBarrierCount,
    const VkImageMemoryBarrier*                 pImageMemoryBarriers);
  • commandBuffer is the command buffer into which the command is recorded.

  • eventCount is the length of the pEvents array.

  • pEvents is an array of event object handles to wait on.

  • srcStageMask is the source stage mask

  • dstStageMask is the destination stage mask.

  • memoryBarrierCount is the length of the pMemoryBarriers array.

  • pMemoryBarriers is a pointer to an array of VkMemoryBarrier structures.

  • bufferMemoryBarrierCount is the length of the pBufferMemoryBarriers array.

  • pBufferMemoryBarriers is a pointer to an array of VkBufferMemoryBarrier structures.

  • imageMemoryBarrierCount is the length of the pImageMemoryBarriers array.

  • pImageMemoryBarriers is a pointer to an array of VkImageMemoryBarrier structures.

When vkCmdWaitEvents is submitted to a queue, it defines a memory dependency between prior event signal operations, and subsequent commands.

The first synchronization scope only includes event signal operations that operate on members of pEvents, and the operations that happened-before the event signal operations. Event signal operations performed by vkCmdSetEvent that were previously submitted to the same queue are included in the first synchronization scope, if the logically latest pipeline stage in their stageMask parameter is logically earlier than or equal to the logically latest pipeline stage in srcStageMask. Event signal operations performed by vkSetEvent are only included in the first synchronization scope if VK_PIPELINE_STAGE_HOST_BIT is included in srcStageMask.

The second synchronization scope includes commands subsequently submitted to the same queue, including those in the same command buffer and batch. The second synchronization scope is limited to operations on the pipeline stages determined by the destination stage mask specified by dstStageMask.

The first access scope is limited to access in the pipeline stages determined by the source stage mask specified by srcStageMask. Within that, the first access scope only includes the first access scopes defined by elements of the pMemoryBarriers, pBufferMemoryBarriers and pImageMemoryBarriers arrays, which each define a set of memory barriers. If no memory barriers are specified, then the first access scope includes no accesses.

The second access scope is limited to access in the pipeline stages determined by the destination stage mask specified by dstStageMask. Within that, the second access scope only includes the second access scopes defined by elements of the pMemoryBarriers, pBufferMemoryBarriers and pImageMemoryBarriers arrays, which each define a set of memory barriers. If no memory barriers are specified, then the second access scope includes no accesses.

Note

vkCmdWaitEvents is used with vkCmdSetEvent to define a memory dependency between two sets of action commands, roughly in the same way as pipeline barriers, but split into two commands such that work between the two may execute unhindered.

Note

Applications should be careful to avoid race conditions when using events. There is no direct ordering guarantee between a vkCmdResetEvent command and a vkCmdWaitEvents command submitted after it, so some other execution dependency must be included between these commands (e.g. a semaphore).
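
Note

A sketch of the split-barrier pattern described in the first note above, assuming a command buffer in the recording state, an event created for this purpose, and a buffer that is written by a transfer command and later read by a compute shader. The commands recorded between the set and the wait are free to overlap with the transfer.

#include <vulkan/vulkan.h>

/* Split barrier: signal after a transfer write, wait before a compute read,
 * letting unrelated work recorded in between execute unhindered. */
void recordSplitBarrier(VkCommandBuffer commandBuffer, VkEvent event, VkBuffer buffer)
{
    /* ... record the transfer command that writes 'buffer' here ... */

    vkCmdSetEvent(commandBuffer, event, VK_PIPELINE_STAGE_TRANSFER_BIT);

    /* ... record unrelated work that may overlap with the transfer ... */

    VkBufferMemoryBarrier barrier = {
        .sType = VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER,
        .pNext = NULL,
        .srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
        .dstAccessMask = VK_ACCESS_SHADER_READ_BIT,
        .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .buffer = buffer,
        .offset = 0,
        .size = VK_WHOLE_SIZE,
    };

    /* srcStageMask matches the stageMask used in vkCmdSetEvent above. */
    vkCmdWaitEvents(commandBuffer, 1, &event,
                    VK_PIPELINE_STAGE_TRANSFER_BIT,        /* srcStageMask */
                    VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,  /* dstStageMask */
                    0, NULL, 1, &barrier, 0, NULL);

    /* ... record the compute dispatch that reads 'buffer' here ... */
}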

Valid Usage
  • srcStageMask must be the bitwise OR of the stageMask parameter used in previous calls to vkCmdSetEvent with any of the members of pEvents and VK_PIPELINE_STAGE_HOST_BIT if any of the members of pEvents was set using vkSetEvent

  • If the geometry shaders feature is not enabled, srcStageMask must not contain VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT

  • If the geometry shaders feature is not enabled, dstStageMask must not contain VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT

  • If the tessellation shaders feature is not enabled, srcStageMask must not contain VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT or VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT

  • If the tessellation shaders feature is not enabled, dstStageMask must not contain VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT or VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT

  • If pEvents includes one or more events that will be signaled by vkSetEvent after commandBuffer has been submitted to a queue, then vkCmdWaitEvents must not be called inside a render pass instance

  • Any pipeline stage included in srcStageMask or dstStageMask must be supported by the capabilities of the queue family specified by the queueFamilyIndex member of the VkCommandPoolCreateInfo structure that was used to create the VkCommandPool that commandBuffer was allocated from, as specified in the table of supported pipeline stages.

  • Any given element of pMemoryBarriers, pBufferMemoryBarriers or pImageMemoryBarriers must not have any access flag included in its srcAccessMask member if that bit is not supported by any of the pipeline stages in srcStageMask, as specified in the table of supported access types.

  • Any given element of pMemoryBarriers, pBufferMemoryBarriers or pImageMemoryBarriers must not have any access flag included in its dstAccessMask member if that bit is not supported by any of the pipeline stages in dstStageMask, as specified in the table of supported access types.

  • commandBuffer’s current device mask must include exactly one physical device.

Valid Usage (Implicit)
  • commandBuffer must be a valid VkCommandBuffer handle

  • pEvents must be a pointer to an array of eventCount valid VkEvent handles

  • srcStageMask must be a valid combination of VkPipelineStageFlagBits values

  • srcStageMask must not be 0

  • dstStageMask must be a valid combination of VkPipelineStageFlagBits values

  • dstStageMask must not be 0

  • If memoryBarrierCount is not 0, pMemoryBarriers must be a pointer to an array of memoryBarrierCount valid VkMemoryBarrier structures

  • If bufferMemoryBarrierCount is not 0, pBufferMemoryBarriers must be a pointer to an array of bufferMemoryBarrierCount valid VkBufferMemoryBarrier structures

  • If imageMemoryBarrierCount is not 0, pImageMemoryBarriers must be a pointer to an array of imageMemoryBarrierCount valid VkImageMemoryBarrier structures

  • commandBuffer must be in the recording state

  • The VkCommandPool that commandBuffer was allocated from must support graphics, or compute operations

  • eventCount must be greater than 0

  • Both of commandBuffer, and the elements of pEvents must have been created, allocated, or retrieved from the same VkDevice

Host Synchronization
  • Host access to commandBuffer must be externally synchronized

  • Host access to the VkCommandPool that commandBuffer was allocated from must be externally synchronized

Command Properties
Command Buffer Levels    Render Pass Scope    Supported Queue Types    Pipeline Type
Primary / Secondary      Both                 Graphics / Compute

6.6. Pipeline Barriers

vkCmdPipelineBarrier is a synchronization command that inserts a dependency between commands submitted to the same queue, or between commands in the same subpass.

To record a pipeline barrier, call:

void vkCmdPipelineBarrier(
    VkCommandBuffer                             commandBuffer,
    VkPipelineStageFlags                        srcStageMask,
    VkPipelineStageFlags                        dstStageMask,
    VkDependencyFlags                           dependencyFlags,
    uint32_t                                    memoryBarrierCount,
    const VkMemoryBarrier*                      pMemoryBarriers,
    uint32_t                                    bufferMemoryBarrierCount,
    const VkBufferMemoryBarrier*                pBufferMemoryBarriers,
    uint32_t                                    imageMemoryBarrierCount,
    const VkImageMemoryBarrier*                 pImageMemoryBarriers);
  • commandBuffer is the command buffer into which the command is recorded.

  • srcStageMask defines a source stage mask.

  • dstStageMask defines a destination stage mask.

  • dependencyFlags is a bitmask of VkDependencyFlagBits. The bits that can be included in dependencyFlags are:

    typedef enum VkDependencyFlagBits {
        VK_DEPENDENCY_BY_REGION_BIT = 0x00000001,
        VK_DEPENDENCY_VIEW_LOCAL_BIT_KHX = 0x00000002,
        VK_DEPENDENCY_DEVICE_GROUP_BIT_KHX = 0x00000004,
    } VkDependencyFlagBits;
  • memoryBarrierCount is the length of the pMemoryBarriers array.

  • pMemoryBarriers is a pointer to an array of VkMemoryBarrier structures.

  • bufferMemoryBarrierCount is the length of the pBufferMemoryBarriers array.

  • pBufferMemoryBarriers is a pointer to an array of VkBufferMemoryBarrier structures.

  • imageMemoryBarrierCount is the length of the pImageMemoryBarriers array.

  • pImageMemoryBarriers is a pointer to an array of VkImageMemoryBarrier structures.

When vkCmdPipelineBarrier is submitted to a queue, it defines a memory dependency between commands that were submitted before it, and those submitted after it.

If vkCmdPipelineBarrier was recorded outside a render pass instance, the first synchronization scope includes every command submitted to the same queue before it, including those in the same command buffer and batch. If vkCmdPipelineBarrier was recorded inside a render pass instance, the first synchronization scope includes only commands submitted before it within the same subpass. In either case, the first synchronization scope is limited to operations on the pipeline stages determined by the source stage mask specified by srcStageMask.

If vkCmdPipelineBarrier was recorded outside a render pass instance, the second synchronization scope includes every command submitted to the same queue after it, including those in the same command buffer and batch. If vkCmdPipelineBarrier was recorded inside a render pass instance, the second synchronization scope includes only commands submitted after it within the same subpass. In either case, the second synchronization scope is limited to operations on the pipeline stages determined by the destination stage mask specified by dstStageMask.

The first access scope is limited to access in the pipeline stages determined by the source stage mask specified by srcStageMask. Within that, the first access scope only includes the first access scopes defined by elements of the pMemoryBarriers, pBufferMemoryBarriers and pImageMemoryBarriers arrays, which each define a set of memory barriers. If no memory barriers are specified, then the first access scope includes no accesses.

The second access scope is limited to access in the pipeline stages determined by the destination stage mask specified by dstStageMask. Within that, the second access scope only includes the second access scopes defined by elements of the pMemoryBarriers, pBufferMemoryBarriers and pImageMemoryBarriers arrays, which each define a set of memory barriers. If no memory barriers are specified, then the second access scope includes no accesses.

If dependencyFlags includes VK_DEPENDENCY_BY_REGION_BIT, then any dependency between framebuffer-space pipeline stages is framebuffer-local - otherwise it is framebuffer-global.
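
For illustration only, the following sketch records a global memory barrier that makes writes performed in the transfer stage available and visible to subsequent compute shader reads on the same queue; cmdBuf is assumed to be a VkCommandBuffer in the recording state (VkMemoryBarrier is described in the Global Memory Barriers section below):

VkMemoryBarrier barrier = {
    .sType = VK_STRUCTURE_TYPE_MEMORY_BARRIER,
    .pNext = NULL,
    .srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
    .dstAccessMask = VK_ACCESS_SHADER_READ_BIT,
};

/* First scopes: the transfer stage and its writes; second scopes: the compute
 * shader stage and its reads. */
vkCmdPipelineBarrier(
    cmdBuf,
    VK_PIPELINE_STAGE_TRANSFER_BIT,         /* srcStageMask */
    VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,   /* dstStageMask */
    0,                                      /* dependencyFlags */
    1, &barrier,                            /* one global memory barrier */
    0, NULL,                                /* no buffer memory barriers */
    0, NULL);                               /* no image memory barriers */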

Valid Usage
  • If the geometry shaders feature is not enabled, srcStageMask must not contain VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT

  • If the geometry shaders feature is not enabled, dstStageMask must not contain VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT

  • If the tessellation shaders feature is not enabled, srcStageMask must not contain VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT or VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT

  • If the tessellation shaders feature is not enabled, dstStageMask must not contain VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT or VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT

  • If vkCmdPipelineBarrier is called within a render pass instance, the render pass must have been created with a VkSubpassDependency instance in pDependencies that expresses a dependency from the current subpass to itself.

  • If vkCmdPipelineBarrier is called within a render pass instance, srcStageMask must contain a subset of the bit values in the srcStageMask member of that instance of VkSubpassDependency

  • If vkCmdPipelineBarrier is called within a render pass instance, dstStageMask must contain a subset of the bit values in the dstStageMask member of that instance of VkSubpassDependency

  • If vkCmdPipelineBarrier is called within a render pass instance, the srcAccessMask of any element of pMemoryBarriers or pImageMemoryBarriers must contain a subset of the bit values in the srcAccessMask member of that instance of VkSubpassDependency

  • If vkCmdPipelineBarrier is called within a render pass instance, the dstAccessMask of any element of pMemoryBarriers or pImageMemoryBarriers must contain a subset of the bit values in the dstAccessMask member of that instance of VkSubpassDependency

  • If vkCmdPipelineBarrier is called within a render pass instance, dependencyFlags must be equal to the dependencyFlags member of that instance of VkSubpassDependency

  • If vkCmdPipelineBarrier is called within a render pass instance, bufferMemoryBarrierCount must be 0

  • If vkCmdPipelineBarrier is called within a render pass instance, the image member of any element of pImageMemoryBarriers must be equal to one of the elements of pAttachments that the current framebuffer was created with, that is also referred to by one of the elements of the pColorAttachments, pResolveAttachments or pDepthStencilAttachment members of the VkSubpassDescription instance that the current subpass was created with

  • If vkCmdPipelineBarrier is called within a render pass instance, the oldLayout and newLayout members of any element of pImageMemoryBarriers must be equal to the layout member of an element of the pColorAttachments, pResolveAttachments or pDepthStencilAttachment members of the VkSubpassDescription instance that the current subpass was created with, that refers to the same image

  • If vkCmdPipelineBarrier is called within a render pass instance, the oldLayout and newLayout members of an element of pImageMemoryBarriers must be equal

  • If vkCmdPipelineBarrier is called within a render pass instance, the srcQueueFamilyIndex and dstQueueFamilyIndex members of any element of pImageMemoryBarriers must be VK_QUEUE_FAMILY_IGNORED

  • Any pipeline stage included in srcStageMask or dstStageMask must be supported by the capabilities of the queue family specified by the queueFamilyIndex member of the VkCommandPoolCreateInfo structure that was used to create the VkCommandPool that commandBuffer was allocated from, as specified in the table of supported pipeline stages.

  • Any given element of pMemoryBarriers, pBufferMemoryBarriers or pImageMemoryBarriers must not have any access flag included in its srcAccessMask member if that bit is not supported by any of the pipeline stages in srcStageMask, as specified in the table of supported access types.

  • Any given element of pMemoryBarriers, pBufferMemoryBarriers or pImageMemoryBarriers must not have any access flag included in its dstAccessMask member if that bit is not supported by any of the pipeline stages in dstStageMask, as specified in the table of supported access types.

  • If vkCmdPipelineBarrier is called outside of a render pass instance, dependencyFlags must not include VK_DEPENDENCY_VIEW_LOCAL_BIT_KHX

Valid Usage (Implicit)
  • commandBuffer must be a valid VkCommandBuffer handle

  • srcStageMask must be a valid combination of VkPipelineStageFlagBits values

  • srcStageMask must not be 0

  • dstStageMask must be a valid combination of VkPipelineStageFlagBits values

  • dstStageMask must not be 0

  • dependencyFlags must be a valid combination of VkDependencyFlagBits values

  • If memoryBarrierCount is not 0, pMemoryBarriers must be a pointer to an array of memoryBarrierCount valid VkMemoryBarrier structures

  • If bufferMemoryBarrierCount is not 0, pBufferMemoryBarriers must be a pointer to an array of bufferMemoryBarrierCount valid VkBufferMemoryBarrier structures

  • If imageMemoryBarrierCount is not 0, pImageMemoryBarriers must be a pointer to an array of imageMemoryBarrierCount valid VkImageMemoryBarrier structures

  • commandBuffer must be in the recording state

  • The VkCommandPool that commandBuffer was allocated from must support transfer, graphics, or compute operations

Host Synchronization
  • Host access to commandBuffer must be externally synchronized

  • Host access to the VkCommandPool that commandBuffer was allocated from must be externally synchronized

Command Properties
Command Buffer Levels    Render Pass Scope    Supported Queue Types          Pipeline Type
Primary, Secondary       Both                 Transfer, Graphics, Compute

6.6.1. Subpass Self-dependency

If vkCmdPipelineBarrier is called inside a render pass instance, the following restrictions apply. For a given subpass to allow a pipeline barrier, the render pass must declare a self-dependency from that subpass to itself. That is, there must exist a VkSubpassDependency in the subpass dependency list for the render pass with srcSubpass and dstSubpass equal to that subpass index. More than one self-dependency can be declared for each subpass. Self-dependencies must only include pipeline stage bits that are graphics stages. Self-dependencies must not have any earlier pipeline stages depend on any later pipeline stages (according to the order of graphics pipeline stages), unless all of the stages are framebuffer-space stages. If the source and destination stage masks both include framebuffer-space stages, then dependencyFlags must include VK_DEPENDENCY_BY_REGION_BIT. If the subpass has more than one view, then dependencyFlags must include VK_DEPENDENCY_VIEW_LOCAL_BIT_KHX.

A vkCmdPipelineBarrier command inside a render pass instance must be a subset of one of the self-dependencies of the subpass it is used in, meaning that the stage masks and access masks must each include only a subset of the bits of the corresponding mask in that self-dependency. If the self-dependency has VK_DEPENDENCY_BY_REGION_BIT or VK_DEPENDENCY_VIEW_LOCAL_BIT_KHX set, then so must the pipeline barrier. Pipeline barriers within a render pass instance can only be types VkMemoryBarrier or VkImageMemoryBarrier. If a VkImageMemoryBarrier is used, the image and image subresource range specified in the barrier must be a subset of one of the image views used by the framebuffer in the current subpass. Additionally, oldLayout must be equal to newLayout, and both the srcQueueFamilyIndex and dstQueueFamilyIndex must be VK_QUEUE_FAMILY_IGNORED.
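
As an informative sketch, a render pass that allows such barriers might declare the following self-dependency for subpass 0, permitting a by-region dependency between color attachment writes and subsequent input attachment reads within that subpass; because both stage masks contain only framebuffer-space stages, VK_DEPENDENCY_BY_REGION_BIT is required:

/* Self-dependency: subpass 0 depends on itself. */
VkSubpassDependency selfDependency = {
    .srcSubpass      = 0,
    .dstSubpass      = 0,
    .srcStageMask    = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
    .dstStageMask    = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
    .srcAccessMask   = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
    .dstAccessMask   = VK_ACCESS_INPUT_ATTACHMENT_READ_BIT,
    .dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT,
};

A vkCmdPipelineBarrier recorded inside subpass 0 would then use stage and access masks that are subsets of the masks above, with dependencyFlags equal to VK_DEPENDENCY_BY_REGION_BIT.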

6.7. Memory Barriers

Memory barriers are used to explicitly control access to buffer and image subresource ranges. Memory barriers are used to transfer ownership between queue families, change image layouts, and define availability and visibility operations. They explicitly define the access types and buffer and image subresource ranges that are included in the access scopes of a memory dependency that is created by a synchronization command that includes them.

6.7.1. Global Memory Barriers

Global memory barriers apply to memory accesses involving all memory objects that exist at the time the barrier is executed.

The VkMemoryBarrier structure is defined as:

typedef struct VkMemoryBarrier {
    VkStructureType    sType;
    const void*        pNext;
    VkAccessFlags      srcAccessMask;
    VkAccessFlags      dstAccessMask;
} VkMemoryBarrier;

The first access scope is limited to access types in the source access mask specified by srcAccessMask.

The second access scope is limited to access types in the destination access mask specified by dstAccessMask.

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_MEMORY_BARRIER

  • pNext must be NULL

  • srcAccessMask must be a valid combination of VkAccessFlagBits values

  • dstAccessMask must be a valid combination of VkAccessFlagBits values

6.7.2. Buffer Memory Barriers

Buffer memory barriers only apply to memory accesses involving a specific buffer range. That is, a memory dependency formed from a buffer memory barrier is scoped to access via the specified buffer range. Buffer memory barriers can also be used to define a queue family ownership transfer for the specified buffer range.

The VkBufferMemoryBarrier structure is defined as:

typedef struct VkBufferMemoryBarrier {
    VkStructureType    sType;
    const void*        pNext;
    VkAccessFlags      srcAccessMask;
    VkAccessFlags      dstAccessMask;
    uint32_t           srcQueueFamilyIndex;
    uint32_t           dstQueueFamilyIndex;
    VkBuffer           buffer;
    VkDeviceSize       offset;
    VkDeviceSize       size;
} VkBufferMemoryBarrier;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • srcAccessMask defines a source access mask.

  • dstAccessMask defines a destination access mask.

  • srcQueueFamilyIndex is the source queue family for a queue family ownership transfer.

  • dstQueueFamilyIndex is the destination queue family for a queue family ownership transfer.

  • buffer is a handle to the buffer whose backing memory is affected by the barrier.

  • offset is an offset in bytes into the backing memory for buffer; this is relative to the base offset as bound to the buffer (see vkBindBufferMemory).

  • size is a size in bytes of the affected area of backing memory for buffer, or VK_WHOLE_SIZE to use the range from offset to the end of the buffer.

The first access scope is limited to access to memory through the specified buffer range, via access types in the source access mask specified by srcAccessMask. If srcAccessMask includes VK_ACCESS_HOST_WRITE_BIT, memory writes performed by that access type are also made visible, as that access type is not performed through a resource.

The second access scope is limited to access to memory through the specified buffer range, via access types in the destination access mask specified by dstAccessMask. If dstAccessMask includes VK_ACCESS_HOST_WRITE_BIT or VK_ACCESS_HOST_READ_BIT, available memory writes are also made visible to accesses of those types, as those access types are not performed through a resource.

If srcQueueFamilyIndex is not equal to dstQueueFamilyIndex, and srcQueueFamilyIndex is equal to the current queue family, then the memory barrier defines a queue family release operation for the specified buffer range, and the second access scope includes no access, as if dstAccessMask was 0.

If dstQueueFamilyIndex is not equal to srcQueueFamilyIndex, and dstQueueFamilyIndex is equal to the current queue family, then the memory barrier defines a queue family acquire operation for the specified buffer range, and the first access scope includes no access, as if srcAccessMask was 0.

Valid Usage
  • offset must be less than the size of buffer

  • If size is not equal to VK_WHOLE_SIZE, size must be greater than 0

  • If size is not equal to VK_WHOLE_SIZE, size must be less than or equal to the size of buffer minus offset

  • If buffer was created with a sharing mode of VK_SHARING_MODE_CONCURRENT, at least one of srcQueueFamilyIndex and dstQueueFamilyIndex must be VK_QUEUE_FAMILY_IGNORED

  • If buffer was created with a sharing mode of VK_SHARING_MODE_CONCURRENT, and one of srcQueueFamilyIndex and dstQueueFamilyIndex is VK_QUEUE_FAMILY_IGNORED, the other must be VK_QUEUE_FAMILY_IGNORED or VK_QUEUE_FAMILY_EXTERNAL_KHX

  • If buffer was created with a sharing mode of VK_SHARING_MODE_EXCLUSIVE and srcQueueFamilyIndex is VK_QUEUE_FAMILY_IGNORED, dstQueueFamilyIndex must also be VK_QUEUE_FAMILY_IGNORED

  • If buffer was created with a sharing mode of VK_SHARING_MODE_EXCLUSIVE and srcQueueFamilyIndex is not VK_QUEUE_FAMILY_IGNORED, it must be a valid queue family or VK_QUEUE_FAMILY_EXTERNAL_KHX (see Queue Family Properties)

  • If buffer was created with a sharing mode of VK_SHARING_MODE_EXCLUSIVE and dstQueueFamilyIndex is not VK_QUEUE_FAMILY_IGNORED, it must be a valid queue family or VK_QUEUE_FAMILY_EXTERNAL_KHX (see Queue Family Properties)

  • If buffer was created with a sharing mode of VK_SHARING_MODE_EXCLUSIVE, and srcQueueFamilyIndex and dstQueueFamilyIndex are valid queue families, at least one of them must be the same as the family of the queue that will execute this barrier

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER

  • pNext must be NULL

  • srcAccessMask must be a valid combination of VkAccessFlagBits values

  • dstAccessMask must be a valid combination of VkAccessFlagBits values

  • buffer must be a valid VkBuffer handle
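
As an informative sketch, the following buffer memory barrier scopes a dependency to the first 65536 bytes of a buffer, making transfer writes to that range available and visible to subsequent vertex attribute fetches on the same queue; vertexBuffer and cmdBuf are assumed handles, and no queue family ownership transfer is performed:

VkBufferMemoryBarrier bufferBarrier = {
    .sType               = VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER,
    .pNext               = NULL,
    .srcAccessMask       = VK_ACCESS_TRANSFER_WRITE_BIT,
    .dstAccessMask       = VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT,
    .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,    /* no ownership transfer */
    .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .buffer              = vertexBuffer,
    .offset              = 0,
    .size                = 65536,
};

vkCmdPipelineBarrier(cmdBuf,
    VK_PIPELINE_STAGE_TRANSFER_BIT,
    VK_PIPELINE_STAGE_VERTEX_INPUT_BIT,
    0,
    0, NULL,
    1, &bufferBarrier,
    0, NULL);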

6.7.3. Image Memory Barriers

Image memory barriers only apply to memory accesses involving a specific image subresource range. That is, a memory dependency formed from an image memory barrier is scoped to access via the specified image subresource range. Image memory barriers can also be used to define image layout transitions or a queue family ownership transfer for the specified image subresource range.

The VkImageMemoryBarrier structure is defined as:

typedef struct VkImageMemoryBarrier {
    VkStructureType            sType;
    const void*                pNext;
    VkAccessFlags              srcAccessMask;
    VkAccessFlags              dstAccessMask;
    VkImageLayout              oldLayout;
    VkImageLayout              newLayout;
    uint32_t                   srcQueueFamilyIndex;
    uint32_t                   dstQueueFamilyIndex;
    VkImage                    image;
    VkImageSubresourceRange    subresourceRange;
} VkImageMemoryBarrier;

The first access scope is limited to access to memory through the specified image subresource range, via access types in the source access mask specified by srcAccessMask. If srcAccessMask includes VK_ACCESS_HOST_WRITE_BIT, memory writes performed by that access type are also made visible, as that access type is not performed through a resource.

The second access scope is limited to access to memory through the specified image subresource range, via access types in the destination access mask specified by dstAccessMask. If dstAccessMask includes VK_ACCESS_HOST_WRITE_BIT or VK_ACCESS_HOST_READ_BIT, available memory writes are also made visible to accesses of those types, as those access types are not performed through a resource.

If srcQueueFamilyIndex is not equal to dstQueueFamilyIndex, and srcQueueFamilyIndex is equal to the current queue family, then the memory barrier defines a queue family release operation for the specified image subresource range, and the second access scope includes no access, as if dstAccessMask was 0.

If dstQueueFamilyIndex is not equal to srcQueueFamilyIndex, and dstQueueFamilyIndex is equal to the current queue family, then the memory barrier defines a queue family acquire operation for the specified image subresource range, and the first access scope includes no access, as if srcAccessMask was 0.

If oldLayout is not equal to newLayout, then the memory barrier defines an image layout transition for the specified image subresource range.

Layout transitions that are performed via image memory barriers execute in their entirety in submission order, relative to other image layout transitions submitted to the same queue, including those performed by render passes. In effect there is an implicit execution dependency from each such layout transition to all layout transitions previously submitted to the same queue.

Valid Usage
  • oldLayout must be VK_IMAGE_LAYOUT_UNDEFINED or the current layout of the image subresources affected by the barrier

  • newLayout must not be VK_IMAGE_LAYOUT_UNDEFINED or VK_IMAGE_LAYOUT_PREINITIALIZED

  • If image was created with a sharing mode of VK_SHARING_MODE_CONCURRENT, at least one of srcQueueFamilyIndex and dstQueueFamilyIndex must be VK_QUEUE_FAMILY_IGNORED

  • If image was created with a sharing mode of VK_SHARING_MODE_CONCURRENT, and one of srcQueueFamilyIndex and dstQueueFamilyIndex is VK_QUEUE_FAMILY_IGNORED, the other must be VK_QUEUE_FAMILY_IGNORED or VK_QUEUE_FAMILY_EXTERNAL_KHX

  • If image was created with a sharing mode of VK_SHARING_MODE_EXCLUSIVE and srcQueueFamilyIndex is VK_QUEUE_FAMILY_IGNORED, dstQueueFamilyIndex must also be VK_QUEUE_FAMILY_IGNORED.

  • If image was created with a sharing mode of VK_SHARING_MODE_EXCLUSIVE and srcQueueFamilyIndex is not VK_QUEUE_FAMILY_IGNORED, it must be a valid queue family or VK_QUEUE_FAMILY_EXTERNAL_KHX (see Queue Family Properties).

  • If image was created with a sharing mode of VK_SHARING_MODE_EXCLUSIVE and dstQueueFamilyIndex is not VK_QUEUE_FAMILY_IGNORED, it must be a valid queue family or VK_QUEUE_FAMILY_EXTERNAL_KHX (see Queue Family Properties).

  • If image was created with a sharing mode of VK_SHARING_MODE_EXCLUSIVE, and srcQueueFamilyIndex and dstQueueFamilyIndex are valid queue families, at least one of them must be the same as the family of the queue that will execute this barrier

  • subresourceRange must be a valid image subresource range for the image (see Image Views)

  • If image has a depth/stencil format with both depth and stencil components, then aspectMask member of subresourceRange must include both VK_IMAGE_ASPECT_DEPTH_BIT and VK_IMAGE_ASPECT_STENCIL_BIT

  • If either oldLayout or newLayout is VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL then image must have been created with VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT set

  • If either oldLayout or newLayout is VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL then image must have been created with VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT set

  • If either oldLayout or newLayout is VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL then image must have been created with VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT set

  • If either oldLayout or newLayout is VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL then image must have been created with VK_IMAGE_USAGE_SAMPLED_BIT or VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT set

  • If either oldLayout or newLayout is VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL then image must have been created with VK_IMAGE_USAGE_TRANSFER_SRC_BIT set

  • If either oldLayout or newLayout is VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL then image must have been created with VK_IMAGE_USAGE_TRANSFER_DST_BIT set

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER

  • pNext must be NULL

  • srcAccessMask must be a valid combination of VkAccessFlagBits values

  • dstAccessMask must be a valid combination of VkAccessFlagBits values

  • oldLayout must be a valid VkImageLayout value

  • newLayout must be a valid VkImageLayout value

  • image must be a valid VkImage handle

  • subresourceRange must be a valid VkImageSubresourceRange structure
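
As an informative sketch, the following image memory barrier performs a layout transition of a single-mip, single-layer color image from VK_IMAGE_LAYOUT_UNDEFINED to VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL before its first use as a transfer destination; image and cmdBuf are assumed handles, and the image is assumed to have been created with VK_IMAGE_USAGE_TRANSFER_DST_BIT:

/* An oldLayout of UNDEFINED discards any previous contents, so no source
 * access needs to be made available. */
VkImageMemoryBarrier imageBarrier = {
    .sType               = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
    .pNext               = NULL,
    .srcAccessMask       = 0,
    .dstAccessMask       = VK_ACCESS_TRANSFER_WRITE_BIT,
    .oldLayout           = VK_IMAGE_LAYOUT_UNDEFINED,
    .newLayout           = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
    .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .image               = image,
    .subresourceRange    = {
        .aspectMask     = VK_IMAGE_ASPECT_COLOR_BIT,
        .baseMipLevel   = 0,
        .levelCount     = 1,
        .baseArrayLayer = 0,
        .layerCount     = 1,
    },
};

vkCmdPipelineBarrier(cmdBuf,
    VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,    /* nothing earlier to wait for */
    VK_PIPELINE_STAGE_TRANSFER_BIT,
    0,
    0, NULL, 0, NULL,
    1, &imageBarrier);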

6.7.4. Queue Family Ownership Transfer

Resources created with a VkSharingMode of VK_SHARING_MODE_EXCLUSIVE must have their ownership explicitly transferred from one queue family to another in order to access their content in a well-defined manner on a queue in a different queue family. Resources shared with external APIs or instances using external memory must also explicitly manage ownership transfers between local and external queues (or equivalent constructs in external APIs) regardless of the VkSharingMode specified when creating them. The special queue family index VK_QUEUE_FAMILY_EXTERNAL_KHX represents any queue external to the resource’s current Vulkan instance. If memory dependencies are correctly expressed between uses of such a resource between two queues in different families, but no ownership transfer is defined, the contents of that resource are undefined for any read accesses performed by the second queue family.

Note

If an application does not need the contents of a resource to remain valid when transferring from one queue family to another, then the ownership transfer should be skipped.

A queue family ownership transfer consists of two distinct parts:

  1. Release exclusive ownership from the source queue family

  2. Acquire exclusive ownership for the destination queue family

An application must ensure that these operations occur in the correct order by defining an execution dependency between them, e.g. using a semaphore.

A release operation is used to release exclusive ownership of a range of a buffer or image subresource range. A release operation is defined by executing a buffer memory barrier (for a buffer range) or an image memory barrier (for an image subresource range), on a queue from the source queue family. The srcQueueFamilyIndex parameter of the barrier must be set to the source queue family index, and the dstQueueFamilyIndex parameter to the destination queue family index. dstStageMask is ignored for such a barrier, such that no visibility operation is executed - the value of this mask does not affect the validity of the barrier. The release operation happens-after the availability operation.

An acquire operation is used to acquire exclusive ownership of a range of a buffer or image subresource range. An acquire operation is defined by executing a buffer memory barrier (for a buffer range) or an image memory barrier (for an image subresource range), on a queue from the destination queue family. The srcQueueFamilyIndex parameter of the barrier must be set to the source queue family index, and the dstQueueFamilyIndex parameter to the destination queue family index. srcStageMask is ignored for such a barrier, such that no availability operation is executed - the value of this mask does not affect the validity of the barrier. The acquire operation happens-before the visibility operation.

Note

Whilst it is not invalid to provide destination or source access masks for memory barriers used for release or acquire operations, respectively, they have no practical effect. Access after a release operation has undefined results, and so visibility for those accesses has no practical effect. Similarly, write access before an acquire operation will produce undefined results for future access, so availability of those writes has no practical use. In an earlier version of the specification, these were required to match on both sides - but this was subsequently relaxed. These masks should be set to 0.

If the transfer is via an image memory barrier, and an image layout transition is desired, then the values of oldLayout and newLayout in the release memory barrier must be equal to values of oldLayout and newLayout in the acquire memory barrier. Although the image layout transition is submitted twice, it will only be executed once. A layout transition specified in this way happens-after the release operation and happens-before the acquire operation.

If the values of srcQueueFamilyIndex and dstQueueFamilyIndex are equal, no ownership transfer is performed, and the barrier operates as if they were both set to VK_QUEUE_FAMILY_IGNORED.

Queue family ownership transfers may perform read and write accesses on all memory bound to the image subresource or buffer range, so applications must ensure that all memory writes have been made available before a queue family ownership transfer is executed. Available memory is automatically made visible to queue family release and acquire operations, and writes performed by those operations are automatically made available.

Once a queue family has acquired ownership of a buffer range or image subresource range of an VK_SHARING_MODE_EXCLUSIVE resource, its contents are undefined to other queue families unless ownership is transferred. The contents of any portion of another resource which aliases memory that is bound to the transferred buffer or image subresource range are undefined after a release or acquire operation.
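
As an informative sketch, the following pair of image memory barriers transfers exclusive ownership of a color image from a transfer-only queue family to a graphics queue family, combined with a layout transition; transferFamily, graphicsFamily, image, transferCmdBuf, and graphicsCmdBuf are assumed, and the two queue submissions are assumed to be ordered by a semaphore so that the release happens-before the acquire:

/* Release operation, recorded on a queue from transferFamily. The destination
 * stage and access masks are effectively ignored. */
VkImageMemoryBarrier release = {
    .sType               = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
    .pNext               = NULL,
    .srcAccessMask       = VK_ACCESS_TRANSFER_WRITE_BIT,
    .dstAccessMask       = 0,
    .oldLayout           = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
    .newLayout           = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
    .srcQueueFamilyIndex = transferFamily,
    .dstQueueFamilyIndex = graphicsFamily,
    .image               = image,
    .subresourceRange    = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 },
};
vkCmdPipelineBarrier(transferCmdBuf,
    VK_PIPELINE_STAGE_TRANSFER_BIT, VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT,
    0, 0, NULL, 0, NULL, 1, &release);

/* Acquire operation, recorded on a queue from graphicsFamily. The source
 * stage and access masks are effectively ignored, and the layouts and queue
 * family indices must match the release barrier. */
VkImageMemoryBarrier acquire = release;
acquire.srcAccessMask = 0;
acquire.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
vkCmdPipelineBarrier(graphicsCmdBuf,
    VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT, VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
    0, 0, NULL, 0, NULL, 1, &acquire);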

6.8. Wait Idle Operations

To wait on the host for the completion of outstanding queue operations for a given queue, call:

VkResult vkQueueWaitIdle(
    VkQueue                                     queue);
  • queue is the queue on which to wait.

vkQueueWaitIdle is equivalent to submitting a fence to a queue and waiting with an infinite timeout for that fence to signal.

Valid Usage (Implicit)
  • queue must be a valid VkQueue handle

Return Codes
Success
  • VK_SUCCESS

Failure
  • VK_ERROR_OUT_OF_HOST_MEMORY

  • VK_ERROR_OUT_OF_DEVICE_MEMORY

  • VK_ERROR_DEVICE_LOST

To wait on the host for the completion of outstanding queue operations for all queues on a given logical device, call:

VkResult vkDeviceWaitIdle(
    VkDevice                                    device);
  • device is the logical device to idle.

vkDeviceWaitIdle is equivalent to calling vkQueueWaitIdle for all queues owned by device.

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

Host Synchronization
  • Host access to all VkQueue objects created from device must be externally synchronized

Return Codes
Success
  • VK_SUCCESS

Failure
  • VK_ERROR_OUT_OF_HOST_MEMORY

  • VK_ERROR_OUT_OF_DEVICE_MEMORY

  • VK_ERROR_DEVICE_LOST
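
As an informative sketch, an application might call vkDeviceWaitIdle before destroying objects that could still be in use by pending queue operations; device and commandPool are assumed handles:

/* Wait for all queues on the device to finish outstanding work. */
VkResult result = vkDeviceWaitIdle(device);
if (result != VK_SUCCESS) {
    /* Handle VK_ERROR_DEVICE_LOST or an out-of-memory error here. */
}

/* It is now safe to destroy objects the queues might have been using. */
vkDestroyCommandPool(device, commandPool, NULL);
vkDestroyDevice(device, NULL);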

6.9. Host Write Ordering Guarantees

When batches of command buffers are submitted to a queue via vkQueueSubmit, the submission defines a memory dependency between prior host operations and the execution of the command buffers submitted to the queue.

The first synchronization scope is defined by the host execution model, but includes execution of vkQueueSubmit on the host and anything that happened-before it.

The second synchronization scope includes every command submitted in the same queue submission command, and all future submissions to the same queue.

The first access scope includes all host writes to mappable device memory that are either coherent, or have been flushed with vkFlushMappedMemoryRanges.

The second access scope includes all memory access performed by the device.
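
As an informative sketch, the following sequence writes data from the host into a non-coherent mapped allocation, flushes the range so the writes are included in the first access scope of the next submission, and then submits work that can read them; device, memory, queue, submitInfo, hostData, and dataSize are assumed, and memcpy is the C standard library function:

void* data = NULL;
vkMapMemory(device, memory, 0, dataSize, 0, &data);
memcpy(data, hostData, (size_t)dataSize);

/* Required for non-coherent memory; host-coherent memory would not need it. */
VkMappedMemoryRange range = {
    .sType  = VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE,
    .pNext  = NULL,
    .memory = memory,
    .offset = 0,
    .size   = VK_WHOLE_SIZE,
};
vkFlushMappedMemoryRanges(device, 1, &range);

/* The flushed host writes are included in the first access scope of this
 * submission, so the submitted command buffers can read them. */
vkQueueSubmit(queue, 1, &submitInfo, VK_NULL_HANDLE);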

6.10. Synchronization and Multiple Physical Devices

If a logical device includes more than one physical device, then fences, semaphores, and events all still have a single instance of the signaled state.

A fence becomes signaled when all physical devices complete the necessary queue operations.

Semaphore wait and signal operations all include a device index that is the sole physical device that performs the operation. These indices are provided in the VkDeviceGroupSubmitInfoKHX and VkDeviceGroupBindSparseInfoKHX structures. Semaphores are not exclusively owned by any physical device. For example, a semaphore can be signaled by one physical device and then waited on by a different physical device.

An event can only be waited on by the same physical device that signaled it (or the host).

7. Render Pass

A render pass represents a collection of attachments, subpasses, and dependencies between the subpasses, and describes how the attachments are used over the course of the subpasses. The use of a render pass in a command buffer is a render pass instance.

Render passes are represented by VkRenderPass handles:

VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkRenderPass)

An attachment description describes the properties of an attachment including its format, sample count, and how its contents are treated at the beginning and end of each render pass instance.

A subpass represents a phase of rendering that reads and writes a subset of the attachments in a render pass. Rendering commands are recorded into a particular subpass of a render pass instance.

A subpass description describes the subset of attachments that is involved in the execution of a subpass. Each subpass can read from some attachments as input attachments, write to some as color attachments or depth/stencil attachments, and perform multisample resolve operations to resolve attachments. A subpass description can also include a set of preserve attachments, which are attachments that are not read or written by the subpass but whose contents must be preserved throughout the subpass.

A subpass uses an attachment if the attachment is a color, depth/stencil, resolve, or input attachment for that subpass (as determined by the pColorAttachments, pDepthStencilAttachment, pResolveAttachments, and pInputAttachments members of VkSubpassDescription, respectively). A subpass does not use an attachment if that attachment is preserved by the subpass. The first use of an attachment is in the lowest numbered subpass that uses that attachment. Similarly, the last use of an attachment is in the highest numbered subpass that uses that attachment.

The subpasses in a render pass all render to the same dimensions, and fragments for pixel (x,y,layer) in one subpass can only read attachment contents written by previous subpasses at that same (x,y,layer) location.

Note

By describing a complete set of subpasses in advance, render passes provide the implementation an opportunity to optimize the storage and transfer of attachment data between subpasses.

In practice, this means that subpasses with a simple framebuffer-space dependency may be merged into a single tiled rendering pass, keeping the attachment data on-chip for the duration of a render pass instance. However, it is also quite common for a render pass to only contain a single subpass.

Subpass dependencies describe execution and memory dependencies between subpasses.

A subpass dependency chain is a sequence of subpass dependencies in a render pass, where the source subpass of each subpass dependency (after the first) equals the destination subpass of the previous dependency.

Execution of subpasses may overlap or execute out of order with regard to other subpasses, unless otherwise enforced by an execution dependency. Each subpass only respects submission order for commands recorded in the same subpass, and for the vkCmdBeginRenderPass and vkCmdEndRenderPass commands that delimit the render pass - commands within other subpasses are not included. This affects most other implicit ordering guarantees.

A render pass describes the structure of subpasses and attachments independent of any specific image views for the attachments. The specific image views that will be used for the attachments, and their dimensions, are specified in VkFramebuffer objects. Framebuffers are created with respect to a specific render pass that the framebuffer is compatible with (see Render Pass Compatibility). Collectively, a render pass and a framebuffer define the complete render target state for one or more subpasses as well as the algorithmic dependencies between the subpasses.

The various pipeline stages of the drawing commands for a given subpass may execute concurrently and/or out of order, both within and across drawing commands, whilst still respecting pipeline order. However for a given (x,y,layer,sample) sample location, certain per-sample operations are performed in rasterization order.

7.1. Render Pass Creation

To create a render pass, call:

VkResult vkCreateRenderPass(
    VkDevice                                    device,
    const VkRenderPassCreateInfo*               pCreateInfo,
    const VkAllocationCallbacks*                pAllocator,
    VkRenderPass*                               pRenderPass);
  • device is the logical device that creates the render pass.

  • pCreateInfo is a pointer to an instance of the VkRenderPassCreateInfo structure that describes the parameters of the render pass.

  • pAllocator controls host memory allocation as described in the Memory Allocation chapter.

  • pRenderPass points to a VkRenderPass handle in which the resulting render pass object is returned.

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • pCreateInfo must be a pointer to a valid VkRenderPassCreateInfo structure

  • If pAllocator is not NULL, pAllocator must be a pointer to a valid VkAllocationCallbacks structure

  • pRenderPass must be a pointer to a VkRenderPass handle

Return Codes
Success
  • VK_SUCCESS

Failure
  • VK_ERROR_OUT_OF_HOST_MEMORY

  • VK_ERROR_OUT_OF_DEVICE_MEMORY

The VkRenderPassCreateInfo structure is defined as:

typedef struct VkRenderPassCreateInfo {
    VkStructureType                   sType;
    const void*                       pNext;
    VkRenderPassCreateFlags           flags;
    uint32_t                          attachmentCount;
    const VkAttachmentDescription*    pAttachments;
    uint32_t                          subpassCount;
    const VkSubpassDescription*       pSubpasses;
    uint32_t                          dependencyCount;
    const VkSubpassDependency*        pDependencies;
} VkRenderPassCreateInfo;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • flags is reserved for future use.

  • attachmentCount is the number of attachments used by this render pass, or zero indicating no attachments. Attachments are referred to by zero-based indices in the range [0,attachmentCount).

  • pAttachments points to an array of attachmentCount number of VkAttachmentDescription structures describing properties of the attachments, or NULL if attachmentCount is zero.

  • subpassCount is the number of subpasses to create for this render pass. Subpasses are referred to by zero-based indices in the range [0,subpassCount). A render pass must have at least one subpass.

  • pSubpasses points to an array of subpassCount number of VkSubpassDescription structures describing properties of the subpasses.

  • dependencyCount is the number of dependencies between pairs of subpasses, or zero indicating no dependencies.

  • pDependencies points to an array of dependencyCount number of VkSubpassDependency structures describing dependencies between pairs of subpasses, or NULL if dependencyCount is zero.

Valid Usage
  • If any two subpasses operate on attachments with overlapping ranges of the same VkDeviceMemory object, and at least one subpass writes to that area of VkDeviceMemory, a subpass dependency must be included (either directly or via some intermediate subpasses) between them

  • If the attachment member of any element of pInputAttachments, pColorAttachments, pResolveAttachments or pDepthStencilAttachment, or the attachment indexed by any element of pPreserveAttachments in any given element of pSubpasses is bound to a range of a VkDeviceMemory object that overlaps with any other attachment in any subpass (including the same subpass), the VkAttachmentDescription structures describing them must include VK_ATTACHMENT_DESCRIPTION_MAY_ALIAS_BIT in flags

  • If the attachment member of any element of pInputAttachments, pColorAttachments, pResolveAttachments or pDepthStencilAttachment, or any element of pPreserveAttachments in any given element of pSubpasses is not VK_ATTACHMENT_UNUSED, it must be less than attachmentCount

  • The value of any element of the pPreserveAttachments member in any given element of pSubpasses must not be VK_ATTACHMENT_UNUSED

  • For any member of pAttachments with a loadOp equal to VK_ATTACHMENT_LOAD_OP_CLEAR, the first use of that attachment must not specify a layout equal to VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL or VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL.

  • For any element of pDependencies, if the srcSubpass is not VK_SUBPASS_EXTERNAL, all stage flags included in the srcStageMask member of that dependency must be a pipeline stage supported by the pipeline identified by the pipelineBindPoint member of the source subpass.

  • For any element of pDependencies, if the dstSubpass is not VK_SUBPASS_EXTERNAL, all stage flags included in the dstStageMask member of that dependency must be a pipeline stage supported by the pipeline identified by the pipelineBindPoint member of the destination subpass.

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO

  • pNext must be NULL

  • flags must be 0

  • If attachmentCount is not 0, pAttachments must be a pointer to an array of attachmentCount valid VkAttachmentDescription structures

  • pSubpasses must be a pointer to an array of subpassCount valid VkSubpassDescription structures

  • If dependencyCount is not 0, pDependencies must be a pointer to an array of dependencyCount valid VkSubpassDependency structures

  • subpassCount must be greater than 0
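
As an informative sketch, the following creates a minimal render pass with one color attachment and a single graphics subpass; device is an assumed VkDevice, the attachment format is an assumption, and the VK_IMAGE_LAYOUT_PRESENT_SRC_KHR final layout assumes the VK_KHR_swapchain extension is in use (VkAttachmentDescription and VkSubpassDescription are described below):

/* The attachment is cleared when the render pass instance begins and its
 * contents are written to memory when the instance ends. */
VkAttachmentDescription colorAttachment = {
    .flags          = 0,
    .format         = VK_FORMAT_B8G8R8A8_UNORM,
    .samples        = VK_SAMPLE_COUNT_1_BIT,
    .loadOp         = VK_ATTACHMENT_LOAD_OP_CLEAR,
    .storeOp        = VK_ATTACHMENT_STORE_OP_STORE,
    .stencilLoadOp  = VK_ATTACHMENT_LOAD_OP_DONT_CARE,
    .stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE,
    .initialLayout  = VK_IMAGE_LAYOUT_UNDEFINED,
    .finalLayout    = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR,
};

VkAttachmentReference colorReference = {
    .attachment = 0,    /* index into VkRenderPassCreateInfo::pAttachments */
    .layout     = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
};

VkSubpassDescription subpass = {
    .flags                   = 0,
    .pipelineBindPoint       = VK_PIPELINE_BIND_POINT_GRAPHICS,
    .inputAttachmentCount    = 0,
    .pInputAttachments       = NULL,
    .colorAttachmentCount    = 1,
    .pColorAttachments       = &colorReference,
    .pResolveAttachments     = NULL,
    .pDepthStencilAttachment = NULL,
    .preserveAttachmentCount = 0,
    .pPreserveAttachments    = NULL,
};

VkRenderPassCreateInfo renderPassInfo = {
    .sType           = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO,
    .pNext           = NULL,
    .flags           = 0,
    .attachmentCount = 1,
    .pAttachments    = &colorAttachment,
    .subpassCount    = 1,
    .pSubpasses      = &subpass,
    .dependencyCount = 0,
    .pDependencies   = NULL,
};

VkRenderPass renderPass = VK_NULL_HANDLE;
VkResult result = vkCreateRenderPass(device, &renderPassInfo, NULL, &renderPass);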

If the VkRenderPassCreateInfo::pNext list includes a VkRenderPassMultiviewCreateInfoKHX structure, then that structure includes an array of view masks, view offsets, and correlation masks for the render pass.

The VkRenderPassMultiviewCreateInfoKHX structure is defined as:

typedef struct VkRenderPassMultiviewCreateInfoKHX {
    VkStructureType    sType;
    const void*        pNext;
    uint32_t           subpassCount;
    const uint32_t*    pViewMasks;
    uint32_t           dependencyCount;
    const int32_t*     pViewOffsets;
    uint32_t           correlationMaskCount;
    const uint32_t*    pCorrelationMasks;
} VkRenderPassMultiviewCreateInfoKHX;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • subpassCount is zero or is the number of subpasses in the render pass.

  • pViewMasks points to an array of subpassCount number of view masks, where each mask is a bitfield of view indices describing which views rendering is broadcast to in each subpass, when multiview is enabled. If subpassCount is zero, each view mask is treated as zero.

  • dependencyCount is zero or the number of dependencies in the render pass.

  • pViewOffsets points to an array of dependencyCount view offsets, one for each dependency. If dependencyCount is zero, each dependency’s view offset is treated as zero. Each view offset controls which views in the source subpass the views in the destination subpass depend on.

  • correlationMaskCount is zero or a number of correlation masks.

  • pCorrelationMasks is an array of view masks indicating sets of views that may be more efficient to render concurrently.

When a subpass uses a non-zero view mask, multiview functionality is considered to be enabled. Multiview is all-or-nothing for a render pass - that is, either all subpasses must have a non-zero view mask (though some subpasses may have only one view) or all must be zero. Multiview causes all drawing and clear commands in the subpass to behave as if they were broadcast to each view, where a view is represented by one layer of the framebuffer attachments. All draws and clears are broadcast to each view index whose bit is set in the view mask. The view index is provided in the ViewIndex shader input variable, and color, depth/stencil, and input attachments all read/write the layer of the framebuffer corresponding to the view index.

If the view mask is zero for all subpasses, multiview is considered to be disabled and all drawing commands execute normally, without this additional broadcasting.

Some implementations may not support multiview in conjunction with geometry shaders or tessellation shaders.

When multiview is enabled, the VK_DEPENDENCY_VIEW_LOCAL_BIT_KHX bit in a dependency can be used to express a view-local dependency, meaning that each view in the destination subpass depends on a single view in the source subpass. Unlike pipeline barriers, a subpass dependency can potentially have a different view mask in the source subpass and the destination subpass. If the dependency is view-local, then each view (dstView) in the destination subpass depends on the view dstView + pViewOffsets[dependency] in the source subpass. If there is no such view in the source subpass, then this dependency does not affect that view in the destination subpass. If the dependency is not view-local, then all views in the destination subpass depend on all views in the source subpass, and the view offset is ignored. A non-zero view offset is not allowed in a self-dependency.

The elements of pCorrelationMasks are a set of masks of views indicating that views in the same mask may exhibit spatial coherency between the views, making it more efficient to render them concurrently. Correlation masks must not have a functional effect on the results of the multiview rendering.

When multiview is enabled, at the beginning of each subpass all non-render pass state is undefined. In particular, each time vkCmdBeginRenderPass or vkCmdNextSubpass is called the graphics pipeline must be bound, any relevant descriptor sets or vertex/index buffers must be bound, and any relevant dynamic state or push constants must be set before they are used.

A multiview subpass can declare that its shaders will write per-view attributes for all views in a single invocation, by setting the VK_SUBPASS_DESCRIPTION_PER_VIEW_ATTRIBUTES_BIT_NVX bit in the subpass description. The only supported per-view attributes are position and viewport mask, and per-view position and viewport masks are written to output array variables decorated with PositionPerViewNV and ViewportMaskPerViewNV, respectively. If VK_NV_viewport_array2 is not supported and enabled, ViewportMaskPerViewNV must not be used. Values written to elements of PositionPerViewNV and ViewportMaskPerViewNV must not depend on the ViewIndex. The shader must also write to an output variable decorated with Position, and the value written to Position must equal the value written to PositionPerViewNV[ViewIndex]. Similarly, if ViewportMaskPerViewNV is written to then the shader must also write to an output variable decorated with ViewportMaskNV, and the value written to ViewportMaskNV must equal the value written to ViewportMaskPerViewNV[ViewIndex]. Implementations will either use values taken from Position and ViewportMaskNV and invoke the shader once for each view, or will use values taken from PositionPerViewNV and ViewportMaskPerViewNV and invoke the shader fewer times. The values written to Position and ViewportMaskNV must not depend on the values written to PositionPerViewNV and ViewportMaskPerViewNV, or vice versa (to allow compilers to eliminate the unused outputs). All attributes that do not have *PerViewNV counterparts must not depend on ViewIndex.

Per-view attributes are all-or-nothing for a subpass. That is, all pipelines compiled against a subpass that includes the VK_SUBPASS_DESCRIPTION_PER_VIEW_ATTRIBUTES_BIT_NVX bit must write per-view attributes to the *PerViewNV[] shader outputs, in addition to the non-per-view (e.g. Position) outputs. Pipelines compiled against a subpass that does not include this bit must not include the *PerViewNV[] outputs in their interfaces.

Valid Usage
  • If subpassCount is not zero, subpassCount must be equal to the subpassCount in the VkRenderPassCreateInfo structure at the start of the chain

  • If dependencyCount is not zero, dependencyCount must be equal to the dependencyCount in the VkRenderPassCreateInfo structure at the start of the chain

  • Each view index must not be set in more than one element of pCorrelationMasks

  • If an element of pViewOffsets is non-zero, the corresponding VkSubpassDependency structure must have different values of srcSubpass and dstSubpass.

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_RENDER_PASS_MULTIVIEW_CREATE_INFO_KHX

  • pNext must be NULL

  • If subpassCount is not 0, pViewMasks must be a pointer to an array of subpassCount uint32_t values

  • If dependencyCount is not 0, pViewOffsets must be a pointer to an array of dependencyCount int32_t values

  • If correlationMaskCount is not 0, pCorrelationMasks must be a pointer to an array of correlationMaskCount uint32_t values
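
As an informative sketch, the following chains a VkRenderPassMultiviewCreateInfoKHX structure into render pass creation, broadcasting each of two subpasses to views 0 and 1 and declaring those two views spatially correlated; this assumes the multiview extension is enabled and that renderPassInfo is a VkRenderPassCreateInfo with subpassCount equal to 2 and dependencyCount equal to 1:

uint32_t viewMasks[2]        = { 0x3, 0x3 };   /* both subpasses render views 0 and 1 */
int32_t  viewOffsets[1]      = { 0 };          /* one view offset per dependency */
uint32_t correlationMasks[1] = { 0x3 };        /* views 0 and 1 are correlated */

VkRenderPassMultiviewCreateInfoKHX multiviewInfo = {
    .sType                = VK_STRUCTURE_TYPE_RENDER_PASS_MULTIVIEW_CREATE_INFO_KHX,
    .pNext                = NULL,
    .subpassCount         = 2,
    .pViewMasks           = viewMasks,
    .dependencyCount      = 1,
    .pViewOffsets         = viewOffsets,
    .correlationMaskCount = 1,
    .pCorrelationMasks    = correlationMasks,
};

/* subpassCount and dependencyCount above must match the counts in the
 * VkRenderPassCreateInfo at the start of the pNext chain. */
renderPassInfo.pNext = &multiviewInfo;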

The VkAttachmentDescription structure is defined as:

typedef struct VkAttachmentDescription {
    VkAttachmentDescriptionFlags    flags;
    VkFormat                        format;
    VkSampleCountFlagBits           samples;
    VkAttachmentLoadOp              loadOp;
    VkAttachmentStoreOp             storeOp;
    VkAttachmentLoadOp              stencilLoadOp;
    VkAttachmentStoreOp             stencilStoreOp;
    VkImageLayout                   initialLayout;
    VkImageLayout                   finalLayout;
} VkAttachmentDescription;
  • flags is a bitmask describing additional properties of the attachment. Bits which can be set include:

    typedef enum VkAttachmentDescriptionFlagBits {
        VK_ATTACHMENT_DESCRIPTION_MAY_ALIAS_BIT = 0x00000001,
    } VkAttachmentDescriptionFlagBits;
  • format is a VkFormat value specifying the format of the image that will be used for the attachment.

  • samples is the number of samples of the image as defined in VkSampleCountFlagBits.

  • loadOp specifies how the contents of color and depth components of the attachment are treated at the beginning of the subpass where it is first used:

    typedef enum VkAttachmentLoadOp {
        VK_ATTACHMENT_LOAD_OP_LOAD = 0,
        VK_ATTACHMENT_LOAD_OP_CLEAR = 1,
        VK_ATTACHMENT_LOAD_OP_DONT_CARE = 2,
    } VkAttachmentLoadOp;
    • VK_ATTACHMENT_LOAD_OP_LOAD means the previous contents of the image within the render area will be preserved. For attachments with a depth/stencil format, this uses the access type VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT. For attachments with a color format, this uses the access type VK_ACCESS_COLOR_ATTACHMENT_READ_BIT.

    • VK_ATTACHMENT_LOAD_OP_CLEAR means the contents within the render area will be cleared to a uniform value, which is specified when a render pass instance is begun. For attachments with a depth/stencil format, this uses the access type VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT. For attachments with a color format, this uses the access type VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT.

    • VK_ATTACHMENT_LOAD_OP_DONT_CARE means the previous contents within the area need not be preserved; the contents of the attachment will be undefined inside the render area. For attachments with a depth/stencil format, this uses the access type VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT. For attachments with a color format, this uses the access type VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT.

  • storeOp specifies how the contents of color and depth components of the attachment are treated at the end of the subpass where it is last used:

    typedef enum VkAttachmentStoreOp {
        VK_ATTACHMENT_STORE_OP_STORE = 0,
        VK_ATTACHMENT_STORE_OP_DONT_CARE = 1,
    } VkAttachmentStoreOp;
    • VK_ATTACHMENT_STORE_OP_STORE means the contents generated during the render pass and within the render area are written to memory. For attachments with a depth/stencil format, this uses the access type VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT. For attachments with a color format, this uses the access type VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT.

    • VK_ATTACHMENT_STORE_OP_DONT_CARE means the contents within the render area are not needed after rendering, and may be discarded; the contents of the attachment will be undefined inside the render area. For attachments with a depth/stencil format, this uses the access type VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT. For attachments with a color format, this uses the access type VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT.

  • stencilLoadOp specifies how the contents of stencil components of the attachment are treated at the beginning of the subpass where it is first used, and must be one of the same values allowed for loadOp above.

  • stencilStoreOp specifies how the contents of stencil components of the attachment are treated at the end of the last subpass where it is used, and must be one of the same values allowed for storeOp above.

  • initialLayout is the layout the attachment image subresource will be in when a render pass instance begins.

  • finalLayout is the layout the attachment image subresource will be transitioned to when a render pass instance ends. During a render pass instance, an attachment can use a different layout in each subpass, if desired.

If the attachment uses a color format, then loadOp and storeOp are used, and stencilLoadOp and stencilStoreOp are ignored. If the format has depth and/or stencil components, loadOp and storeOp apply only to the depth data, while stencilLoadOp and stencilStoreOp define how the stencil data is handled. loadOp and stencilLoadOp define the load operations that execute as part of the first subpass that uses the attachment. storeOp and stencilStoreOp define the store operations that execute as part of the last subpass that uses the attachment.

The load operation for each value in an attachment used by a subpass happens-before any command recorded into that subpass reads from that value. Load operations for attachments with a depth/stencil format execute in the VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT pipeline stage. Load operations for attachments with a color format execute in the VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT pipeline stage.

Store operations for each value in an attachment used by a subpass happen-after any command recorded into that subpass writes to that value. Store operations for attachments with a depth/stencil format execute in the VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT pipeline stage. Store operations for attachments with a color format execute in the VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT pipeline stage.

If an attachment is not used by any subpass, then loadOp, storeOp, stencilStoreOp, and stencilLoadOp are ignored, and the attachment’s memory contents will not be modified by execution of a render pass instance.

The load and store operations apply on the first and last use of each view in the render pass, respectively. If a view index of an attachment is not included in the view mask in any subpass that uses it, then the load and store operations are ignored, and the attachment’s memory contents will not be modified by execution of a render pass instance.

During a render pass instance, input/color attachments with color formats that have a component size of 8, 16, or 32 bits must be represented in the attachment’s format throughout the instance. Attachments with other floating- or fixed-point color formats, or with depth components, may be represented in a format with a precision higher than the attachment format, but must be represented with the same range. When such a component is loaded via the loadOp, it will be converted into an implementation-dependent format used by the render pass. Such components must be converted from the render pass format, to the format of the attachment, before they are resolved or stored at the end of a render pass instance via storeOp. Conversions occur as described in Numeric Representation and Computation and Fixed-Point Data Conversions.

If flags includes VK_ATTACHMENT_DESCRIPTION_MAY_ALIAS_BIT, then the attachment is treated as if it shares physical memory with another attachment in the same render pass. This information limits the ability of the implementation to reorder certain operations (like layout transitions and the loadOp) such that it is not improperly reordered against other uses of the same physical memory via a different attachment. This is described in more detail below.

Valid Usage
  • finalLayout must not be VK_IMAGE_LAYOUT_UNDEFINED or VK_IMAGE_LAYOUT_PREINITIALIZED

Valid Usage (Implicit)

If a render pass uses multiple attachments that alias the same device memory, those attachments must each include the VK_ATTACHMENT_DESCRIPTION_MAY_ALIAS_BIT bit in their attachment description flags. Attachments aliasing the same memory occurs in multiple ways:

  • Multiple attachments being assigned the same image view as part of framebuffer creation.

  • Attachments using distinct image views that correspond to the same image subresource of an image.

  • Attachments using views of distinct image subresources which are bound to overlapping memory ranges.

Note

Render passes must include subpass dependencies (either directly or via a subpass dependency chain) between any two subpasses that operate on the same attachment or aliasing attachments and those subpass dependencies must include execution and memory dependencies separating uses of the aliases, if at least one of those subpasses writes to one of the aliases. These dependencies must not include the VK_DEPENDENCY_BY_REGION_BIT if the aliases are views of distinct image subresources which overlap in memory.

Multiple attachments that alias the same memory must not be used in a single subpass. A given attachment index must not be used multiple times in a single subpass, with one exception: two subpass attachments can use the same attachment index if at least one use is as an input attachment and neither use is as a resolve or preserve attachment. In other words, the same view can be used simultaneously as an input and color or depth/stencil attachment, but must not be used as multiple color or depth/stencil attachments nor as resolve or preserve attachments. The precise set of valid scenarios is described in more detail below.

If a set of attachments alias each other, then all except the first to be used in the render pass must use an initialLayout of VK_IMAGE_LAYOUT_UNDEFINED, since the earlier uses of the other aliases make their contents undefined. Once an alias has been used and a different alias has been used after it, the first alias must not be used in any later subpasses. However, an application can assign the same image view to multiple aliasing attachment indices, which allows that image view to be used multiple times even if other aliases are used in between.

Note

Once an attachment needs the VK_ATTACHMENT_DESCRIPTION_MAY_ALIAS_BIT bit, there should be no additional cost of introducing additional aliases, and using these additional aliases may allow more efficient clearing of the attachments on multiple uses via VK_ATTACHMENT_LOAD_OP_CLEAR.

The VkSubpassDescription structure is defined as:

typedef struct VkSubpassDescription {
    VkSubpassDescriptionFlags       flags;
    VkPipelineBindPoint             pipelineBindPoint;
    uint32_t                        inputAttachmentCount;
    const VkAttachmentReference*    pInputAttachments;
    uint32_t                        colorAttachmentCount;
    const VkAttachmentReference*    pColorAttachments;
    const VkAttachmentReference*    pResolveAttachments;
    const VkAttachmentReference*    pDepthStencilAttachment;
    uint32_t                        preserveAttachmentCount;
    const uint32_t*                 pPreserveAttachments;
} VkSubpassDescription;
  • flags is a bitmask indicating usage of the subpass. Bits which can be set include:

    typedef enum VkSubpassDescriptionFlagBits {
        VK_SUBPASS_DESCRIPTION_PER_VIEW_ATTRIBUTES_BIT_NVX = 0x00000001,
        VK_SUBPASS_DESCRIPTION_PER_VIEW_POSITION_X_ONLY_BIT_NVX = 0x00000002,
    } VkSubpassDescriptionFlagBits;
    • VK_SUBPASS_DESCRIPTION_PER_VIEW_ATTRIBUTES_BIT_NVX indicates that shaders compiled for this subpass write the attributes for all views in a single invocation of each vertex processing stage. All pipelines compiled against a subpass that includes this bit must write per-view attributes to the *PerViewNV[] shader outputs, in addition to the non-per-view (e.g. Position) outputs.

    • VK_SUBPASS_DESCRIPTION_PER_VIEW_POSITION_X_ONLY_BIT_NVX indicates that shaders compiled for this subpass use per-view positions which only differ in value in the x component. Per-view viewport mask can also be used.

  • pipelineBindPoint is a VkPipelineBindPoint value specifying whether this is a compute or graphics subpass. Currently, only graphics subpasses are supported.

  • inputAttachmentCount is the number of input attachments.

  • pInputAttachments is an array of VkAttachmentReference structures (defined below) that lists which of the render pass’s attachments can be read in the shader during the subpass, and what layout each attachment will be in during the subpass. Each element of the array corresponds to an input attachment unit number in the shader, i.e. if the shader declares an input variable layout(input_attachment_index=X, set=Y, binding=Z) then it uses the attachment provided in pInputAttachments[X]. Input attachments must also be bound to the pipeline with a descriptor set, with the input attachment descriptor written in the location (set=Y, binding=Z).

  • colorAttachmentCount is the number of color attachments.

  • pColorAttachments is an array of colorAttachmentCount VkAttachmentReference structures that lists which of the render pass’s attachments will be used as color attachments in the subpass, and what layout each attachment will be in during the subpass. Each element of the array corresponds to a fragment shader output location, i.e. if the shader declared an output variable layout(location=X) then it uses the attachment provided in pColorAttachments[X].

  • pResolveAttachments is NULL or an array of colorAttachmentCount VkAttachmentReference structures that lists which of the render pass’s attachments are resolved to at the end of the subpass, and what layout each attachment will be in during the multisample resolve operation. If pResolveAttachments is not NULL, each of its elements corresponds to a color attachment (the element in pColorAttachments at the same index), and a multisample resolve operation is defined for each attachment. At the end of each subpass, multisample resolve operations read the subpass’s color attachments, and resolve the samples for each pixel to the same pixel location in the corresponding resolve attachments, unless the resolve attachment index is VK_ATTACHMENT_UNUSED. If the first use of an attachment in a render pass is as a resolve attachment, then the loadOp is effectively ignored as the resolve is guaranteed to overwrite all pixels in the render area.

  • pDepthStencilAttachment is a pointer to a VkAttachmentReference specifying which attachment will be used for depth/stencil data and the layout it will be in during the subpass. Setting the attachment index to VK_ATTACHMENT_UNUSED or leaving this pointer as NULL indicates that no depth/stencil attachment will be used in the subpass.

  • preserveAttachmentCount is the number of preserved attachments.

  • pPreserveAttachments is an array of preserveAttachmentCount render pass attachment indices describing the attachments that are not used by a subpass, but whose contents must be preserved throughout the subpass.
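
Note

As an informative illustration of the members described above, the following sketch describes a graphics subpass that writes render pass attachment 0 as fragment shader output location 0 and reads attachment 1 as input attachment index 0. The attachment indices and layouts are assumptions for the example only.

#include <vulkan/vulkan.h>

/* Illustrative only: a graphics subpass that writes attachment 0 as its
 * color output (fragment shader location 0) and reads attachment 1 as an
 * input attachment (input_attachment_index = 0 in the shader). */
static const VkAttachmentReference colorRef = {
    .attachment = 0,
    .layout     = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
};

static const VkAttachmentReference inputRef = {
    .attachment = 1,
    .layout     = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
};

static const VkSubpassDescription subpass = {
    .flags                   = 0,
    .pipelineBindPoint       = VK_PIPELINE_BIND_POINT_GRAPHICS,
    .inputAttachmentCount    = 1,
    .pInputAttachments       = &inputRef,
    .colorAttachmentCount    = 1,
    .pColorAttachments       = &colorRef,
    .pResolveAttachments     = NULL,   /* no multisample resolve */
    .pDepthStencilAttachment = NULL,   /* no depth/stencil attachment */
    .preserveAttachmentCount = 0,
    .pPreserveAttachments    = NULL,
};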

The contents of an attachment within the render area become undefined at the start of a subpass S if all of the following conditions are true:

  • The attachment is used as a color, depth/stencil, or resolve attachment in any subpass in the render pass.

  • There is a subpass S1 that uses or preserves the attachment, and a subpass dependency from S1 to S.

  • The attachment is not used or preserved in subpass S.

Once the contents of an attachment become undefined in subpass S, they remain undefined for subpasses in subpass dependency chains starting with subpass S until they are written again. However, they remain valid for subpasses in other subpass dependency chains starting with subpass S1 if those subpasses use or preserve the attachment.

Valid Usage
  • pipelineBindPoint must be VK_PIPELINE_BIND_POINT_GRAPHICS

  • colorAttachmentCount must be less than or equal to VkPhysicalDeviceLimits::maxColorAttachments

  • If the first use of an attachment in this render pass is as an input attachment, and the attachment is not also used as a color or depth/stencil attachment in the same subpass, then loadOp must not be VK_ATTACHMENT_LOAD_OP_CLEAR

  • If pResolveAttachments is not NULL, for each resolve attachment that does not have the value VK_ATTACHMENT_UNUSED, the corresponding color attachment must not have the value VK_ATTACHMENT_UNUSED

  • If pResolveAttachments is not NULL, the sample count of each element of pColorAttachments must not be VK_SAMPLE_COUNT_1_BIT

  • Any given element of pResolveAttachments must have a sample count of VK_SAMPLE_COUNT_1_BIT

  • Any given element of pResolveAttachments must have the same VkFormat as its corresponding color attachment

  • All attachments in pColorAttachments and pDepthStencilAttachment that are not VK_ATTACHMENT_UNUSED must have the same sample count

  • If any input attachments are VK_ATTACHMENT_UNUSED, then any pipelines bound during the subpass must not access those input attachments from the fragment shader

  • The attachment member of any element of pPreserveAttachments must not be VK_ATTACHMENT_UNUSED

  • Any given element of pPreserveAttachments must not also be an element of any other member of the subpass description

  • If any attachment is used as both an input attachment and a color or depth/stencil attachment, then each use must use the same layout

  • If flags includes VK_SUBPASS_DESCRIPTION_PER_VIEW_POSITION_X_ONLY_BIT_NVX, it must also include VK_SUBPASS_DESCRIPTION_PER_VIEW_ATTRIBUTES_BIT_NVX.

Valid Usage (Implicit)
  • flags must be a valid combination of VkSubpassDescriptionFlagBits values

  • pipelineBindPoint must be a valid VkPipelineBindPoint value

  • If inputAttachmentCount is not 0, pInputAttachments must be a pointer to an array of inputAttachmentCount valid VkAttachmentReference structures

  • If colorAttachmentCount is not 0, pColorAttachments must be a pointer to an array of colorAttachmentCount valid VkAttachmentReference structures

  • If colorAttachmentCount is not 0, and pResolveAttachments is not NULL, pResolveAttachments must be a pointer to an array of colorAttachmentCount valid VkAttachmentReference structures

  • If pDepthStencilAttachment is not NULL, pDepthStencilAttachment must be a pointer to a valid VkAttachmentReference structure

  • If preserveAttachmentCount is not 0, pPreserveAttachments must be a pointer to an array of preserveAttachmentCount uint32_t values

The VkAttachmentReference structure is defined as:

typedef struct VkAttachmentReference {
    uint32_t         attachment;
    VkImageLayout    layout;
} VkAttachmentReference;
  • attachment is the index of the attachment of the render pass, and corresponds to the index of the corresponding element in the pAttachments array of the VkRenderPassCreateInfo structure. If any color or depth/stencil attachments are VK_ATTACHMENT_UNUSED, then no writes occur for those attachments.

  • layout is a VkImageLayout value specifying the layout the attachment uses during the subpass.

Valid Usage
  • layout must not be VK_IMAGE_LAYOUT_UNDEFINED or VK_IMAGE_LAYOUT_PREINITIALIZED

Valid Usage (Implicit)

The VkSubpassDependency structure is defined as:

typedef struct VkSubpassDependency {
    uint32_t                srcSubpass;
    uint32_t                dstSubpass;
    VkPipelineStageFlags    srcStageMask;
    VkPipelineStageFlags    dstStageMask;
    VkAccessFlags           srcAccessMask;
    VkAccessFlags           dstAccessMask;
    VkDependencyFlags       dependencyFlags;
} VkSubpassDependency;

If srcSubpass is equal to dstSubpass then the VkSubpassDependency describes a subpass self-dependency, and only constrains the pipeline barriers allowed within a subpass instance. Otherwise, when a render pass instance which includes a subpass dependency is submitted to a queue, it defines a memory dependency between the subpasses identified by srcSubpass and dstSubpass.

If srcSubpass is equal to VK_SUBPASS_EXTERNAL, the first synchronization scope includes commands submitted to the queue before the render pass instance began. Otherwise, the first set of commands includes all commands submitted as part of the subpass instance identified by srcSubpass and any load, store or multisample resolve operations on attachments used in srcSubpass. In either case, the first synchronization scope is limited to operations on the pipeline stages determined by the source stage mask specified by srcStageMask.

If dstSubpass is equal to VK_SUBPASS_EXTERNAL, the second synchronization scope includes commands submitted after the render pass instance is ended. Otherwise, the second set of commands includes all commands submitted as part of the subpass instance identified by dstSubpass and any load, store or multisample resolve operations on attachments used in dstSubpass. In either case, the second synchronization scope is limited to operations on the pipeline stages determined by the destination stage mask specified by dstStageMask.

The first access scope is limited to access in the pipeline stages determined by the source stage mask specified by srcStageMask. It is also limited to access types in the source access mask specified by srcAccessMask.

The second access scope is limited to access in the pipeline stages determined by the destination stage mask specified by dstStageMask. It is also limited to access types in the destination access mask specified by dstAccessMask.

The availability and visibility operations defined by a subpass dependency affect the execution of image layout transitions within the render pass.
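
Note

The following informative sketch shows a typical dependency of this kind, assuming a render pass in which subpass 0 writes a color attachment that subpass 1 then reads as an input attachment; the specific stage, access, and dependency flags chosen here are an example, not a requirement.

#include <vulkan/vulkan.h>

/* Illustrative only: make color attachment writes performed in subpass 0
 * available and visible to fragment-shader input attachment reads in
 * subpass 1.  BY_REGION is typically sufficient because the reads occur at
 * the same framebuffer location as the writes. */
static const VkSubpassDependency colorToInputRead = {
    .srcSubpass      = 0,
    .dstSubpass      = 1,
    .srcStageMask    = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
    .dstStageMask    = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
    .srcAccessMask   = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
    .dstAccessMask   = VK_ACCESS_INPUT_ATTACHMENT_READ_BIT,
    .dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT,
};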

Valid Usage
  • If srcSubpass is not VK_SUBPASS_EXTERNAL, srcStageMask must not include VK_PIPELINE_STAGE_HOST_BIT

  • If dstSubpass is not VK_SUBPASS_EXTERNAL, dstStageMask must not include VK_PIPELINE_STAGE_HOST_BIT

  • If the geometry shaders feature is not enabled, srcStageMask must not contain VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT

  • If the geometry shaders feature is not enabled, dstStageMask must not contain VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT

  • If the tessellation shaders feature is not enabled, srcStageMask must not contain VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT or VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT

  • If the tessellation shaders feature is not enabled, dstStageMask must not contain VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT or VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT

  • srcSubpass must be less than or equal to dstSubpass, unless one of them is VK_SUBPASS_EXTERNAL, to avoid cyclic dependencies and ensure a valid execution order

  • srcSubpass and dstSubpass must not both be equal to VK_SUBPASS_EXTERNAL

  • If srcSubpass is equal to dstSubpass, srcStageMask and dstStageMask must only contain one of VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT, VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT, VK_PIPELINE_STAGE_VERTEX_INPUT_BIT, VK_PIPELINE_STAGE_VERTEX_SHADER_BIT, VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT, VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT, VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT, VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT, VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT, VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT, VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT, VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT, or VK_PIPELINE_STAGE_ALL_GRAPHICS_BIT

  • If srcSubpass is equal to dstSubpass and not all of the stages in srcStageMask and dstStageMask are framebuffer-space stages, the logically latest pipeline stage in srcStageMask must be logically earlier than or equal to the logically earliest pipeline stage in dstStageMask

  • Any access flag included in srcAccessMask must be supported by one of the pipeline stages in srcStageMask, as specified in the table of supported access types.

  • Any access flag included in dstAccessMask must be supported by one of the pipeline stages in dstStageMask, as specified in the table of supported access types.

  • If dependencyFlags includes VK_DEPENDENCY_VIEW_LOCAL_BIT_KHX, then both srcSubpass and dstSubpass must not equal VK_SUBPASS_EXTERNAL

  • If dependencyFlags includes VK_DEPENDENCY_VIEW_LOCAL_BIT_KHX, then the render pass must have multiview enabled

  • If srcSubpass equals dstSubpass and that subpass has more than one bit set in the view mask, then dependencyFlags must include VK_DEPENDENCY_VIEW_LOCAL_BIT_KHX

Valid Usage (Implicit)

When multiview is enabled, the execution of the multiple views of one subpass may not occur simultaneously or even back-to-back, and rather may be interleaved with the execution of other subpasses. The load and store operations apply to attachments on a per-view basis. For example, an attachment using VK_ATTACHMENT_LOAD_OP_CLEAR will have each view cleared on first use, but the first use of one view may be temporally distant from the first use of another view.

Note

A good mental model for multiview is to think of a multiview subpass as if it were a collection of individual (per-view) subpasses that are logically grouped together and described as a single multiview subpass in the API. Similarly, a multiview attachment can be thought of like several individual attachments that happen to be layers in a single image. A view-local dependency between two multiview subpasses acts like a set of one-to-one dependencies between corresponding pairs of per-view subpasses. A view-global dependency between two multiview subpasses acts like a set of N × M dependencies between all pairs of per-view subpasses in the source and destination. Thus, it is a more compact representation which also makes clear the commonality and reuse that is present between views in a subpass. This interpretation motivates the answers to questions like "when does the load op apply" - it is on the first use of each view of an attachment, as if each view were a separate attachment.

If there is no subpass dependency from VK_SUBPASS_EXTERNAL to the first subpass that uses an attachment, then an implicit subpass dependency exists from VK_SUBPASS_EXTERNAL to the first subpass it is used in. The subpass dependency operates as if defined with the following parameters:

VkSubpassDependency implicitDependency = {
    .srcSubpass = VK_SUBPASS_EXTERNAL,
    .dstSubpass = firstSubpass, // First subpass attachment is used in
    .srcStageMask = VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,
    .dstStageMask = VK_PIPELINE_STAGE_ALL_COMMANDS_BIT,
    .srcAccessMask = 0,
    .dstAccessMask = VK_ACCESS_INPUT_ATTACHMENT_READ_BIT |
                     VK_ACCESS_COLOR_ATTACHMENT_READ_BIT |
                     VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT |
                     VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT |
                     VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT,
    .dependencyFlags = 0
};

Similarly, if there is no subpass dependency from the last subpass that uses an attachment to VK_SUBPASS_EXTERNAL, then an implicit subpass dependency exists from the last subpass it is used in to VK_SUBPASS_EXTERNAL. The subpass dependency operates as if defined with the following parameters:

VkSubpassDependency implicitDependency = {
    .srcSubpass = lastSubpass, // Last subpass attachment is used in
    .dstSubpass = VK_SUBPASS_EXTERNAL,
    .srcStageMask = VK_PIPELINE_STAGE_ALL_COMMANDS_BIT,
    .dstStageMask = VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT,
    .srcAccessMask = VK_ACCESS_INPUT_ATTACHMENT_READ_BIT |
                     VK_ACCESS_COLOR_ATTACHMENT_READ_BIT |
                     VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT |
                     VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT |
                     VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT,
    .dstAccessMask = 0,
    .dependencyFlags = 0
};

As subpasses may overlap or execute out of order with regard to other subpasses unless a subpass dependency chain describes otherwise, the layout transitions required between subpasses cannot be known to an application. Instead, an application provides the layout that each attachment must be in at the start and end of a render pass, and the layout it must be in during each subpass it is used in. The implementation must then execute layout transitions between subpasses in order to guarantee that the images are in the layouts required by each subpass, and in the final layout at the end of the render pass.

Automatic layout transitions away from the layout used in a subpass happen-after the availability operations for all dependencies with that subpass as the srcSubpass.

Automatic layout transitions into the layout used in a subpass happen-before the visibility operations for all dependencies with that subpass as the dstSubpass.

Automatic layout transitions away from initialLayout happen-after the availability operations for all dependencies with a srcSubpass equal to VK_SUBPASS_EXTERNAL, where dstSubpass uses the attachment that will be transitioned. For attachments created with VK_ATTACHMENT_DESCRIPTION_MAY_ALIAS_BIT, automatic layout transitions away from initialLayout happen-after the availability operations for all dependencies with a srcSubpass equal to VK_SUBPASS_EXTERNAL, where dstSubpass uses any aliased attachment.

Automatic layout transitions into finalLayout happen-before the visibility operations for all dependencies with a dstSubpass equal to VK_SUBPASS_EXTERNAL, where srcSubpass uses the attachment that will be transitioned. For attachments created with VK_ATTACHMENT_DESCRIPTION_MAY_ALIAS_BIT, automatic layout transitions into finalLayout happen-before the visibility operations for all dependencies with a dstSubpass equal to VK_SUBPASS_EXTERNAL, where srcSubpass uses any aliased attachment.

If two subpasses use the same attachment in different layouts, and both layouts are read-only, no subpass dependency needs to be specified between those subpasses. If an implementation treats those layouts separately, it must insert an implicit subpass dependency between those subpasses to separate the uses in each layout. The subpass dependency operates as if defined with the following parameters:

// Used for input attachments
VkPipelineStageFlags inputAttachmentStages = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
VkAccessFlags inputAttachmentAccess = VK_ACCESS_INPUT_ATTACHMENT_READ_BIT;

// Used for depth/stencil attachments
VkPipelineStageFlags depthStencilAttachmentStages = VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT | VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT;
VkAccessFlags depthStencilAttachmentAccess = VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT;

VkSubpassDependency implicitDependency = {
    .srcSubpass = firstSubpass,
    .dstSubpass = secondSubpass,
    .srcStageMask = inputAttachmentStages | depthStencilAttachmentStages,
    .dstStageMask = inputAttachmentStages | depthStencilAttachmentStages,
    .srcAccessMask = inputAttachmentAccess | depthStencilAttachmentAccess,
    .dstAccessMask = inputAttachmentAccess | depthStencilAttachmentAccess,
    .dependencyFlags = 0
};

If a subpass uses the same attachment as both an input attachment and either a color attachment or a depth/stencil attachment, writes via the color or depth/stencil attachment are not automatically made visible to reads via the input attachment, causing a feedback loop, except in any of the following conditions:

  • If the color components or depth/stencil components read by the input attachment are mutually exclusive with the components written by the color or depth/stencil attachments, then there is no feedback loop. This requires the graphics pipelines used by the subpass to disable writes to color components that are read as inputs via the colorWriteMask, and to disable writes to depth/stencil components that are read as inputs via depthWriteEnable or stencilTestEnable.

  • If the attachment is used as an input attachment and depth/stencil attachment only, and the depth/stencil attachment is not written to.

  • If a memory dependency is inserted between when the attachment is written and when it is subsequently read by later fragments. Pipeline barriers expressing a subpass self-dependency are the only way to achieve this, and one must be inserted every time a fragment will read values at a particular sample (x, y, layer, sample) coordinate, if those values have been written since the most recent pipeline barrier; or since the start of the subpass if there have been no pipeline barriers since the start of the subpass. A sketch of such a barrier appears in the note below.

An attachment used as both an input attachment and a color attachment must be in the VK_IMAGE_LAYOUT_SHARED_PRESENT_KHR or VK_IMAGE_LAYOUT_GENERAL layout. An attachment used as an input attachment and depth/stencil attachment must be in the VK_IMAGE_LAYOUT_SHARED_PRESENT_KHR, VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL, or VK_IMAGE_LAYOUT_GENERAL layout. An attachment must not be used as both a depth/stencil attachment and a color attachment.
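
Note

The following informative sketch shows one way the memory dependency described in the last bullet of the list above can be expressed, assuming the render pass declares a self-dependency (srcSubpass equal to dstSubpass) whose stage and access masks include the ones used here, and that the attachment is in a layout valid for simultaneous input and color use. The helper name insertFeedbackBarrier is hypothetical.

#include <vulkan/vulkan.h>

/* Illustrative only: inside a subpass that has a matching self-dependency,
 * insert a barrier so that color attachment writes already performed become
 * visible to subsequent input attachment reads at the same sample
 * coordinates.  The stage and access masks must be subsets of those
 * declared in the self-dependency. */
static void insertFeedbackBarrier(VkCommandBuffer cmd)
{
    const VkMemoryBarrier barrier = {
        .sType         = VK_STRUCTURE_TYPE_MEMORY_BARRIER,
        .pNext         = NULL,
        .srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
        .dstAccessMask = VK_ACCESS_INPUT_ATTACHMENT_READ_BIT,
    };

    vkCmdPipelineBarrier(cmd,
                         VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
                         VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
                         VK_DEPENDENCY_BY_REGION_BIT,
                         1, &barrier,   /* memory barriers */
                         0, NULL,       /* buffer memory barriers */
                         0, NULL);      /* image memory barriers */
}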

To destroy a render pass, call:

void vkDestroyRenderPass(
    VkDevice                                    device,
    VkRenderPass                                renderPass,
    const VkAllocationCallbacks*                pAllocator);
  • device is the logical device that destroys the render pass.

  • renderPass is the handle of the render pass to destroy.

  • pAllocator controls host memory allocation as described in the Memory Allocation chapter.

Valid Usage
  • All submitted commands that refer to renderPass must have completed execution

  • If VkAllocationCallbacks were provided when renderPass was created, a compatible set of callbacks must be provided here

  • If no VkAllocationCallbacks were provided when renderPass was created, pAllocator must be NULL

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • If renderPass is not VK_NULL_HANDLE, renderPass must be a valid VkRenderPass handle

  • If pAllocator is not NULL, pAllocator must be a pointer to a valid VkAllocationCallbacks structure

  • If renderPass is a valid handle, it must have been created, allocated, or retrieved from device

Host Synchronization
  • Host access to renderPass must be externally synchronized

7.2. Render Pass Compatibility

Framebuffers and graphics pipelines are created based on a specific render pass object. They must only be used with that render pass object, or one compatible with it.

Two attachment references are compatible if they have matching format and sample count, or are both VK_ATTACHMENT_UNUSED or the pointer that would contain the reference is NULL.

Two arrays of attachment references are compatible if all corresponding pairs of attachments are compatible. If the arrays are of different lengths, attachment references not present in the smaller array are treated as VK_ATTACHMENT_UNUSED.

Two render passes are compatible if their corresponding color, input, resolve, and depth/stencil attachment references are compatible and if they are otherwise identical except for:

  • Initial and final image layout in attachment descriptions

  • Load and store operations in attachment descriptions

  • Image layout in attachment references

A framebuffer is compatible with a render pass if it was created using the same render pass or a compatible render pass.

7.3. Framebuffers

Render passes operate in conjunction with framebuffers. Framebuffers represent a collection of specific memory attachments that a render pass instance uses.

Framebuffers are represented by VkFramebuffer handles:

VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkFramebuffer)

To create a framebuffer, call:

VkResult vkCreateFramebuffer(
    VkDevice                                    device,
    const VkFramebufferCreateInfo*              pCreateInfo,
    const VkAllocationCallbacks*                pAllocator,
    VkFramebuffer*                              pFramebuffer);
  • device is the logical device that creates the framebuffer.

  • pCreateInfo points to a VkFramebufferCreateInfo structure which describes additional information about framebuffer creation.

  • pAllocator controls host memory allocation as described in the Memory Allocation chapter.

  • pFramebuffer points to a VkFramebuffer handle in which the resulting framebuffer object is returned.

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • pCreateInfo must be a pointer to a valid VkFramebufferCreateInfo structure

  • If pAllocator is not NULL, pAllocator must be a pointer to a valid VkAllocationCallbacks structure

  • pFramebuffer must be a pointer to a VkFramebuffer handle

Return Codes
Success
  • VK_SUCCESS

Failure
  • VK_ERROR_OUT_OF_HOST_MEMORY

  • VK_ERROR_OUT_OF_DEVICE_MEMORY

The VkFramebufferCreateInfo structure is defined as:

typedef struct VkFramebufferCreateInfo {
    VkStructureType             sType;
    const void*                 pNext;
    VkFramebufferCreateFlags    flags;
    VkRenderPass                renderPass;
    uint32_t                    attachmentCount;
    const VkImageView*          pAttachments;
    uint32_t                    width;
    uint32_t                    height;
    uint32_t                    layers;
} VkFramebufferCreateInfo;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • flags is reserved for future use.

  • renderPass is a render pass that defines what render passes the framebuffer will be compatible with. See Render Pass Compatibility for details.

  • attachmentCount is the number of attachments.

  • pAttachments is an array of VkImageView handles, each of which will be used as the corresponding attachment in a render pass instance.

  • width, height and layers define the dimensions of the framebuffer. If the render pass uses multiview, then layers must be one and each attachment requires a number of layers that is greater than the maximum bit index set in the view mask in the subpasses in which it is used.
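
Note

The following informative sketch creates a framebuffer for a render pass with two attachments, assuming colorView and depthView are image views that already satisfy the usage, format, sample count, dimension, mip level, and swizzle requirements listed in the Valid Usage section below; the helper name createSimpleFramebuffer is hypothetical.

#include <vulkan/vulkan.h>

/* Illustrative only: create a framebuffer whose two attachments correspond,
 * in order, to the attachments declared by renderPass. */
static VkResult createSimpleFramebuffer(VkDevice device,
                                        VkRenderPass renderPass,
                                        VkImageView colorView,
                                        VkImageView depthView,
                                        uint32_t width, uint32_t height,
                                        VkFramebuffer *pFramebuffer)
{
    const VkImageView attachments[2] = { colorView, depthView };

    const VkFramebufferCreateInfo createInfo = {
        .sType           = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO,
        .pNext           = NULL,
        .flags           = 0,
        .renderPass      = renderPass,
        .attachmentCount = 2,
        .pAttachments    = attachments,
        .width           = width,
        .height          = height,
        .layers          = 1,
    };

    return vkCreateFramebuffer(device, &createInfo, NULL, pFramebuffer);
}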

Image subresources used as attachments must not be used via any non-attachment usage for the duration of a render pass instance.

Note

This restriction means that the render pass has full knowledge of all uses of all of the attachments, so that the implementation is able to make correct decisions about when and how to perform layout transitions, when to overlap execution of subpasses, etc.

It is legal for a subpass to use no color or depth/stencil attachments, and rather use shader side effects such as image stores and atomics to produce an output. In this case, the subpass continues to use the width, height, and layers of the framebuffer to define the dimensions of the rendering area, and the rasterizationSamples from each pipeline’s VkPipelineMultisampleStateCreateInfo to define the number of samples used in rasterization; however, if VkPhysicalDeviceFeatures::variableMultisampleRate is VK_FALSE, then all pipelines to be bound with a given zero-attachment subpass must have the same value for VkPipelineMultisampleStateCreateInfo::rasterizationSamples.

Valid Usage
  • attachmentCount must be equal to the attachment count specified in renderPass

  • Any given element of pAttachments that is used as a color attachment or resolve attachment by renderPass must have been created with a usage value including VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT

  • Any given element of pAttachments that is used as a depth/stencil attachment by renderPass must have been created with a usage value including VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT

  • Any given element of pAttachments that is used as an input attachment by renderPass must have been created with a usage value including VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT

  • Any given element of pAttachments must have been created with an VkFormat value that matches the VkFormat specified by the corresponding VkAttachmentDescription in renderPass

  • Any given element of pAttachments must have been created with a samples value that matches the samples value specified by the corresponding VkAttachmentDescription in renderPass

  • Any given element of pAttachments must have dimensions at least as large as the corresponding framebuffer dimension

  • Any given element of pAttachments must only specify a single mip level

  • Any given element of pAttachments must have been created with the identity swizzle

  • width must be greater than 0.

  • width must be less than or equal to VkPhysicalDeviceLimits::maxFramebufferWidth

  • height must be greater than 0.

  • height must be less than or equal to VkPhysicalDeviceLimits::maxFramebufferHeight

  • layers must be greater than 0.

  • layers must be less than or equal to VkPhysicalDeviceLimits::maxFramebufferLayers

  • Any given element of pAttachments that is a 2D or 2D array image view taken from a 3D image must not be a depth/stencil format

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO

  • pNext must be NULL

  • flags must be 0

  • renderPass must be a valid VkRenderPass handle

  • If attachmentCount is not 0, pAttachments must be a pointer to an array of attachmentCount valid VkImageView handles

  • Both of renderPass, and the elements of pAttachments that are valid handles must have been created, allocated, or retrieved from the same VkDevice

To destroy a framebuffer, call:

void vkDestroyFramebuffer(
    VkDevice                                    device,
    VkFramebuffer                               framebuffer,
    const VkAllocationCallbacks*                pAllocator);
  • device is the logical device that destroys the framebuffer.

  • framebuffer is the handle of the framebuffer to destroy.

  • pAllocator controls host memory allocation as described in the Memory Allocation chapter.

Valid Usage
  • All submitted commands that refer to framebuffer must have completed execution

  • If VkAllocationCallbacks were provided when framebuffer was created, a compatible set of callbacks must be provided here

  • If no VkAllocationCallbacks were provided when framebuffer was created, pAllocator must be NULL

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • If framebuffer is not VK_NULL_HANDLE, framebuffer must be a valid VkFramebuffer handle

  • If pAllocator is not NULL, pAllocator must be a pointer to a valid VkAllocationCallbacks structure

  • If framebuffer is a valid handle, it must have been created, allocated, or retrieved from device

Host Synchronization
  • Host access to framebuffer must be externally synchronized

7.4. Render Pass Commands

An application records the commands for a render pass instance one subpass at a time, by beginning a render pass instance, iterating over the subpasses to record commands for that subpass, and then ending the render pass instance.

To begin a render pass instance, call:

void vkCmdBeginRenderPass(
    VkCommandBuffer                             commandBuffer,
    const VkRenderPassBeginInfo*                pRenderPassBegin,
    VkSubpassContents                           contents);
  • commandBuffer is the command buffer in which to record the command.

  • pRenderPassBegin is a pointer to a VkRenderPassBeginInfo structure (defined below) which indicates the render pass to begin an instance of, and the framebuffer the instance uses.

  • contents specifies how the commands in the first subpass will be provided, and is one of the values:

    typedef enum VkSubpassContents {
        VK_SUBPASS_CONTENTS_INLINE = 0,
        VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS = 1,
    } VkSubpassContents;

    If contents is VK_SUBPASS_CONTENTS_INLINE, the contents of the subpass will be recorded inline in the primary command buffer, and secondary command buffers must not be executed within the subpass. If contents is VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS, the contents are recorded in secondary command buffers that will be called from the primary command buffer, and vkCmdExecuteCommands is the only valid command on the command buffer until vkCmdNextSubpass or vkCmdEndRenderPass.

After beginning a render pass instance, the command buffer is ready to record the commands for the first subpass of that render pass.
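
Note

The following informative sketch shows the overall recording pattern described above for a hypothetical two-subpass render pass whose contents are recorded inline; the draw commands themselves are elided.

#include <vulkan/vulkan.h>

/* Illustrative only: record a two-subpass render pass instance with all
 * subpass contents recorded inline.  beginInfo is assumed to be a fully
 * populated VkRenderPassBeginInfo for a two-subpass render pass. */
static void recordTwoSubpassInstance(VkCommandBuffer cmd,
                                     const VkRenderPassBeginInfo *beginInfo)
{
    vkCmdBeginRenderPass(cmd, beginInfo, VK_SUBPASS_CONTENTS_INLINE);
    /* ... record draw commands for subpass 0 ... */

    vkCmdNextSubpass(cmd, VK_SUBPASS_CONTENTS_INLINE);
    /* ... record draw commands for subpass 1 ... */

    vkCmdEndRenderPass(cmd);
}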

Valid Usage
  • If any of the initialLayout or finalLayout members of the VkAttachmentDescription structures or the layout member of the VkAttachmentReference structures specified when creating the render pass specified in the renderPass member of pRenderPassBegin is VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL then the corresponding attachment image subresource of the framebuffer specified in the framebuffer member of pRenderPassBegin must have been created with VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT set

  • If any of the initialLayout or finalLayout members of the VkAttachmentDescription structures or the layout member of the VkAttachmentReference structures specified when creating the render pass specified in the renderPass member of pRenderPassBegin is VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL or VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL then the corresponding attachment image subresource of the framebuffer specified in the framebuffer member of pRenderPassBegin must have been created with VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT set

  • If any of the initialLayout or finalLayout members of the VkAttachmentDescription structures or the layout member of the VkAttachmentReference structures specified when creating the render pass specified in the renderPass member of pRenderPassBegin is VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL then the corresponding attachment image subresource of the framebuffer specified in the framebuffer member of pRenderPassBegin must have been created with VK_IMAGE_USAGE_SAMPLED_BIT or VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT set

  • If any of the initialLayout or finalLayout members of the VkAttachmentDescription structures or the layout member of the VkAttachmentReference structures specified when creating the render pass specified in the renderPass member of pRenderPassBegin is VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL then the corresponding attachment image subresource of the framebuffer specified in the framebuffer member of pRenderPassBegin must have been created with VK_IMAGE_USAGE_TRANSFER_SRC_BIT set

  • If any of the initialLayout or finalLayout members of the VkAttachmentDescription structures or the layout member of the VkAttachmentReference structures specified when creating the render pass specified in the renderPass member of pRenderPassBegin is VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL then the corresponding attachment image subresource of the framebuffer specified in the framebuffer member of pRenderPassBegin must have been created with VK_IMAGE_USAGE_TRANSFER_DST_BIT set

  • If any of the initialLayout members of the VkAttachmentDescription structures specified when creating the render pass specified in the renderPass member of pRenderPassBegin is not VK_IMAGE_LAYOUT_UNDEFINED, then each such initialLayout must be equal to the current layout of the corresponding attachment image subresource of the framebuffer specified in the framebuffer member of pRenderPassBegin

  • The srcStageMask and dstStageMask members of any element of the pDependencies member of VkRenderPassCreateInfo used to create renderpass must be supported by the capabilities of the queue family identified by the queueFamilyIndex member of the VkCommandPoolCreateInfo used to create the command pool which commandBuffer was allocated from.

Valid Usage (Implicit)
  • commandBuffer must be a valid VkCommandBuffer handle

  • pRenderPassBegin must be a pointer to a valid VkRenderPassBeginInfo structure

  • contents must be a valid VkSubpassContents value

  • commandBuffer must be in the recording state

  • The VkCommandPool that commandBuffer was allocated from must support graphics operations

  • This command must only be called outside of a render pass instance

  • commandBuffer must be a primary VkCommandBuffer

Host Synchronization
  • Host access to commandBuffer must be externally synchronized

  • Host access to the VkCommandPool that commandBuffer was allocated from must be externally synchronized

Command Properties
Command Buffer Levels   Render Pass Scope   Supported Queue Types   Pipeline Type
Primary                 Outside             Graphics                Graphics

The VkRenderPassBeginInfo structure is defined as:

typedef struct VkRenderPassBeginInfo {
    VkStructureType        sType;
    const void*            pNext;
    VkRenderPass           renderPass;
    VkFramebuffer          framebuffer;
    VkRect2D               renderArea;
    uint32_t               clearValueCount;
    const VkClearValue*    pClearValues;
} VkRenderPassBeginInfo;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • renderPass is the render pass to begin an instance of.

  • framebuffer is the framebuffer containing the attachments that are used with the render pass.

  • renderArea is the render area that is affected by the render pass instance, and is described in more detail below.

  • clearValueCount is the number of elements in pClearValues.

  • pClearValues is an array of VkClearValue structures that contains clear values for each attachment, if the attachment uses a loadOp value of VK_ATTACHMENT_LOAD_OP_CLEAR or if the attachment has a depth/stencil format and uses a stencilLoadOp value of VK_ATTACHMENT_LOAD_OP_CLEAR. The array is indexed by attachment number. Only elements corresponding to cleared attachments are used. Other elements of pClearValues are ignored.
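
Note

The following informative sketch fills out a VkRenderPassBeginInfo for a hypothetical render pass whose attachment 0 is a color attachment and attachment 1 a depth/stencil attachment, both using VK_ATTACHMENT_LOAD_OP_CLEAR; note that pClearValues is indexed by attachment number. The helper name fillBeginInfo and the clear values are illustrative only.

#include <vulkan/vulkan.h>

/* Illustrative only: element 0 of pClearValues clears the color attachment
 * and element 1 clears the depth/stencil attachment. */
static void fillBeginInfo(VkRenderPassBeginInfo *info,
                          VkRenderPass renderPass,
                          VkFramebuffer framebuffer,
                          uint32_t width, uint32_t height,
                          VkClearValue clearValues[2])
{
    clearValues[0].color.float32[0] = 0.0f;   /* R */
    clearValues[0].color.float32[1] = 0.0f;   /* G */
    clearValues[0].color.float32[2] = 0.0f;   /* B */
    clearValues[0].color.float32[3] = 1.0f;   /* A */
    clearValues[1].depthStencil.depth   = 1.0f;
    clearValues[1].depthStencil.stencil = 0;

    info->sType           = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO;
    info->pNext           = NULL;
    info->renderPass      = renderPass;
    info->framebuffer     = framebuffer;
    info->renderArea.offset.x = 0;
    info->renderArea.offset.y = 0;
    info->renderArea.extent.width  = width;
    info->renderArea.extent.height = height;
    info->clearValueCount = 2;
    info->pClearValues    = clearValues;
}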

renderArea is the render area that is affected by the render pass instance. The effects of attachment load, store and multisample resolve operations are restricted to the pixels whose x and y coordinates fall within the render area on all attachments. The render area extends to all layers of framebuffer. The application must ensure (using scissor if necessary) that all rendering is contained within the render area, otherwise the pixels outside of the render area become undefined and shader side effects may occur for fragments outside the render area. The render area must be contained within the framebuffer dimensions.

When multiview is enabled, the resolve operation at the end of a subpass applies to all views in the view mask.

Note

There may be a performance cost for using a render area smaller than the framebuffer, unless it matches the render area granularity for the render pass.

Valid Usage
  • clearValueCount must be greater than the largest attachment index in renderPass that specifies a loadOp (or stencilLoadOp, if the attachment has a depth/stencil format) of VK_ATTACHMENT_LOAD_OP_CLEAR

  • If clearValueCount is not 0, pClearValues must be a pointer to an array of clearValueCount valid VkClearValue unions

  • renderPass must be compatible with the renderPass member of the VkFramebufferCreateInfo structure specified when creating framebuffer.

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO

  • pNext must be NULL or a pointer to a valid instance of VkDeviceGroupRenderPassBeginInfoKHX

  • renderPass must be a valid VkRenderPass handle

  • framebuffer must be a valid VkFramebuffer handle

  • Both of framebuffer, and renderPass must have been created, allocated, or retrieved from the same VkDevice

If the pNext list of VkRenderPassBeginInfo includes a VkDeviceGroupRenderPassBeginInfoKHX structure, then that structure includes a device mask and set of render areas for the render pass instance.

The VkDeviceGroupRenderPassBeginInfoKHX structure is defined as:

typedef struct VkDeviceGroupRenderPassBeginInfoKHX {
    VkStructureType    sType;
    const void*        pNext;
    uint32_t           deviceMask;
    uint32_t           deviceRenderAreaCount;
    const VkRect2D*    pDeviceRenderAreas;
} VkDeviceGroupRenderPassBeginInfoKHX;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • deviceMask is the device mask for the render pass instance.

  • deviceRenderAreaCount is the number of elements in the pDeviceRenderAreas array.

  • pDeviceRenderAreas is an array of structures of type VkRect2D defining the render area for each physical device.

The deviceMask serves several purposes. It is an upper bound on the set of physical devices that can be used during the render pass instance, and the initial device mask when the render pass instance begins. Render pass attachment load, store, and resolve operations only apply to physical devices included in the device mask. Subpass dependencies only apply to the physical devices in the device mask.

If deviceRenderAreaCount is not zero, then the elements of pDeviceRenderAreas override the value of VkRenderPassBeginInfo::renderArea, and provide a render area specific to each physical device. These render areas serve the same purpose as VkRenderPassBeginInfo::renderArea, including controlling the region of attachments that are cleared by VK_ATTACHMENT_LOAD_OP_CLEAR and that are resolved into resolve attachments.

If this structure is not present, the render pass instance’s device mask is the value of VkDeviceGroupCommandBufferBeginInfoKHX::deviceMask. If this structure is not present or if deviceRenderAreaCount is zero, VkRenderPassBeginInfo::renderArea is used for all physical devices.
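
Note

The following informative sketch, which assumes the VK_KHX_device_group extension is enabled and its structures are available in the headers, chains a VkDeviceGroupRenderPassBeginInfoKHX into a render pass begin info so that two physical devices each render one half of a 1024x768 framebuffer; the device mask, render areas, and helper name are illustrative only.

#include <vulkan/vulkan.h>

/* Illustrative only, and only meaningful when VK_KHX_device_group is
 * enabled: split the render area between two physical devices. */
static void addDeviceGroupBeginInfo(VkRenderPassBeginInfo *beginInfo,
                                    VkDeviceGroupRenderPassBeginInfoKHX *dg,
                                    VkRect2D areas[2])
{
    areas[0] = (VkRect2D){ .offset = {   0, 0 }, .extent = { 512, 768 } };
    areas[1] = (VkRect2D){ .offset = { 512, 0 }, .extent = { 512, 768 } };

    dg->sType                 = VK_STRUCTURE_TYPE_DEVICE_GROUP_RENDER_PASS_BEGIN_INFO_KHX;
    dg->pNext                 = NULL;
    dg->deviceMask            = 0x3;        /* physical devices 0 and 1 */
    dg->deviceRenderAreaCount = 2;
    dg->pDeviceRenderAreas    = areas;

    beginInfo->pNext = dg;
}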

Valid Usage
  • deviceMask must be a valid device mask value

  • deviceMask must not be zero

  • deviceMask must be a subset of the command buffer’s initial device mask

  • deviceRenderAreaCount must either be zero or equal to the number of physical devices in the logical device.

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_DEVICE_GROUP_RENDER_PASS_BEGIN_INFO_KHX

  • If deviceRenderAreaCount is not 0, pDeviceRenderAreas must be a pointer to an array of deviceRenderAreaCount VkRect2D structures

To query the render area granularity, call:

void vkGetRenderAreaGranularity(
    VkDevice                                    device,
    VkRenderPass                                renderPass,
    VkExtent2D*                                 pGranularity);
  • device is the logical device that owns the render pass.

  • renderPass is a handle to a render pass.

  • pGranularity points to a VkExtent2D structure in which the granularity is returned.

The conditions leading to an optimal renderArea are:

  • the offset.x member in renderArea is a multiple of the width member of the returned VkExtent2D (the horizontal granularity).

  • the offset.y member in renderArea is a multiple of the height member of the returned VkExtent2D (the vertical granularity).

  • either the extent.width member in renderArea is a multiple of the horizontal granularity or offset.x+extent.width is equal to the width of the framebuffer in the VkRenderPassBeginInfo.

  • either the extent.height member in renderArea is a multiple of the vertical granularity or offset.y+extent.height is equal to the height of the framebuffer in the VkRenderPassBeginInfo.
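
Note

The following informative sketch queries the granularity and adjusts a desired render area so that it satisfies the conditions above, assuming non-negative offsets; the helper name alignRenderArea and the rounding strategy are illustrative only.

#include <vulkan/vulkan.h>

/* Illustrative only: align a desired render area to the render area
 * granularity of a render pass, clamping to the framebuffer dimensions. */
static VkRect2D alignRenderArea(VkDevice device, VkRenderPass renderPass,
                                VkRect2D desired,
                                uint32_t fbWidth, uint32_t fbHeight)
{
    VkExtent2D g;
    vkGetRenderAreaGranularity(device, renderPass, &g);

    VkRect2D area;
    /* Round the offset down to a multiple of the granularity. */
    area.offset.x = (desired.offset.x / (int32_t)g.width)  * (int32_t)g.width;
    area.offset.y = (desired.offset.y / (int32_t)g.height) * (int32_t)g.height;

    /* Round the far edge up to the granularity, clamping to the framebuffer;
     * a clamped edge satisfies the "equal to the framebuffer size" case. */
    uint32_t right  = desired.offset.x + desired.extent.width;
    uint32_t bottom = desired.offset.y + desired.extent.height;
    right  = ((right  + g.width  - 1) / g.width)  * g.width;
    bottom = ((bottom + g.height - 1) / g.height) * g.height;
    if (right  > fbWidth)  right  = fbWidth;
    if (bottom > fbHeight) bottom = fbHeight;

    area.extent.width  = right  - (uint32_t)area.offset.x;
    area.extent.height = bottom - (uint32_t)area.offset.y;
    return area;
}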

Subpass dependencies are not affected by the render area, and apply to the entire image subresources attached to the framebuffer. Similarly, pipeline barriers are valid even if their effect extends outside the render area.

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • renderPass must be a valid VkRenderPass handle

  • pGranularity must be a pointer to a VkExtent2D structure

  • renderPass must have been created, allocated, or retrieved from device

To transition to the next subpass in the render pass instance after recording the commands for a subpass, call:

void vkCmdNextSubpass(
    VkCommandBuffer                             commandBuffer,
    VkSubpassContents                           contents);
  • commandBuffer is the command buffer in which to record the command.

  • contents specifies how the commands in the next subpass will be provided, in the same fashion as the corresponding parameter of vkCmdBeginRenderPass.

The subpass index for a render pass begins at zero when vkCmdBeginRenderPass is recorded, and increments each time vkCmdNextSubpass is recorded.

Moving to the next subpass automatically performs any multisample resolve operations in the subpass being ended. End-of-subpass multisample resolves are treated as color attachment writes for the purposes of synchronization. That is, they are considered to execute in the VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT pipeline stage and their writes are synchronized with VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT. Synchronization between rendering within a subpass and any resolve operations at the end of the subpass occurs automatically, without need for explicit dependencies or pipeline barriers. However, if the resolve attachment is also used in a different subpass, an explicit dependency is needed.

After transitioning to the next subpass, the application can record the commands for that subpass.

Valid Usage
  • The current subpass index must be less than the number of subpasses in the render pass minus one

Valid Usage (Implicit)
  • commandBuffer must be a valid VkCommandBuffer handle

  • contents must be a valid VkSubpassContents value

  • commandBuffer must be in the recording state

  • The VkCommandPool that commandBuffer was allocated from must support graphics operations

  • This command must only be called inside of a render pass instance

  • commandBuffer must be a primary VkCommandBuffer

Host Synchronization
  • Host access to commandBuffer must be externally synchronized

  • Host access to the VkCommandPool that commandBuffer was allocated from must be externally synchronized

Command Properties
Command Buffer Levels   Render Pass Scope   Supported Queue Types   Pipeline Type
Primary                 Inside              Graphics                Graphics

To record a command to end a render pass instance after recording the commands for the last subpass, call:

void vkCmdEndRenderPass(
    VkCommandBuffer                             commandBuffer);
  • commandBuffer is the command buffer in which to end the current render pass instance.

Ending a render pass instance performs any multisample resolve operations on the final subpass.

Valid Usage
  • The current subpass index must be equal to the number of subpasses in the render pass minus one

Valid Usage (Implicit)
  • commandBuffer must be a valid VkCommandBuffer handle

  • commandBuffer must be in the recording state

  • The VkCommandPool that commandBuffer was allocated from must support graphics operations

  • This command must only be called inside of a render pass instance

  • commandBuffer must be a primary VkCommandBuffer

Host Synchronization
  • Host access to commandBuffer must be externally synchronized

  • Host access to the VkCommandPool that commandBuffer was allocated from must be externally synchronized

Command Properties
Command Buffer Levels   Render Pass Scope   Supported Queue Types   Pipeline Type
Primary                 Inside              Graphics                Graphics

8. Shaders

A shader specifies programmable operations that execute for each vertex, control point, tessellated vertex, primitive, fragment, or workgroup in the corresponding stage(s) of the graphics and compute pipelines.

Graphics pipelines include vertex shader execution as a result of primitive assembly, followed, if enabled, by tessellation control and evaluation shaders operating on patches, geometry shaders, if enabled, operating on primitives, and fragment shaders, if present, operating on fragments generated by Rasterization. In this specification, vertex, tessellation control, tessellation evaluation and geometry shaders are collectively referred to as vertex processing stages and occur in the logical pipeline before rasterization. The fragment shader occurs logically after rasterization.

Only the compute shader stage is included in a compute pipeline. Compute shaders operate on compute invocations in a workgroup.

Shaders can read from input variables, and read from and write to output variables. Input and output variables can be used to transfer data between shader stages, or to allow the shader to interact with values that exist in the execution environment. Similarly, the execution environment provides constants that describe capabilities.

Shader variables are associated with execution environment-provided inputs and outputs using built-in decorations in the shader. The available decorations for each stage are documented in the following subsections.

8.1. Shader Modules

Shader modules contain shader code and one or more entry points. Shaders are selected from a shader module by specifying an entry point as part of pipeline creation. The stages of a pipeline can use shaders that come from different modules. The shader code defining a shader module must be in the SPIR-V format, as described by the Vulkan Environment for SPIR-V appendix.

Shader modules are represented by VkShaderModule handles:

VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkShaderModule)

To create a shader module, call:

VkResult vkCreateShaderModule(
    VkDevice                                    device,
    const VkShaderModuleCreateInfo*             pCreateInfo,
    const VkAllocationCallbacks*                pAllocator,
    VkShaderModule*                             pShaderModule);
  • device is the logical device that creates the shader module.

  • pCreateInfo is a pointer to an instance of the VkShaderModuleCreateInfo structure.

  • pAllocator controls host memory allocation as described in the Memory Allocation chapter.

  • pShaderModule points to a VkShaderModule handle in which the resulting shader module object is returned.

Once a shader module has been created, any entry points it contains can be used in pipeline shader stages as described in Compute Pipelines and Graphics Pipelines.

If the shader stage fails to compile, VK_ERROR_INVALID_SHADER_NV will be generated and the compile log will be reported back to the application by VK_EXT_debug_report if enabled.

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • pCreateInfo must be a pointer to a valid VkShaderModuleCreateInfo structure

  • If pAllocator is not NULL, pAllocator must be a pointer to a valid VkAllocationCallbacks structure

  • pShaderModule must be a pointer to a VkShaderModule handle

Return Codes
Success
  • VK_SUCCESS

Failure
  • VK_ERROR_OUT_OF_HOST_MEMORY

  • VK_ERROR_OUT_OF_DEVICE_MEMORY

  • VK_ERROR_INVALID_SHADER_NV

The VkShaderModuleCreateInfo structure is defined as:

typedef struct VkShaderModuleCreateInfo {
    VkStructureType              sType;
    const void*                  pNext;
    VkShaderModuleCreateFlags    flags;
    size_t                       codeSize;
    const uint32_t*              pCode;
} VkShaderModuleCreateInfo;
  • sType is the type of this structure.

  • pNext is NULL or a pointer to an extension-specific structure.

  • flags is reserved for future use.

  • codeSize is the size, in bytes, of the code pointed to by pCode.

  • pCode points to code that is used to create the shader module. The type and format of the code is determined from the content of the memory addressed by pCode.
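
Note

The following informative sketch creates a shader module from SPIR-V code already resident in memory; expressing codeSize as a word count times sizeof(uint32_t) keeps it a multiple of four as required below. The helper name createShaderModuleFromSpirv is hypothetical.

#include <vulkan/vulkan.h>

/* Illustrative only: spirvWords must point to wordCount 32-bit words of
 * valid SPIR-V; codeSize is expressed in bytes. */
static VkResult createShaderModuleFromSpirv(VkDevice device,
                                            const uint32_t *spirvWords,
                                            size_t wordCount,
                                            VkShaderModule *pModule)
{
    const VkShaderModuleCreateInfo createInfo = {
        .sType    = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO,
        .pNext    = NULL,
        .flags    = 0,
        .codeSize = wordCount * sizeof(uint32_t),
        .pCode    = spirvWords,
    };

    return vkCreateShaderModule(device, &createInfo, NULL, pModule);
}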

Valid Usage
  • codeSize must be greater than 0

  • If pCode points to SPIR-V code, codeSize must be a multiple of 4

  • pCode must point to either valid SPIR-V code, formatted and packed as described by the Khronos SPIR-V Specification, or valid GLSL code written to the GL_KHR_vulkan_glsl extension specification

  • If pCode points to SPIR-V code, that code must adhere to the validation rules described by the Validation Rules within a Module section of the SPIR-V Environment appendix

  • If pCode points to GLSL code, it must be valid GLSL code written to the GL_KHR_vulkan_glsl GLSL extension specification

  • pCode must declare the Shader capability for SPIR-V code

  • pCode must not declare any capability that is not supported by the API, as described by the Capabilities section of the SPIR-V Environment appendix

  • If pCode declares any of the capabilities that are listed as not required by the implementation, the relevant feature must be enabled, as listed in the SPIR-V Environment appendix

Valid Usage (Implicit)
  • sType must be VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO

  • pNext must be NULL

  • flags must be 0

  • pCode must be a pointer to an array of codeSize / 4 uint32_t values

To destroy a shader module, call:

void vkDestroyShaderModule(
    VkDevice                                    device,
    VkShaderModule                              shaderModule,
    const VkAllocationCallbacks*                pAllocator);
  • device is the logical device that destroys the shader module.

  • shaderModule is the handle of the shader module to destroy.

  • pAllocator controls host memory allocation as described in the Memory Allocation chapter.

A shader module can be destroyed while pipelines created using its shaders are still in use.

Valid Usage
  • If VkAllocationCallbacks were provided when shaderModule was created, a compatible set of callbacks must be provided here

  • If no VkAllocationCallbacks were provided when shaderModule was created, pAllocator must be NULL

Valid Usage (Implicit)
  • device must be a valid VkDevice handle

  • If shaderModule is not VK_NULL_HANDLE, shaderModule must be a valid VkShaderModule handle

  • If pAllocator is not NULL, pAllocator must be a pointer to a valid VkAllocationCallbacks structure

  • If shaderModule is a valid handle, it must have been created, allocated, or retrieved from device

Host Synchronization
  • Host access to shaderModule must be externally synchronized

8.2. Shader Execution

At each stage of the pipeline, multiple invocations of a shader may execute simultaneously. Further, invocations of a single shader produced as the result of different commands may execute simultaneously. The relative execution order of invocations of the same shader type is undefined. Shader invocations may complete in a different order than that in which the primitives they originated from were drawn or dispatched by the application. However, fragment shader outputs are written to attachments in rasterization order.

The relative order of invocations of different shader types is largely undefined. However, when invoking a shader whose inputs are generated from a previous pipeline stage, the shader invocations from the previous stage are guaranteed to have executed far enough to generate input values for all required inputs.

8.3. Shader Memory Access Ordering

The order in which image or buffer memory is read or written by shaders is largely undefined. For some shader types (vertex, tessellation evaluation, and in some cases, fragment), even the number of shader invocations that may perform loads and stores is undefined.

In particular, the following rules apply:

  • Vertex and tessellation evaluation shaders will be invoked at least once for each unique vertex, as defined in those sections.

  • Fragment shaders will be invoked zero or more times, as defined in that section.

  • The relative order of invocations of the same shader type is undefined. A store issued by a shader when working on primitive B might complete prior to a store for primitive A, even if primitive A is specified prior to primitive B. This applies even to fragment shaders; while fragment shader outputs are always written to the framebuffer in rasterization order, stores executed by fragment shader invocations are not.

  • The relative order of invocations of different shader types is largely undefined.

Note

The above limitations on shader invocation order make some forms of synchronization between shader invocations within a single set of primitives unimplementable. For example, having one invocation poll memory written by another invocation assumes that the other invocation has been launched and will complete its writes in finite time.

Stores issued to different memory locations within a single shader invocation may not be visible to other invocations, or may not become visible in the order they were performed.

The OpMemoryBarrier instruction can be used to provide stronger ordering of reads and writes performed by a single invocation. OpMemoryBarrier guarantees that any memory transactions issued by the shader invocation prior to the instruction complete prior to the memory transactions issued after the instruction. Memory barriers are needed for algorithms that require multiple invocations to access the same memory and require the operations to be performed in a partially-defined relative order. For example, if one shader invocation does a series of writes, followed by an OpMemoryBarrier instruction, followed by another write, then the results of the series of writes before the barrier become visible to other shader invocations at a time earlier or equal to when the results of the final write become visible to those invocations. In practice it means that another invocation that sees the results of the final write would also see the previous writes. Without the memory barrier, the final write may be visible before the previous writes.

Writes that are the result of shader stores through a variable decorated with Coherent automatically have available writes to the same buffer, buffer view, or image view made visible to them, and are themselves automatically made available to access by the same buffer, buffer view, or image view. Reads that are the result of shader loads through a variable decorated with Coherent automatically have available writes to the same buffer, buffer view, or image view made visible to them. The order that coherent writes to different locations become available is undefined, unless enforced by a memory barrier instruction or other memory dependency.

Note

Explicit memory dependencies must still be used to guarantee availability and visibility for access via other buffers, buffer views, or image views.

The built-in atomic memory transaction instructions can be used to read and write a given memory address atomically. While built-in atomic functions issued by multiple shader invocations are executed in undefined order relative to each other, these functions perform both a read and a write of a memory address and guarantee that no other memory transaction will write to the underlying memory between the read and write. Atomic operations ensure automatic availability and visibility for writes and reads in the same way as those to Coherent variables.

Note

Memory accesses performed on different resource descriptors with the same memory backing may not be well-defined, even with the Coherent decoration or via atomics, due to factors such as image layouts or ownership of the resource, as described in the Synchronization and Cache Control chapter.

Note

Atomics allow shaders to use shared global addresses for mutual exclusion or as counters, among other uses.

8.4. Shader Inputs and Outputs

Data is passed into and out of shaders using variables with input or output storage class, respectively. User-defined inputs and outputs are connected between stages by matching their Location decorations. Additionally, data can be provided by or communicated to special functions provided by the execution environment using BuiltIn decorations.

In many cases, the same BuiltIn decoration can be used in multiple shader stages with similar meaning. The specific behavior of variables decorated as BuiltIn is documented in the following sections.

8.5. Vertex Shaders

Each vertex shader invocation operates on one vertex and its associated vertex attribute data, and outputs one vertex and associated data. Graphics pipelines must include a vertex shader, and the vertex shader stage is always the first shader stage in the graphics pipeline.

8.5.1. Vertex Shader Execution

A vertex shader must be executed at least once for each vertex specified by a draw command. If the subpass includes multiple views in its view mask, the shader may be invoked separately for each view. During execution, the shader is presented with the index of the vertex and instance for which it has been invoked. Input variables declared in the vertex shader are filled by the implementation with the values of vertex attributes associated with the invocation being executed.
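As an illustration of how vertex attribute data reaches vertex shader input variables, the following hedged C sketch describes a single attribute fetched from vertex input binding 0 and delivered to the input variable decorated with Location 0 in the vertex shader. The format and offset are placeholders chosen for this example.

    #include <vulkan/vulkan.h>

    /* A minimal sketch: one per-vertex attribute, three 32-bit floats starting
     * at byte offset 0 of vertex input binding 0, feeding the vertex shader
     * input decorated with Location 0. */
    const VkVertexInputAttributeDescription positionAttribute = {
        .location = 0,                         /* matches Location 0 in the shader */
        .binding  = 0,                         /* vertex buffer binding index */
        .format   = VK_FORMAT_R32G32B32_SFLOAT,
        .offset   = 0,                         /* byte offset within the vertex */
    };
    /* Referenced from
     * VkPipelineVertexInputStateCreateInfo::pVertexAttributeDescriptions. */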

If the same vertex is specified multiple times in a draw command (e.g. by including the same index value multiple times in an index buffer) the implementation may reuse the results of vertex shading if it can statically determine that the vertex shader invocations will produce identical results.
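For instance, the hedged C sketch below records an indexed draw whose index data is assumed to contain repeated index values; whether the vertex shader runs once or several times for such a vertex is left to the implementation, as described above. The command buffer and index buffer handles are hypothetical placeholders.

    #include <vulkan/vulkan.h>

    /* A minimal sketch: the bound index data is assumed to contain repeated
     * values, e.g. {0, 1, 2, 2, 1, 3}. The implementation may shade vertices
     * 1 and 2 once each and reuse the results, or it may shade them again. */
    void recordIndexedDraw(VkCommandBuffer cmd, VkBuffer indexBuffer)
    {
        vkCmdBindIndexBuffer(cmd, indexBuffer, 0, VK_INDEX_TYPE_UINT16);

        vkCmdDrawIndexed(
            cmd,
            6,      /* indexCount: two triangles sharing an edge */
            1,      /* instanceCount */
            0,      /* firstIndex */
            0,      /* vertexOffset */
            0);     /* firstInstance */
    }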

Note

It is implementation-dependent whether and when results of vertex shading are reused, and thus how many times the vertex shader will be executed. This is true even if the vertex shader contains stores or atomic operations (see vertexPipelineStoresAndAtomics).

8.6. Tessellation Control Shaders

The tessellation control shader is used to read an input patch provided by the application and to produce an output patch. Each tessellation control shader invocation operates on an input patch (after all control points in the patch are processed by a vertex shader) and its associated data, and outputs a single control point of the output patch and its associated data, and can also output additional per-patch data. The input patch is sized according to the patchControlPoints member of VkPipelineTessellationStateCreateInfo, as part of input assembly. The size of the output patch is controlled by the OpExecutionMode OutputVertices specified in the tessellation control or tessellation evaluation shaders, which must be specified in at least one of the shaders. The size of the input and output patches must each be greater than zero and less than or equal to VkPhysicalDeviceLimits::maxTessellationPatchSize.
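As a concrete illustration, the following hedged C sketch fills in the tessellation state for a graphics pipeline with an input patch size of four control points; the output patch size would be declared with the OpExecutionMode OutputVertices in the shaders themselves. The surrounding pipeline creation code is omitted.

    #include <vulkan/vulkan.h>

    /* A minimal sketch: each input patch consumed by the tessellation control
     * shader will contain four control points. The value must be greater than
     * zero and no larger than VkPhysicalDeviceLimits::maxTessellationPatchSize. */
    const VkPipelineTessellationStateCreateInfo tessellationState = {
        .sType = VK_STRUCTURE_TYPE_PIPELINE_TESSELLATION_STATE_CREATE_INFO,
        .pNext = NULL,
        .flags = 0,
        .patchControlPoints = 4,
    };
    /* Assigned to VkGraphicsPipelineCreateInfo::pTessellationState when the
     * pipeline includes tessellation shader stages. */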

8.6.1. Tessellation Control Shader Execution

A tessellation control shader is invoked at least once for each output vertex in a patch. If the subpass includes multiple views in its view mask, the shader may be invoked separately for each view.

Inputs to the tessellation control shader are generated by the vertex shader. Each invocation of the tessellation control shader can read the attributes of any incoming vertices and their associated data. The invocations corresponding to a given patch execute logically in parallel, with undefined relative execution order. However, the OpControlBarrier instruction can be used to provide limited control of the execution order by synchronizing invocations within a patch, effectively dividing tessellation control shader execution into a set of phases. Tessellation control shaders will read undefined values if one invocation reads a per-vertex or per-patch attribute written by another invocation at any point during the same phase, or if two invocations attempt to write different values to the same per-patch output in a single phase.

8.7. Tessellation Evaluation Shaders

The tessellation evaluation shader operates on an input patch of control points and their associated data, and a single input barycentric coordinate indicating the invocation’s relative position within the subdivided patch, and outputs a single vertex and its associated data.

8.7.1. Tessellation Evaluation Shader Execution

A tessellation evaluation shader is invoked at least once for each unique vertex generated by the tessellator. If the subpass includes multiple views in its view mask, the shader may be invoked separately for each view.

8.8. Geometry Shaders

The geometry shader operates on a group of vertices and their associated data assembled from a single input primitive, and emits zero or more output primitives and the group of vertices and their associated data required for each output primitive.

8.8.1. Geometry Shader Execution

A geometry shader is invoked at least once for each primitive produced by the tessellation stages, or at least once for each primitive generated by primitive assembly when tessellation is not in use. The number of geometry shader invocations per input primitive is determined from the invocation count of the geometry shader specified by the OpExecutionMode Invocations in the geometry shader. If the invocation count is not specified, then a default of one invocation is executed. If the subpass includes multiple views in its view mask, the shader may be invoked separately for each view.

8.9. Fragment Shaders

Fragment shaders are invoked as the result of rasterization in a graphics pipeline. Each fragment shader invocation operates on a single fragment and its associated data. With few exceptions, fragment shaders do not have access to any data associated with other fragments and are considered to execute in isolation of fragment shader invocations associated with other fragments.

8.9.1. Fragment Shader Execution

For each fragment generated by rasterization, a fragment shader may be invoked. A fragment shader must not be invoked if the Early Per-Fragment Tests cause it to have no coverage. If the subpass includes multiple views in its view mask, the shader may be invoked separately for each view.

Furthermore, if it is determined that a fragment generated as the result of rasterizing a first primitive will have its outputs entirely overwritten by a fragment generated as the result of rasterizing a second primitive in the same subpass, and the fragment shader used for the fragment has no other side effects, then the fragment shader may not be executed for the fragment from the first primitive.

Relative ordering of execution of different fragment shader invocations is not defined.

The number of fragment shader invocations produced per-pixel is determined as follows:

  • If per-sample shading is enabled, the fragment shader is invoked once per covered sample; see the multisample state sketch after this list for one way to enable it.

  • Otherwise, the fragment shader is invoked at least once per fragment but no more than once per covered sample.
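The multisample state of the pipeline is one way to request per-sample shading. The following hedged C sketch enables sample shading for a pipeline with four samples per pixel; per-sample shading can also be implied by the shader itself, for example when a fragment shader input is decorated with Sample. The remaining members are filled with illustrative values.

    #include <vulkan/vulkan.h>

    /* A minimal sketch: with sampleShadingEnable set to VK_TRUE and
     * minSampleShading set to 1.0, the fragment shader is invoked once for
     * each covered sample rather than once per fragment. */
    const VkPipelineMultisampleStateCreateInfo multisampleState = {
        .sType = VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO,
        .pNext = NULL,
        .flags = 0,
        .rasterizationSamples = VK_SAMPLE_COUNT_4_BIT,
        .sampleShadingEnable  = VK_TRUE,
        .minSampleShading     = 1.0f,
        .pSampleMask          = NULL,
        .alphaToCoverageEnable = VK_FALSE,
        .alphaToOneEnable      = VK_FALSE,
    };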

In addition to the conditions outlined above for the invocation of a fragment shader, a fragment shader invocation may be produced as a helper invocation. A helper invocation is a fragment shader invocation that is created solely for the purposes of evaluating derivatives for use in non-helper fragment shader invocations. Stores and atomics performed by helper invocations must not have any effect on memory, and values returned by atomic instructions in helper invocations are undefined.

8.9.2. Early Fragment Tests

An explicit control is provided to allow fragment shaders to enable early fragment tests. If the fragment shader specifies the EarlyFragmentTests OpExecutionMode, the per-fragment tests described in Early Fragment Test Mode are performed prior to fragment shader execution. Otherwise, they are performed after fragment shader execution.

8.10. Compute Shaders

Compute shaders are invoked via vkCmdDispatch and vkCmdDispatchIndirect commands. In general, they have access to similar resources as shader stages executing as part of a graphics pipeline.

Compute workloads are formed from groups of work items called workgroups and processed by the compute shader in the current compute pipeline. A workgroup is a collection of shader invocations that execute the same shader, potentially in parallel. Compute shaders execute in global workgroups which are divided into a number of local workgroups with a size that can be set by assigning a value to the LocalSize execution mode or via an object decorated by the WorkgroupSize decoration. An invocation within a local workgroup can share data with other members of the local workgroup through shared variables and issue memory and control flow barriers to synchronize with other members of the local workgroup.
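To make the relationship between global and local workgroups concrete, the following hedged C sketch binds a compute pipeline and dispatches a 16×16×1 grid of local workgroups; the size of each local workgroup comes from the LocalSize execution mode or WorkgroupSize decoration in the shader, not from the dispatch call. The command buffer and pipeline handles are hypothetical placeholders.

    #include <vulkan/vulkan.h>

    /* A minimal sketch: the global workgroup is divided into 16 x 16 x 1 local
     * workgroups. Each local workgroup's dimensions are fixed by the compute
     * shader, so the total invocation count is
     * 16 * 16 * localSizeX * localSizeY * localSizeZ. */
    void recordDispatch(VkCommandBuffer cmd, VkPipeline computePipeline)
    {
        vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, computePipeline);
        vkCmdDispatch(cmd, 16, 16, 1);
    }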

8.11. Interpolation Decorations

Interpolation decorations control the behavior of attribute interpolation in the fragment shader stage. Interpolation decorations can be applied to Input storage class variables in the fragment shader stage’s interface, and control the interpolation behavior of those variables.

Inputs that could be interpolated can be decorated by at most one of the following decorations:

  • Flat: no interpolation

  • NoPerspective: linear interpolation (for lines and polygons).

Fragment input variables decorated with neither Flat nor NoPerspective use perspective-correct interpolation (for lines and polygons).

The presence and type of interpolation are controlled by the interpolation decorations listed above, as well as by the auxiliary decorations Centroid and Sample.

A variable decorated with Flat will not be interpolated. Instead, it will have the same value for every fragment within a triangle. This value will come from a single provoking vertex. A variable decorated with Flat can also be decorated with Centroid or Sample, which will mean the same thing as decorating it only as Flat.

For fragment shader input variables decorated with neither Centroid nor Sample, the assigned variable may be interpolated anywhere within the pixel and a single value may be assigned to each sample within the pixel.

Centroid and Sample can be used to control the location and frequency of the sampling of the decorated fragment shader input. If a fragment shader input is decorated with Centroid, a single value may be assigned to that variable for all samples in the pixel, but that value must be interpolated to a location that lies in both the pixel and in the primitive being rendered, including any of the pixel’s samples covered by the primitive. Because the location at which the variable is interpolated may be different in neighboring pixels, and derivatives may be computed by computing differences between neighboring pixels, derivatives of centroid-sampled inputs may be less accurate than those for non-centroid interpolated variables. If a fragment shader input is decorated with Sample, a separate value must be assigned to that variable for each covered sample in the pixel, and that value must be sampled at the location of the individual sample. When rasterizationSamples is VK_SAMPLE_COUNT_1_BIT, the pixel center must be used for Centroid, Sample, and undecorated attribute interpolation.

Fragment shader inputs that are signed or unsigned integers, integer vectors, or any double-precision floating-point type must be decorated with Flat.

When the VK_AMD_shader_explicit_vertex_parameter device extension is enabled, inputs can also be decorated with the CustomInterpAMD interpolation decoration, including fragment shader inputs that are signed or unsigned integers, integer vectors, or any double-precision floating-point type. Inputs decorated with CustomInterpAMD can only be accessed by the extended instruction InterpolateAtVertexAMD, which allows accessing the value of the input for individual vertices of the primitive.
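As with any device extension, VK_AMD_shader_explicit_vertex_parameter only takes effect when it is requested at device creation. The following hedged C sketch shows the relevant portion of VkDeviceCreateInfo; queue setup and feature selection are omitted, and the extension is assumed to have been reported as supported.

    #include <vulkan/vulkan.h>

    /* A minimal sketch: requesting the extension by name when creating the
     * logical device. Availability should be checked beforehand with
     * vkEnumerateDeviceExtensionProperties. */
    static const char* const enabledExtensions[] = {
        "VK_AMD_shader_explicit_vertex_parameter",
    };

    VkDeviceCreateInfo deviceCreateInfo = {
        .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
        .enabledExtensionCount = 1,
        .ppEnabledExtensionNames = enabledExtensions,
        /* Queue create info and enabled features omitted for brevity. */
    };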

8.12. Static Use

A SPIR-V module declares a global object in memory using the OpVariable instruction, which results in a pointer x to that object. A specific entry point in a SPIR-V module is said to statically use that object if that entry point’s call tree contains a function that contains a memory instruction or image instruction with x as an id operand. See the “Memory Instructions” and “Image Instructions” subsections of section 3 “Binary Form” of the SPIR-V specification for the complete list of SPIR-V memory instructions.

Static use is not used to control the behavior of variables with Input and Output storage. The effects of those variables are applied based only on whether they are present in a shader entry point’s interface.

8.13. Invocation and Derivative Groups

An invocation group (see the subsection “Control Flow” of section 2 of the SPIR-V specification) for a compute shader is the set of invocations in a single local workgroup. For graphics shaders, an invocation group is an implementation-dependent subset of the set of shader invocations of a given shader stage which are produced by a single drawing command. For indirect drawing commands with drawCount greater than one, invocations from separate draws are in distinct invocation groups.
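For example, in the hedged C sketch below a single vkCmdDrawIndirect call with drawCount equal to four issues four draws from one buffer of VkDrawIndirectCommand structures; under the rule above, invocations produced by different draws within that call belong to distinct invocation groups. The command buffer and parameter buffer are hypothetical placeholders, and drawing with drawCount greater than one requires the multiDrawIndirect feature.

    #include <vulkan/vulkan.h>

    /* A minimal sketch: the buffer is assumed to hold four tightly packed
     * VkDrawIndirectCommand structures starting at offset 0. Invocations from
     * each of the four draws are in distinct invocation groups. */
    void recordIndirectDraws(VkCommandBuffer cmd, VkBuffer parameterBuffer)
    {
        vkCmdDrawIndirect(
            cmd,
            parameterBuffer,
            0,                               /* offset */
            4,                               /* drawCount */
            sizeof(VkDrawIndirectCommand));  /* stride */
    }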

Note

Because the partitioning of invocations into invocation groups is implementation-dependent and not observable, applications generally need to assume the worst case of all invocations in a draw belonging to a single invocation group.

A derivative group (see the subsection “Control Flow” of section 2 of the SPIR-V 1.00 Revision 4 specification) for a fragment shader is the set of invocations generated by a single primitive (point, line, or triangle), including any helper invocations generated by that primitive. Derivatives are undefined for a sampled image instruction if the instruction is in flow control that is not uniform across the derivative group.

9. Pipelines

The following figure shows a block diagram of the Vulkan pipelines. Some Vulkan commands specify geometric objects to be drawn or computational work to be performed, while others specify state controlling how objects are handled by the various pipeline stages, or control data transfer between memory organized as images and buffers. Commands are effectively sent through a processing pipeline, either a graphics pipeline or a compute pipeline.

The first stage of the graphics pipeline (Input Assembler) assembles vertices to form geometric primitives such as points, lines, and triangles, based on a requested primitive topology. In the next stage (Vertex Shader) vertices can be transformed, computing positions and attributes for each vertex. If tessellation and/or geometry shaders are supported, they can then generate multiple primitives from a single input primitive, possibly changing the primitive topology or generating additional attribute data in the process.

The final resulting primitives are clipped to a clip volume in preparation for the next stage, Rasterization. The rasterizer produces a series of framebuffer addresses and values using a two-dimensional description of a point, line segment, or triangle. Each fragment so produced is fed to the next stage (Fragment Shader) that performs operations on individual fragments before they finally alter the framebuffer. These operations include conditional updates into the framebuffer based on incoming and previously stored depth values (to effect depth buffering), blending of incoming fragment colors with stored colors, as well as masking, stenciling, and other logical operations on fragment values.

Framebuffer operations read and write the color and depth/stencil attachments of the framebuffer for a given subpass of a render pass instance. The attachments can be used as input attachments in the fragment shader in a later subpass of the same render pass.

The compute pipeline is separate from the graphics pipeline; it operates on one-, two-, or three-dimensional workgroups which can read from and write to buffer and image memory.

This ordering is meant only as a tool for describing Vulkan, not as a strict rule of how Vulkan is implemented, and we present it only as a means to organize the various operations of the pipelines. Actual ordering guarantees between pipeline stages are explained in detail in the synchronization chapter.