1. Preamble
Copyright 2014-2022 The Khronos Group Inc.
This Specification is protected by copyright laws and contains material proprietary to Khronos. Except as described by these terms, it or any components may not be reproduced, republished, distributed, transmitted, displayed, broadcast or otherwise exploited in any manner without the express prior written permission of Khronos. Khronos grants a conditional copyright license to use and reproduce the unmodified Specification for any purpose, without fee or royalty, EXCEPT no licenses to any patent, trademark or other intellectual property rights are granted under these terms.
Khronos makes no, and expressly disclaims any, representations or warranties, express or implied, regarding this Specification, including, without limitation: merchantability, fitness for a particular purpose, non-infringement of any intellectual property, correctness, accuracy, completeness, timeliness, and reliability. Under no circumstances will Khronos, or any of its Promoters, Contributors or Members, or their respective partners, officers, directors, employees, agents or representatives be liable for any damages, whether direct, indirect, special or consequential damages for lost revenues, lost profits, or otherwise, arising from or in connection with these materials.
This Specification has been created under the Khronos Intellectual Property Rights Policy, which is Attachment A of the Khronos Group Membership Agreement available at https://www.khronos.org/files/member_agreement.pdf, and which defines the terms 'Scope', 'Compliant Portion', and 'Necessary Patent Claims'. Parties desiring to implement the Specification and make use of Khronos trademarks in relation to that implementation, and receive reciprocal patent license protection under the Khronos Intellectual Property Rights Policy must become Adopters and confirm the implementation as conformant under the process defined by Khronos for this Specification; see https://www.khronos.org/adopters.
This Specification contains substantially unmodified functionality from, and is a successor to, Khronos specifications including OpenGL, OpenGL ES and OpenCL.
Some parts of this Specification are purely informative and so are EXCLUDED from the Scope of this Specification. The Document Conventions section of the Introduction defines how these parts of the Specification are identified.
Where this Specification uses technical terminology, defined in the Glossary or otherwise, that refer to enabling technologies that are not expressly set forth in this Specification, those enabling technologies are EXCLUDED from the Scope of this Specification. For clarity, enabling technologies not disclosed with particularity in this Specification (e.g. semiconductor manufacturing technology, hardware architecture, processor architecture or microarchitecture, memory architecture, compiler technology, object oriented technology, basic operating system technology, compression technology, algorithms, and so on) are NOT to be considered expressly set forth; only those application program interfaces and data structures disclosed with particularity are included in the Scope of this Specification.
For purposes of the Khronos Intellectual Property Rights Policy as it relates to the definition of Necessary Patent Claims, all recommended or optional features, behaviors and functionality set forth in this Specification, if implemented, are considered to be included as Compliant Portions.
Where this Specification includes normative references to external documents, only the specifically identified sections of those external documents are INCLUDED in the Scope of this Specification. If not created by Khronos, those external documents may contain contributions from non-members of Khronos not covered by the Khronos Intellectual Property Rights Policy.
Vulkan and Khronos are registered trademarks of The Khronos Group Inc. ASTC is a trademark of ARM Holdings PLC; OpenCL is a trademark of Apple Inc.; and OpenGL and OpenGL ES are registered trademarks of Hewlett Packard Enterprise, all used under license by Khronos. All other product names, trademarks, and/or company names are used solely for identification and belong to their respective owners.
2. Introduction
This document, referred to as the “Vulkan Specification” or just the “Specification” hereafter, describes the Vulkan Application Programming Interface (API). Vulkan is a C99 API designed for explicit control of low-level graphics and compute functionality.
The canonical version of the Specification is available in the official Vulkan Registry (https://www.khronos.org/registry/vulkan/). The source files used to generate the Vulkan specification are stored in the Vulkan Documentation Repository (https://github.com/KhronosGroup/Vulkan-Docs). The source repository additionally has a public issue tracker and allows the submission of pull requests that improve the specification.
2.1. Document Conventions
The Vulkan specification is intended for use by both implementors of the API and application developers seeking to make use of the API, forming a contract between these parties. Specification text may address either party; typically the intended audience can be inferred from context, though some sections are defined to address only one of these parties. (For example, Valid Usage sections only address application developers). Any requirements, prohibitions, recommendations or options defined by normative terminology are imposed only on the audience of that text.
Note
Structure and enumerated types defined in extensions that were promoted to core in a later version of Vulkan are now defined in terms of the equivalent Vulkan core interfaces. This affects the Vulkan Specification, the Vulkan header files, and the corresponding XML Registry.
2.1.1. Informative Language
Some language in the specification is purely informative, intended to give background or suggestions to implementors or developers.
If an entire chapter or section contains only informative language, its title will be suffixed with “(Informative)”.
All NOTEs are implicitly informative.
2.1.2. Normative Terminology
Within this specification, the key words must, required, should, recommended, may, and optional are to be interpreted as described in RFC 2119 - Key words for use in RFCs to Indicate Requirement Levels (https://www.ietf.org/rfc/rfc2119.txt). The additional key word optionally is an alternate form of optional, for use where grammatically appropriate.
These key words are highlighted in the specification for clarity. In text addressing application developers, their use expresses requirements that apply to application behavior. In text addressing implementors, their use expresses requirements that apply to implementations.
In text addressing application developers, the additional key words can and cannot are to be interpreted as describing the capabilities of an application, as follows:
- can: This word means that the application is able to perform the action described.
- cannot: This word means that the API and/or the execution environment provide no mechanism through which the application can express or accomplish the action described.
These key words are never used in text addressing implementors.
Note
There is an important distinction between cannot and must not, as used in this Specification. Cannot means something the application literally is unable to express or accomplish through the API, while must not means something that the application is capable of expressing through the API, but that the consequences of doing so are undefined and potentially unrecoverable for the implementation (see Valid Usage).
Unless otherwise noted in the section heading, all sections and appendices in this document are normative.
2.1.3. Technical Terminology
The Vulkan Specification makes use of common engineering and graphics terms such as Pipeline, Shader, and Host to identify and describe Vulkan API constructs and their attributes, states, and behaviors. The Glossary defines the basic meanings of these terms in the context of the Specification. The Specification text provides fuller definitions of the terms and may elaborate, extend, or clarify the Glossary definitions. When a term defined in the Glossary is used in normative language within the Specification, the definitions within the Specification govern and supersede any meanings the terms may have in other technical contexts (i.e. outside the Specification).
2.1.4. Normative References
References to external documents are considered normative references if the Specification uses any of the normative terms defined in Normative Terminology to refer to them or their requirements, either as a whole or in part.
The following documents are referenced by normative sections of the specification:
IEEE. August, 2008. IEEE Standard for Floating-Point Arithmetic. IEEE Std 754-2008. https://dx.doi.org/10.1109/IEEESTD.2008.4610935 .
Andrew Garrard. Khronos Data Format Specification, version 1.3. https://www.khronos.org/registry/DataFormat/specs/1.3/dataformat.1.3.html .
John Kessenich. SPIR-V Extended Instructions for GLSL, Version 1.00 (February 10, 2016). https://www.khronos.org/registry/spir-v/ .
John Kessenich, Boaz Ouriel, and Raun Krisch. SPIR-V Specification, Version 1.5, Revision 3, Unified (April 24, 2020). https://www.khronos.org/registry/spir-v/ .
Jon Leech. The Khronos Vulkan API Registry. https://www.khronos.org/registry/vulkan/specs/1.2/registry.html .
Jon Leech and Tobias Hector. Vulkan Documentation and Extensions: Procedures and Conventions. https://www.khronos.org/registry/vulkan/specs/1.2/styleguide.html .
Architecture of the Vulkan Loader Interfaces (October, 2021). https://github.com/KhronosGroup/Vulkan-Loader/blob/master/docs/LoaderInterfaceArchitecture.md .
3. Fundamentals
This chapter introduces fundamental concepts including the Vulkan architecture and execution model, API syntax, queues, pipeline configurations, numeric representation, state and state queries, and the different types of objects and shaders. It provides a framework for interpreting more specific descriptions of commands and behavior in the remainder of the Specification.
3.1. Host and Device Environment
The Vulkan Specification assumes and requires the following properties of the host environment with respect to Vulkan implementations:
- The host must have runtime support for 8, 16, 32 and 64-bit signed and unsigned two’s-complement integers, all addressable at the granularity of their size in bytes.
- The host must have runtime support for 32- and 64-bit floating-point types satisfying the range and precision constraints in the Floating Point Computation section.
- The representation and endianness of these types on the host must match the representation and endianness of the same types on every physical device supported.
Note
Since a variety of data types and structures in Vulkan may be accessible by both host and physical device operations, the implementation should be able to access such data efficiently in both paths in order to facilitate writing portable and performant applications.
3.2. Execution Model
This section outlines the execution model of a Vulkan system.
Vulkan exposes one or more devices, each of which exposes one or more queues which may process work asynchronously to one another. The set of queues supported by a device is partitioned into families. Each family supports one or more types of functionality and may contain multiple queues with similar characteristics. Queues within a single family are considered compatible with one another, and work produced for a family of queues can be executed on any queue within that family. This specification defines the following types of functionality that queues may support: graphics, compute, transfer and sparse memory management.
Note
A single device may report multiple similar queue families rather than, or as well as, reporting multiple members of one or more of those families. This indicates that while members of those families have similar capabilities, they are not directly compatible with one another.
Device memory is explicitly managed by the application. Each device may advertise one or more heaps, representing different areas of memory. Memory heaps are either device-local or host-local, but are always visible to the device. Further detail about memory heaps is exposed via memory types available on that heap. Examples of memory areas that may be available on an implementation include:
- device-local is memory that is physically connected to the device.
- device-local, host visible is device-local memory that is visible to the host.
- host-local, host visible is memory that is local to the host and visible to the device and host.
On other architectures, there may only be a single heap that can be used for any purpose.
3.2.1. Queue Operation
Vulkan queues provide an interface to the execution engines of a device. Commands for these execution engines are recorded into command buffers ahead of execution time, and then submitted to a queue for execution. Once submitted to a queue, command buffers will begin and complete execution without further application intervention, though the order of this execution is dependent on a number of implicit and explicit ordering constraints.
Work is submitted to queues using queue submission commands that typically take the form vkQueue* (e.g. vkQueueSubmit, vkQueueBindSparse), and can take a list of semaphores upon which to wait before work begins and a list of semaphores to signal once work has completed. The work itself, as well as signaling and waiting on the semaphores, are all queue operations. Queue submission commands return control to the application once queue operations have been submitted; they do not wait for completion.
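The following sketch is purely informative and not part of the normative text: a single batch submitted with vkQueueSubmit that waits on one semaphore before executing a command buffer and signals another semaphore on completion. The queue, semaphore, and command buffer handles are assumed to have been created elsewhere.
// Informative sketch: one batch waiting on waitSemaphore and signaling
// signalSemaphore around the execution of cmdBuffer.
VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
VkSubmitInfo submitInfo = {
    .sType                = VK_STRUCTURE_TYPE_SUBMIT_INFO,
    .pNext                = NULL,
    .waitSemaphoreCount   = 1,
    .pWaitSemaphores      = &waitSemaphore,
    .pWaitDstStageMask    = &waitStage,
    .commandBufferCount   = 1,
    .pCommandBuffers      = &cmdBuffer,
    .signalSemaphoreCount = 1,
    .pSignalSemaphores    = &signalSemaphore,
};
vkQueueSubmit(queue, 1, &submitInfo, VK_NULL_HANDLE);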
There are no implicit ordering constraints between queue operations on different queues, or between queues and the host, so these may operate in any order with respect to each other. Explicit ordering constraints between different queues or with the host can be expressed with semaphores and fences.
Command buffer submissions to a single queue respect submission order and other implicit ordering guarantees, but otherwise may overlap or execute out of order. Other types of batches and queue submissions against a single queue (e.g. sparse memory binding) have no implicit ordering constraints with any other queue submission or batch. Additional explicit ordering constraints between queue submissions and individual batches can be expressed with semaphores and fences.
Before a fence or semaphore is signaled, it is guaranteed that any previously submitted queue operations have completed execution, and that memory writes from those queue operations are available to future queue operations. Waiting on a signaled semaphore or fence guarantees that previous writes that are available are also visible to subsequent commands.
Command buffer boundaries, both between primary command buffers of the same or different batches or submissions as well as between primary and secondary command buffers, do not introduce any additional ordering constraints. In other words, submitting the set of command buffers (which can include executing secondary command buffers) between any semaphore or fence operations executes the recorded commands as if they had all been recorded into a single primary command buffer, except that the current state is reset on each boundary. Explicit ordering constraints can be expressed with explicit synchronization primitives.
There are a few implicit ordering guarantees between commands within a command buffer, but only covering a subset of execution. Additional explicit ordering constraints can be expressed with the various explicit synchronization primitives.
Note
Implementations have significant freedom to overlap execution of work submitted to a queue, and this is common due to deep pipelining and parallelism in Vulkan devices.
Commands recorded in command buffers either perform actions (draw, dispatch, clear, copy, query/timestamp operations, begin/end subpass operations), set state (bind pipelines, descriptor sets, and buffers, set dynamic state, push constants, set render pass/subpass state), or perform synchronization (set/wait events, pipeline barrier, render pass/subpass dependencies). Some commands perform more than one of these tasks. State setting commands update the current state of the command buffer. Some commands that perform actions (e.g. draw/dispatch) do so based on the current state set cumulatively since the start of the command buffer. The work involved in performing action commands is often allowed to overlap or to be reordered, but doing so must not alter the state to be used by each action command. In general, action commands are those commands that alter framebuffer attachments, read/write buffer or image memory, or write to query pools.
Synchronization commands introduce explicit execution and memory dependencies between two sets of action commands, where the second set of commands depends on the first set of commands. These dependencies enforce both that the execution of certain pipeline stages in the later set occurs after the execution of certain stages in the source set, and that the effects of memory accesses performed by certain pipeline stages occur in order and are visible to each other. When not enforced by an explicit dependency or implicit ordering guarantees, action commands may overlap execution or execute out of order, and may not see the side effects of each other’s memory accesses.
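As an informative illustration of a synchronization command, the following sketch records a global memory barrier between two sets of action commands, making transfer writes available and visible to subsequent fragment shader reads; the command buffer is assumed to be in the recording state.
// Informative sketch: an execution and memory dependency between transfer
// writes and later fragment shader reads.
VkMemoryBarrier barrier = {
    .sType         = VK_STRUCTURE_TYPE_MEMORY_BARRIER,
    .pNext         = NULL,
    .srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
    .dstAccessMask = VK_ACCESS_SHADER_READ_BIT,
};
vkCmdPipelineBarrier(commandBuffer,
    VK_PIPELINE_STAGE_TRANSFER_BIT,         // first set of commands
    VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,  // second set of commands
    0,                                      // no dependency flags
    1, &barrier,                            // one global memory barrier
    0, NULL,                                // no buffer memory barriers
    0, NULL);                               // no image memory barriers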
3.3. Object Model
The devices, queues, and other entities in Vulkan are represented by Vulkan objects. At the API level, all objects are referred to by handles. There are two classes of handles, dispatchable and non-dispatchable. Dispatchable handle types are a pointer to an opaque type. This pointer may be used by layers as part of intercepting API commands, and thus each API command takes a dispatchable type as its first parameter. Each object of a dispatchable type must have a unique handle value during its lifetime.
Non-dispatchable handle types are a 64-bit integer type whose meaning is implementation-dependent. Non-dispatchable handles may encode object information directly in the handle rather than acting as a reference to an underlying object, and thus may not have unique handle values. If handle values are not unique, then destroying one such handle must not cause identical handles of other types to become invalid, and must not cause identical handles of the same type to become invalid if that handle value has been created more times than it has been destroyed.
All objects created or allocated from a VkDevice (i.e. with a VkDevice as the first parameter) are private to that device, and must not be used on other devices.
3.3.1. Object Lifetime
Objects are created or allocated by vkCreate* and vkAllocate* commands, respectively. Once an object is created or allocated, its “structure” is considered to be immutable, though the contents of certain object types are still free to change. Objects are destroyed or freed by vkDestroy* and vkFree* commands, respectively.
Objects that are allocated (rather than created) take resources from an existing pool object or memory heap, and when freed return resources to that pool or heap. While object creation and destruction are generally expected to be low-frequency occurrences during runtime, allocating and freeing objects can occur at high frequency. Pool objects help accommodate improved performance of the allocations and frees.
It is an application’s responsibility to track the lifetime of Vulkan objects, and not to destroy them while they are still in use.
The ownership of application-owned memory is immediately acquired by any Vulkan command it is passed into. Ownership of such memory must be released back to the application at the end of the duration of the command, so that the application can alter or free this memory as soon as all the commands that acquired it have returned.
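For example (informative), a create info structure owned by the application only needs to remain valid for the duration of the command it is passed to; the VkDevice handle is assumed to have been created previously.
// Informative sketch: pCreateInfo is only read during the duration of
// vkCreateBuffer, so the local structure may be reused or discarded afterwards.
VkBufferCreateInfo bufferInfo = {
    .sType       = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO,
    .pNext       = NULL,
    .flags       = 0,
    .size        = 65536,
    .usage       = VK_BUFFER_USAGE_TRANSFER_DST_BIT,
    .sharingMode = VK_SHARING_MODE_EXCLUSIVE,
};
VkBuffer buffer = VK_NULL_HANDLE;
vkCreateBuffer(device, &bufferInfo, NULL, &buffer);
// bufferInfo can be modified or freed as soon as vkCreateBuffer returns.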
The following object types are consumed when they are passed into a Vulkan command and not further accessed by the objects they are used to create. They must not be destroyed in the duration of any API command they are passed into:
- VkShaderModule
- VkPipelineCache
A VkRenderPass object passed as a parameter to create another object is not further accessed by that object after the duration of the command it is passed into. A VkRenderPass used in a command buffer follows the rules described below.
A VkPipelineLayout object must not be destroyed while any command buffer that uses it is in the recording state.
VkDescriptorSetLayout objects may be accessed by commands that operate on descriptor sets allocated using that layout, and those descriptor sets must not be updated with vkUpdateDescriptorSets after the descriptor set layout has been destroyed. Otherwise, a VkDescriptorSetLayout object passed as a parameter to create another object is not further accessed by that object after the duration of the command it is passed into.
The application must not destroy any other type of Vulkan object until all uses of that object by the device (such as via command buffer execution) have completed.
The following Vulkan objects must not be destroyed while any command buffers using the object are in the pending state:
- VkEvent
- VkQueryPool
- VkBuffer
- VkBufferView
- VkImage
- VkImageView
- VkPipeline
- VkSampler
- VkDescriptorPool
- VkFramebuffer
- VkRenderPass
- VkCommandBuffer
- VkCommandPool
- VkDeviceMemory
- VkDescriptorSet
Destroying these objects will move any command buffers that are in the recording or executable state, and are using those objects, to the invalid state.
The following Vulkan objects must not be destroyed while any queue is executing commands that use the object:
- VkFence
- VkSemaphore
- VkCommandBuffer
- VkCommandPool
In general, objects can be destroyed or freed in any order, even if the object being freed is involved in the use of another object (e.g. use of a resource in a view, use of a view in a descriptor set, use of an object in a command buffer, binding of a memory allocation to a resource), as long as any object that uses the freed object is not further used in any way except to be destroyed or to be reset in such a way that it no longer uses the other object (such as resetting a command buffer). If the object has been reset, then it can be used as if it never used the freed object. An exception to this is when there is a parent/child relationship between objects. In this case, the application must not destroy a parent object before its children, except when the parent is explicitly defined to free its children when it is destroyed (e.g. for pool objects, as defined below).
VkCommandPool objects are parents of VkCommandBuffer objects. VkDescriptorPool objects are parents of VkDescriptorSet objects. VkDevice objects are parents of many object types (all that take a VkDevice as a parameter to their creation).
The following Vulkan objects have specific restrictions for when they can be destroyed:
- VkQueue objects cannot be explicitly destroyed. Instead, they are implicitly destroyed when the VkDevice object they are retrieved from is destroyed.
- Destroying a pool object implicitly frees all objects allocated from that pool. Specifically, destroying VkCommandPool frees all VkCommandBuffer objects that were allocated from it, and destroying VkDescriptorPool frees all VkDescriptorSet objects that were allocated from it.
- VkDevice objects can be destroyed when all VkQueue objects retrieved from them are idle, and all objects created from them have been destroyed. This includes the following objects:
  - VkFence
  - VkSemaphore
  - VkEvent
  - VkQueryPool
  - VkBuffer
  - VkBufferView
  - VkImage
  - VkImageView
  - VkShaderModule
  - VkPipelineCache
  - VkPipeline
  - VkPipelineLayout
  - VkSampler
  - VkDescriptorSetLayout
  - VkDescriptorPool
  - VkFramebuffer
  - VkRenderPass
  - VkCommandPool
  - VkCommandBuffer
  - VkDeviceMemory
- VkPhysicalDevice objects cannot be explicitly destroyed. Instead, they are implicitly destroyed when the VkInstance object they are retrieved from is destroyed.
- VkInstance objects can be destroyed once all VkDevice objects created from any of its VkPhysicalDevice objects have been destroyed.
3.4. Application Binary Interface
The mechanism by which Vulkan is made available to applications is platform- or implementation-defined. On many platforms the C interface described in this Specification is provided by a shared library. Since shared libraries can be changed independently of the applications that use them, they present particular compatibility challenges, and this Specification places some requirements on them.
Shared library implementations must use the default Application Binary Interface (ABI) of the standard C compiler for the platform, or provide customized API headers that cause application code to use the implementation’s non-default ABI. An ABI in this context means the size, alignment, and layout of C data types; the procedure calling convention; and the naming convention for shared library symbols corresponding to C functions. Customizing the calling convention for a platform is usually accomplished by defining calling convention macros appropriately in vk_platform.h.
On platforms where Vulkan is provided as a shared library, library symbols beginning with “vk” and followed by a digit or uppercase letter are reserved for use by the implementation. Applications which use Vulkan must not provide definitions of these symbols. This allows the Vulkan shared library to be updated with additional symbols for new API versions or extensions without causing symbol conflicts with existing applications.
Shared library implementations should provide library symbols for commands in the highest version of this Specification they support, and for Window System Integration extensions relevant to the platform. They may also provide library symbols for commands defined by additional extensions.
Note
These requirements and recommendations are intended to allow implementors to take advantage of platform-specific conventions for SDKs, ABIs, library versioning mechanisms, etc. while still minimizing the code changes necessary to port applications or libraries between platforms. Platform vendors, or providers of the de facto standard Vulkan shared library for a platform, are encouraged to document what symbols the shared library provides and how it will be versioned when new symbols are added. Applications should only rely on shared library symbols for commands in the minimum core version required by the application. vkGetInstanceProcAddr and vkGetDeviceProcAddr should be used to obtain function pointers for commands in core versions beyond the application’s minimum required version.
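For example (informative), an application whose minimum required version is Vulkan 1.0 might obtain a Vulkan 1.1 entry point at runtime rather than relying on a shared library symbol; the VkInstance handle is assumed to exist.
// Informative sketch: querying a newer core command through the loader
// instead of linking against its shared library symbol.
PFN_vkGetPhysicalDeviceFeatures2 pfnGetFeatures2 =
    (PFN_vkGetPhysicalDeviceFeatures2)vkGetInstanceProcAddr(
        instance, "vkGetPhysicalDeviceFeatures2");
if (pfnGetFeatures2 != NULL) {
    // the command is available and can be called through the pointer
}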
3.5. Command Syntax and Duration
The Specification describes Vulkan commands as functions or procedures using C99 syntax. Language bindings for other languages such as C++ and JavaScript may allow for stricter parameter passing, or object-oriented interfaces.
Vulkan uses the standard C types for the base type of scalar parameters (e.g. types from <stdint.h>), with exceptions described below, or elsewhere in the text when appropriate:
VkBool32 represents boolean True and False values, since C does not have a sufficiently portable built-in boolean type:
// Provided by VK_VERSION_1_0
typedef uint32_t VkBool32;
VK_TRUE represents a boolean True (unsigned integer 1) value, and VK_FALSE a boolean False (unsigned integer 0) value. All values returned from a Vulkan implementation in a VkBool32 will be either VK_TRUE or VK_FALSE. Applications must not pass any other values than VK_TRUE or VK_FALSE into a Vulkan implementation where a VkBool32 is expected.
VK_TRUE is a constant representing a VkBool32 True value.
#define VK_TRUE 1U
VK_FALSE is a constant representing a VkBool32 False value.
#define VK_FALSE 0U
VkDeviceSize represents device memory size and offset values:
// Provided by VK_VERSION_1_0
typedef uint64_t VkDeviceSize;
VkDeviceAddress represents device buffer address values:
// Provided by VK_VERSION_1_0
typedef uint64_t VkDeviceAddress;
Commands that create Vulkan objects are of the form vkCreate* and take Vk*CreateInfo structures with the parameters needed to create the object. These Vulkan objects are destroyed with commands of the form vkDestroy*.
The last in-parameter to each command that creates or destroys a Vulkan object is pAllocator. The pAllocator parameter can be set to a non-NULL value such that allocations for the given object are delegated to an application provided callback; refer to the Memory Allocation chapter for further details.
Commands that allocate Vulkan objects owned by pool objects are of the form vkAllocate*, and take Vk*AllocateInfo structures. These Vulkan objects are freed with commands of the form vkFree*. These objects do not take allocators; if host memory is needed, they will use the allocator that was specified when their parent pool was created.
Commands are recorded into a command buffer by calling API commands of the form vkCmd*. Each such command may have different restrictions on where it can be used: in a primary and/or secondary command buffer, inside and/or outside a render pass, and in one or more of the supported queue types. These restrictions are documented together with the definition of each such command.
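The following informative sketch ties these patterns together: a command buffer is allocated from a pool with a vkAllocate* command, a vkCmd* command is recorded into it, and it is returned to the pool with a vkFree* command; the device and command pool handles are assumed to exist.
// Informative sketch: pool allocation takes no pAllocator; host memory, if
// needed, comes from the allocator specified when the pool was created.
VkCommandBufferAllocateInfo allocInfo = {
    .sType              = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO,
    .pNext              = NULL,
    .commandPool        = commandPool,
    .level              = VK_COMMAND_BUFFER_LEVEL_PRIMARY,
    .commandBufferCount = 1,
};
VkCommandBuffer commandBuffer = VK_NULL_HANDLE;
vkAllocateCommandBuffers(device, &allocInfo, &commandBuffer);
VkCommandBufferBeginInfo beginInfo = {
    .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO,
};
vkBeginCommandBuffer(commandBuffer, &beginInfo);
vkCmdSetLineWidth(commandBuffer, 1.0f);   // a state-setting vkCmd* command
vkEndCommandBuffer(commandBuffer);
vkFreeCommandBuffers(device, commandPool, 1, &commandBuffer);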
The duration of a Vulkan command refers to the interval between calling the command and its return to the caller.
3.5.1. Lifetime of Retrieved Results
Information is retrieved from the implementation with commands of the form vkGet* and vkEnumerate*.
Unless otherwise specified for an individual command, the results are invariant; that is, they will remain unchanged when retrieved again by calling the same command with the same parameters, so long as those parameters themselves all remain valid.
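For example (informative), the common two-call enumeration pattern relies on this invariance: a count retrieved by one call remains usable for a second call with the same parameters, provided nothing those parameters depend on has changed in the meantime.
// Informative sketch: query the number of physical devices, then retrieve
// their handles into a bounded local array.
uint32_t count = 0;
vkEnumeratePhysicalDevices(instance, &count, NULL);
VkPhysicalDevice devices[16];
if (count > 16) count = 16;   // clamp to the local array size
vkEnumeratePhysicalDevices(instance, &count, devices);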
3.6. Threading Behavior
Vulkan is intended to provide scalable performance when used on multiple host threads. All commands support being called concurrently from multiple threads, but certain parameters, or components of parameters are defined to be externally synchronized. This means that the caller must guarantee that no more than one thread is using such a parameter at a given time.
More precisely, Vulkan commands use simple stores to update the state of Vulkan objects. A parameter declared as externally synchronized may have its contents updated at any time during the host execution of the command. If two commands operate on the same object and at least one of the commands declares the object to be externally synchronized, then the caller must guarantee not only that the commands do not execute simultaneously, but also that the two commands are separated by an appropriate memory barrier (if needed).
Note
Memory barriers are particularly relevant for hosts based on the ARM CPU architecture, which is more weakly ordered than many developers are accustomed to from x86/x64 programming. Fortunately, most higher-level synchronization primitives (like the pthread library) perform memory barriers as a part of mutual exclusion, so mutexing Vulkan objects via these primitives will have the desired effect.
Similarly the application must avoid any potential data hazard of application-owned memory that has its ownership temporarily acquired by a Vulkan command. While the ownership of application-owned memory remains acquired by a command the implementation may read the memory at any point, and it may write non-const qualified memory at any point. Parameters referring to non-const qualified application-owned memory are not marked explicitly as externally synchronized in the Specification.
Many object types are immutable, meaning the objects cannot change once they have been created. These types of objects never need external synchronization, except that they must not be destroyed while they are in use on another thread. In certain special cases mutable object parameters are internally synchronized, making external synchronization unnecessary. Any command parameters that are not labeled as externally synchronized are either not mutated by the command or are internally synchronized. Additionally, certain objects related to a command’s parameters (e.g. command pools and descriptor pools) may be affected by a command, and must also be externally synchronized. These implicit parameters are documented as described below.
Parameters of commands that are externally synchronized are listed below.
There are also a few instances where a command can take in a user allocated list whose contents are externally synchronized parameters. In these cases, the caller must guarantee that at most one thread is using a given element within the list at a given time. These parameters are listed below.
In addition, there are some implicit parameters that need to be externally synchronized. For example, all commandBuffer parameters that need to be externally synchronized imply that the commandPool that was passed in when creating that command buffer also needs to be externally synchronized. The implicit parameters and their associated object are listed below.
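As an informative illustration, an application recording commands from several threads could guard each command pool, and the command buffers allocated from it, with one host mutex; the use of pthread and the beginInfo structure here are assumptions, not requirements of the Specification.
// Informative sketch: commandBuffer is externally synchronized, and so is the
// commandPool it was allocated from (an implicit parameter), so both are
// protected by the same application-owned mutex.
pthread_mutex_lock(&poolMutex);
vkBeginCommandBuffer(commandBuffer, &beginInfo);
/* ... record commands ... */
vkEndCommandBuffer(commandBuffer);
pthread_mutex_unlock(&poolMutex);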
3.7. Valid Usage
Valid usage defines a set of conditions which must be met in order to achieve well-defined runtime behavior in an application. These conditions depend only on Vulkan state, and the parameters or objects whose usage is constrained by the condition.
The core layer assumes applications are using the API correctly. Except as documented elsewhere in the Specification, the behavior of the core layer to an application using the API incorrectly is undefined, and may include program termination. However, implementations must ensure that incorrect usage by an application does not affect the integrity of the operating system, the Vulkan implementation, or other Vulkan client applications in the system. In particular, any guarantees made by an operating system about whether memory from one process can be visible to another process or not must not be violated by a Vulkan implementation for any memory allocation. Vulkan implementations are not required to make additional security or integrity guarantees beyond those provided by the OS unless explicitly directed by the application’s use of a particular feature or extension.
Note
For instance, if an operating system guarantees that data in all its memory allocations are set to zero when newly allocated, the Vulkan implementation must make the same guarantees for any allocations it controls (e.g. VkDeviceMemory). Similarly, if an operating system guarantees that use-after-free of host allocations will not result in values written by another process becoming visible, the same guarantees must be made by the Vulkan implementation for device memory.
Some valid usage conditions have dependencies on runtime limits or feature availability. It is possible to validate these conditions against Vulkan’s minimum supported values for these limits and features, or some subset of other known values.
Valid usage conditions do not cover conditions where well-defined behavior (including returning an error code) exists.
Valid usage conditions should apply to the command or structure where complete information about the condition would be known during execution of an application. This is such that a validation layer or linter can be written directly against these statements at the point they are specified.
Note
This does lead to some non-obvious places for valid usage statements. For instance, the valid values for a structure might depend on a separate value in the calling command. In this case, the structure itself will not reference this valid usage, as it is impossible to determine from the structure alone that it is invalid; instead, this valid usage is attached to the calling command. Another example is draw state: the state setters are independent, and can cause a legitimately invalid state configuration between draw calls; so the valid usage statements are attached to the place where all state needs to be valid, at the drawing command.
Valid usage conditions are described in a block labelled “Valid Usage” following each command or structure they apply to.
3.7.1. Usage Validation
Vulkan is a layered API. The lowest layer is the core Vulkan layer, as defined by this Specification. The application can use additional layers above the core for debugging, validation, and other purposes.
One of the core principles of Vulkan is that building and submitting command buffers should be highly efficient. Thus error checking and validation of state in the core layer is minimal, although more rigorous validation can be enabled through the use of layers.
Validation of correct API usage is left to validation layers. Applications should be developed with validation layers enabled, to help catch and eliminate errors. Once validated, released applications should not enable validation layers by default.
3.7.2. Implicit Valid Usage
Some valid usage conditions apply to all commands and structures in the API, unless explicitly denoted otherwise for a specific command or structure. These conditions are considered implicit, and are described in a block labelled “Valid Usage (Implicit)” following each command or structure they apply to. Implicit valid usage conditions are described in detail below.
Valid Usage for Object Handles
Any input parameter to a command that is an object handle must be a valid object handle, unless otherwise specified. An object handle is valid if:
- It has been created or allocated by a previous, successful call to the API. Such calls are noted in the Specification.
- It has not been deleted or freed by a previous call to the API. Such calls are noted in the Specification.
- Any objects used by that object, either as part of creation or execution, must also be valid.
The reserved values VK_NULL_HANDLE and NULL can be used in place of valid non-dispatchable handles and dispatchable handles, respectively, when explicitly called out in the Specification. Any command that creates an object successfully must not return these values. It is valid to pass these values to vkDestroy* or vkFree* commands, which will silently ignore these values.
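For example (informative), where the Specification allows VK_NULL_HANDLE for a destroy command, no separate check is needed before cleanup:
// Informative sketch: destroying VK_NULL_HANDLE is silently ignored.
VkBuffer buffer = VK_NULL_HANDLE;
/* ... buffer may or may not have been created ... */
vkDestroyBuffer(device, buffer, NULL);   // ignored when buffer is VK_NULL_HANDLE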
Valid Usage for Pointers
Any parameter that is a pointer must be a valid pointer only if it is explicitly called out by a Valid Usage statement.
A pointer is “valid” if it points at memory containing values of the number and type(s) expected by the command, and all fundamental types accessed through the pointer (e.g. as elements of an array or as members of a structure) satisfy the alignment requirements of the host processor.
Valid Usage for Strings
Any parameter that is a pointer to char must be a finite sequence of values terminated by a null character, or if explicitly called out in the Specification, can be NULL.
Valid Usage for Enumerated Types
Any parameter of an enumerated type must be a valid enumerant for that type. An enumerant is valid if:
- The enumerant is defined as part of the enumerated type.
- The enumerant is not the special value (suffixed with _MAX_ENUM) defined for the enumerated type. This special value exists only to ensure that C enum types are 32 bits in size; it is not part of the API, and should not be used by applications.
Any enumerated type returned from a query command or otherwise output from Vulkan to the application must not have a reserved value. Reserved values are values not defined by any extension for that enumerated type.
Note
This language is intended to accommodate cases such as “hidden” extensions known only to driver internals, or layers enabling extensions without knowledge of the application, without allowing return of values not defined by any extension.
Note
Application developers are encouraged to be careful when using
Valid Usage for Flags
A collection of flags is represented by a bitmask using the type VkFlags:
// Provided by VK_VERSION_1_0
typedef uint32_t VkFlags;
Bitmasks are passed to many commands and structures to compactly represent options, but VkFlags is not used directly in the API. Instead, a Vk*Flags type which is an alias of VkFlags, and whose name matches the corresponding Vk*FlagBits that are valid for that type, is used.
Any Vk*Flags member or parameter used in the API as an input must be a valid combination of bit flags. A valid combination is either zero or the bitwise OR of valid bit flags. A bit flag is valid if:
- The bit flag is defined as part of the Vk*FlagBits type, where the bits type is obtained by taking the flag type and replacing the trailing Flags with FlagBits. For example, a flag value of type VkColorComponentFlags must contain only bit flags defined by VkColorComponentFlagBits (see the sketch after this list).
- The flag is allowed in the context in which it is being used. For example, in some cases, certain bit flags or combinations of bit flags are mutually exclusive.
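For instance (informative), the following is a valid combination of VkColorComponentFlagBits values for a parameter of type VkColorComponentFlags:
// Informative sketch: zero or any bitwise OR of defined bit flags is a valid
// VkColorComponentFlags value.
VkColorComponentFlags writeMask =
    VK_COLOR_COMPONENT_R_BIT | VK_COLOR_COMPONENT_G_BIT |
    VK_COLOR_COMPONENT_B_BIT | VK_COLOR_COMPONENT_A_BIT;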
Any Vk*Flags member or parameter returned from a query command or otherwise output from Vulkan to the application may contain bit flags undefined in its corresponding Vk*FlagBits type. An application cannot rely on the state of these unspecified bits.
Only the low-order 31 bits (bit positions zero through 30) are available for use as flag bits.
Note
This restriction is due to poorly defined behavior by C compilers given a C
enumerant value of
Valid Usage for Structure Types
Any parameter that is a structure containing a sType member must have a value of sType which is a valid VkStructureType value matching the type of the structure.
Valid Usage for Structure Pointer Chains
Any parameter that is a structure containing a void* pNext member must have a value of pNext that is either NULL, or is a pointer to a valid extending structure, containing sType and pNext members as described in the Vulkan Documentation and Extensions document in the section “Extension Interactions”. The set of structures connected by pNext pointers is referred to as a pNext chain.
Each structure included in the pNext chain must be defined at runtime by either:
- a core version which is supported
- an extension which is enabled
Each type of extending structure must not appear more than once in a pNext chain, including any aliases. This general rule may be explicitly overridden for specific structures.
Any component of the implementation (the loader, any enabled layers, and drivers) must skip over, without processing (other than reading the sType and pNext members) any extending structures in the chain not defined by core versions or extensions supported by that component.
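For example (informative), assuming the physical device supports Vulkan 1.2, an application can chain a VkPhysicalDeviceVulkan11Features structure from a VkPhysicalDeviceFeatures2 structure when querying features:
// Informative sketch: a two-element pNext chain; each extending structure
// appears at most once in the chain.
VkPhysicalDeviceVulkan11Features features11 = {
    .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VULKAN_1_1_FEATURES,
    .pNext = NULL,
};
VkPhysicalDeviceFeatures2 features2 = {
    .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2,
    .pNext = &features11,
};
vkGetPhysicalDeviceFeatures2(physicalDevice, &features2);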
As a convenience to implementations and layers needing to iterate through a structure pointer chain, the Vulkan API provides two base structures. These structures allow for some type safety, and can be used by Vulkan API functions that operate on generic inputs and outputs.
The VkBaseInStructure structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkBaseInStructure {
VkStructureType sType;
const struct VkBaseInStructure* pNext;
} VkBaseInStructure;
- sType is the structure type of the structure being iterated through.
- pNext is NULL or a pointer to the next structure in a structure chain.
VkBaseInStructure can be used to facilitate iterating through a read-only structure pointer chain.
The VkBaseOutStructure structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkBaseOutStructure {
VkStructureType sType;
struct VkBaseOutStructure* pNext;
} VkBaseOutStructure;
- sType is the structure type of the structure being iterated through.
- pNext is NULL or a pointer to the next structure in a structure chain.
VkBaseOutStructure can be used to facilitate iterating through a structure pointer chain that returns data back to the application.
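An informative sketch of how a layer or implementation might walk an output chain, assuming a pointer pFirst to the first structure in the chain:
// Informative sketch: only sType and pNext are read for structures the
// component does not recognize, which are skipped over.
for (VkBaseOutStructure* p = (VkBaseOutStructure*)pFirst; p != NULL; p = p->pNext) {
    switch (p->sType) {
    /* cases for structure types this component processes */
    default:
        break;   // skip unrecognized extending structures
    }
}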
Valid Usage for Nested Structures
The above conditions also apply recursively to members of structures provided as input to a command, either as a direct argument to the command, or themselves a member of another structure.
Specifics on valid usage of each command are covered in their individual sections.
Valid Usage for Extensions
Instance-level functionality or behavior added by an instance extension to the API must not be used unless that extension is supported by the instance as determined by vkEnumerateInstanceExtensionProperties, and that extension is enabled in VkInstanceCreateInfo.
Physical-device-level functionality or behavior added by an instance extension to the API must not be used unless that extension is supported by the instance as determined by vkEnumerateInstanceExtensionProperties, and that extension is enabled in VkInstanceCreateInfo.
Device functionality or behavior added by a device extension to the API must not be used unless that extension is supported by the device as determined by vkEnumerateDeviceExtensionProperties, and that extension is enabled in VkDeviceCreateInfo.
Valid Usage for Newer Core Versions
Physical-device-level functionality or behavior added by a new core version of the API must not be used unless it is supported by the physical device as determined by VkPhysicalDeviceProperties::apiVersion and the specified version of VkApplicationInfo::apiVersion.
Device-level functionality or behavior added by a new core version of the API must not be used unless it is supported by the device as determined by VkPhysicalDeviceProperties::apiVersion and the specified version of VkApplicationInfo::apiVersion.
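For example (informative), before using physical-device-level functionality added in Vulkan 1.1, an application that specified a VkApplicationInfo::apiVersion of at least VK_API_VERSION_1_1 might additionally check the physical device:
// Informative sketch: both the version requested at instance creation and the
// version reported by the physical device must cover the functionality used.
VkPhysicalDeviceProperties properties;
vkGetPhysicalDeviceProperties(physicalDevice, &properties);
if (properties.apiVersion >= VK_API_VERSION_1_1) {
    // Vulkan 1.1 physical-device-level functionality may be used
}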
3.8. VkResult Return Codes
While the core Vulkan API is not designed to capture incorrect usage, some circumstances still require return codes. Commands in Vulkan return their status via return codes that are in one of two categories:
- Successful completion codes are returned when a command needs to communicate success or status information. All successful completion codes are non-negative values.
- Run time error codes are returned when a command needs to communicate a failure that could only be detected at runtime. All runtime error codes are negative values.
All return codes in Vulkan are reported via VkResult return values. The possible codes are:
// Provided by VK_VERSION_1_0
typedef enum VkResult {
VK_SUCCESS = 0,
VK_NOT_READY = 1,
VK_TIMEOUT = 2,
VK_EVENT_SET = 3,
VK_EVENT_RESET = 4,
VK_INCOMPLETE = 5,
VK_ERROR_OUT_OF_HOST_MEMORY = -1,
VK_ERROR_OUT_OF_DEVICE_MEMORY = -2,
VK_ERROR_INITIALIZATION_FAILED = -3,
VK_ERROR_DEVICE_LOST = -4,
VK_ERROR_MEMORY_MAP_FAILED = -5,
VK_ERROR_LAYER_NOT_PRESENT = -6,
VK_ERROR_EXTENSION_NOT_PRESENT = -7,
VK_ERROR_FEATURE_NOT_PRESENT = -8,
VK_ERROR_INCOMPATIBLE_DRIVER = -9,
VK_ERROR_TOO_MANY_OBJECTS = -10,
VK_ERROR_FORMAT_NOT_SUPPORTED = -11,
VK_ERROR_FRAGMENTED_POOL = -12,
VK_ERROR_UNKNOWN = -13,
} VkResult;
- VK_SUCCESS: Command successfully completed
- VK_NOT_READY: A fence or query has not yet completed
- VK_TIMEOUT: A wait operation has not completed in the specified time
- VK_EVENT_SET: An event is signaled
- VK_EVENT_RESET: An event is unsignaled
- VK_INCOMPLETE: A return array was too small for the result
- VK_ERROR_OUT_OF_HOST_MEMORY: A host memory allocation has failed.
- VK_ERROR_OUT_OF_DEVICE_MEMORY: A device memory allocation has failed.
- VK_ERROR_INITIALIZATION_FAILED: Initialization of an object could not be completed for implementation-specific reasons.
- VK_ERROR_DEVICE_LOST: The logical or physical device has been lost. See Lost Device.
- VK_ERROR_MEMORY_MAP_FAILED: Mapping of a memory object has failed.
- VK_ERROR_LAYER_NOT_PRESENT: A requested layer is not present or could not be loaded.
- VK_ERROR_EXTENSION_NOT_PRESENT: A requested extension is not supported.
- VK_ERROR_FEATURE_NOT_PRESENT: A requested feature is not supported.
- VK_ERROR_INCOMPATIBLE_DRIVER: The requested version of Vulkan is not supported by the driver or is otherwise incompatible for implementation-specific reasons.
- VK_ERROR_TOO_MANY_OBJECTS: Too many objects of the type have already been created.
- VK_ERROR_FORMAT_NOT_SUPPORTED: A requested format is not supported on this device.
- VK_ERROR_FRAGMENTED_POOL: A pool allocation has failed due to fragmentation of the pool’s memory. This must only be returned if no attempt to allocate host or device memory was made to accommodate the new allocation.
- VK_ERROR_UNKNOWN: An unknown error has occurred; either the application has provided invalid input, or an implementation failure has occurred.
If a command returns a runtime error, unless otherwise specified any output parameters will have undefined contents, except that if the output parameter is a structure with sType and pNext fields, those fields will be unmodified. Any structures chained from pNext will also have undefined contents, except that sType and pNext will be unmodified.
VK_ERROR_OUT_OF_*_MEMORY errors do not modify any currently existing Vulkan objects. Objects that have already been successfully created can still be used by the application.
Note
As a general rule,
VK_ERROR_UNKNOWN will be returned by an implementation when an unexpected error occurs that cannot be attributed to valid behavior of the application and implementation. Under these conditions, it may be returned from any command returning a VkResult.
Performance-critical commands generally do not have return codes. If a runtime error occurs in such commands, the implementation will defer reporting the error until a specified point. For commands that record into command buffers (vkCmd*) runtime errors are reported by vkEndCommandBuffer.
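For example (informative), any runtime error raised by recorded vkCmd* commands is observed when recording ends:
// Informative sketch: deferred runtime errors surface at vkEndCommandBuffer.
VkResult result = vkEndCommandBuffer(commandBuffer);
if (result == VK_ERROR_OUT_OF_HOST_MEMORY || result == VK_ERROR_OUT_OF_DEVICE_MEMORY) {
    // the command buffer cannot be submitted; reset or free it and retry
}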
3.9. Numeric Representation and Computation
Implementations normally perform computations in floating-point, and must meet the range and precision requirements defined under “Floating-Point Computation” below.
These requirements only apply to computations performed in Vulkan operations outside of shader execution, such as texture image specification and sampling, and per-fragment operations. Range and precision requirements during shader execution differ and are specified by the Precision and Operation of SPIR-V Instructions section.
In some cases, the representation and/or precision of operations is implicitly limited by the specified format of vertex or texel data consumed by Vulkan. Specific floating-point formats are described later in this section.
3.9.1. Floating-Point Computation
Most floating-point computation is performed in SPIR-V shader modules. The properties of computation within shaders are constrained as defined by the Precision and Operation of SPIR-V Instructions section.
Some floating-point computation is performed outside of shaders, such as viewport and depth range calculations. For these computations, we do not specify how floating-point numbers are to be represented, or the details of how operations on them are performed, but only place minimal requirements on representation and precision as described in the remainder of this section.
We require simply that numbers’ floating-point parts contain enough bits and that their exponent fields are large enough so that individual results of floating-point operations are accurate to about 1 part in 10^5. The maximum representable magnitude for all floating-point values must be at least 2^32.
- x × 0 = 0 × x = 0 for any non-infinite and non-NaN x.
- 1 × x = x × 1 = x.
- x + 0 = 0 + x = x.
- 0^0 = 1.
Occasionally, further requirements will be specified. Most single-precision floating-point formats meet these requirements.
The special values Inf and -Inf encode values with magnitudes too large to be represented; the special value NaN encodes “Not A Number” values resulting from undefined arithmetic operations such as 0 / 0. Implementations may support Inf and NaN in their floating-point computations.
3.9.2. Floating-Point Format Conversions
When a value is converted to a defined floating-point representation, finite values falling between two representable finite values are rounded to one or the other. The rounding mode is not defined. Finite values whose magnitude is larger than that of any representable finite value may be rounded either to the closest representable finite value or to the appropriately signed infinity. For unsigned destination formats any negative values are converted to zero. Positive infinity is converted to positive infinity; negative infinity is converted to negative infinity in signed formats and to zero in unsigned formats; and any NaN is converted to a NaN.
3.9.3. 16-Bit Floating-Point Numbers
16-bit floating point numbers are defined in the “16-bit floating point numbers” section of the Khronos Data Format Specification.
3.9.4. Unsigned 11-Bit Floating-Point Numbers
Unsigned 11-bit floating point numbers are defined in the “Unsigned 11-bit floating point numbers” section of the Khronos Data Format Specification.
3.9.5. Unsigned 10-Bit Floating-Point Numbers
Unsigned 10-bit floating point numbers are defined in the “Unsigned 10-bit floating point numbers” section of the Khronos Data Format Specification.
3.9.6. General Requirements
Any representable floating-point value in the appropriate format is legal as input to a Vulkan command that requires floating-point data. The result of providing a value that is not a floating-point number to such a command is unspecified, but must not lead to Vulkan interruption or termination. For example, providing a negative zero (where applicable) or a denormalized number to a Vulkan command must yield deterministic results, while providing a NaN or Inf yields unspecified results.
Some calculations require division. In such cases (including implied divisions performed by vector normalization), division by zero produces an unspecified result but must not lead to Vulkan interruption or termination.
3.10. Fixed-Point Data Conversions
When generic vertex attributes and pixel color or depth components are represented as integers, they are often (but not always) considered to be normalized. Normalized integer values are treated specially when being converted to and from floating-point values, and are usually referred to as normalized fixed-point.
In the remainder of this section, b denotes the bit width of the fixed-point integer representation. When the integer is one of the types defined by the API, b is the bit width of that type. When the integer comes from an image containing color or depth component texels, b is the number of bits allocated to that component in its specified image format.
The signed and unsigned fixed-point representations are assumed to be b-bit binary two’s-complement integers and binary unsigned integers, respectively.
3.10.1. Conversion from Normalized Fixed-Point to Floating-Point
Unsigned normalized fixed-point integers represent numbers in the range [0,1]. The conversion from an unsigned normalized fixed-point value c to the corresponding floating-point value f is defined as f = c / (2^b - 1).
Signed normalized fixed-point integers represent numbers in the range [-1,1]. The conversion from a signed normalized fixed-point value c to the corresponding floating-point value f is performed using f = max(c / (2^(b-1) - 1), -1.0).
Only the range [-2^(b-1) + 1, 2^(b-1) - 1] is used to represent signed fixed-point values in the range [-1,1]. For example, if b = 8, then the integer value -127 corresponds to -1.0 and the value 127 corresponds to 1.0. This equation is used everywhere that signed normalized fixed-point values are converted to floating-point.
Note that while zero is exactly expressible in this representation, one value (-128 in the example) is outside the representable range, and implementations must clamp it to -1.0. Where the value is subject to further processing by the implementation, e.g. during texture filtering, values less than -1.0 may be used but the result must be clamped before the value is returned to shaders.
3.10.2. Conversion from Floating-Point to Normalized Fixed-Point
The conversion from a floating-point value f to the corresponding unsigned normalized fixed-point value c is defined by first clamping f to the range [0,1], then computing
- c = convertFloatToUint(f × (2^b - 1), b)
where convertFloatToUint(r,b) returns one of the two unsigned binary integer values with exactly b bits which are closest to the floating-point value r. Implementations should round to nearest. If r is equal to an integer, then that integer value must be returned. In particular, if f is equal to 0.0 or 1.0, then c must be assigned 0 or 2^b - 1, respectively.
The conversion from a floating-point value f to the corresponding signed normalized fixed-point value c is performed by clamping f to the range [-1,1], then computing
- c = convertFloatToInt(f × (2^(b-1) - 1), b)
where convertFloatToInt(r,b) returns one of the two signed two’s-complement binary integer values with exactly b bits which are closest to the floating-point value r. Implementations should round to nearest. If r is equal to an integer, then that integer value must be returned. In particular, if f is equal to -1.0, 0.0, or 1.0, then c must be assigned -(2^(b-1) - 1), 0, or 2^(b-1) - 1, respectively.
This equation is used everywhere that floating-point values are converted to signed normalized fixed-point.
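A minimal C sketch of the float-to-fixed-point direction follows (informative, not part of the API); it assumes round-to-nearest, which is the recommended behavior, and uses the standard llround function.

#include <math.h>
#include <stdint.h>

// Informative example: convert a float to a b-bit normalized fixed-point
// value, clamping then rounding to nearest as described above.
uint32_t floatToUnorm(float f, uint32_t b)
{
    double max = (double)((1ull << b) - 1);              // 2^b - 1
    if (f < 0.0f) f = 0.0f;
    if (f > 1.0f) f = 1.0f;
    return (uint32_t)llround(f * max);
}

int32_t floatToSnorm(float f, uint32_t b)
{
    double max = (double)((1ull << (b - 1)) - 1);        // 2^(b-1) - 1
    if (f < -1.0f) f = -1.0f;
    if (f > 1.0f) f = 1.0f;
    return (int32_t)llround(f * max);
}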
3.11. Common Object Types
Some types of Vulkan objects are used in many different structures and command parameters, and are described here. These types include offsets, extents, and rectangles.
3.11.1. Offsets
Offsets are used to describe a pixel location within an image or framebuffer, as an (x,y) location for two-dimensional images, or an (x,y,z) location for three-dimensional images.
A two-dimensional offset is defined by the structure:
// Provided by VK_VERSION_1_0
typedef struct VkOffset2D {
int32_t x;
int32_t y;
} VkOffset2D;
- x is the x offset.
- y is the y offset.
A three-dimensional offset is defined by the structure:
// Provided by VK_VERSION_1_0
typedef struct VkOffset3D {
int32_t x;
int32_t y;
int32_t z;
} VkOffset3D;
- x is the x offset.
- y is the y offset.
- z is the z offset.
3.11.2. Extents
Extents are used to describe the size of a rectangular region of pixels within an image or framebuffer, as (width,height) for two-dimensional images, or as (width,height,depth) for three-dimensional images.
A two-dimensional extent is defined by the structure:
// Provided by VK_VERSION_1_0
typedef struct VkExtent2D {
uint32_t width;
uint32_t height;
} VkExtent2D;
- width is the width of the extent.
- height is the height of the extent.
A three-dimensional extent is defined by the structure:
// Provided by VK_VERSION_1_0
typedef struct VkExtent3D {
uint32_t width;
uint32_t height;
uint32_t depth;
} VkExtent3D;
- width is the width of the extent.
- height is the height of the extent.
- depth is the depth of the extent.
3.11.3. Rectangles
Rectangles are used to describe a specified rectangular region of pixels within an image or framebuffer. Rectangles include both an offset and an extent of the same dimensionality, as described above. Two-dimensional rectangles are defined by the structure
// Provided by VK_VERSION_1_0
typedef struct VkRect2D {
VkOffset2D offset;
VkExtent2D extent;
} VkRect2D;
- offset is a VkOffset2D specifying the rectangle offset.
- extent is a VkExtent2D specifying the rectangle extent.
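For example, a rectangle covering a 640×480 region starting at the origin could be written as follows (an informative sketch using C99 designated initializers):

VkRect2D renderArea = {
    .offset = { .x = 0, .y = 0 },               // VkOffset2D
    .extent = { .width = 640, .height = 480 }   // VkExtent2D
};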
3.11.4. Structure Types
Each value corresponds to a particular structure with a sType member with a matching name.
As a general rule, the name of each VkStructureType value is obtained by taking the name of the structure, stripping the leading Vk, prefixing each capital letter with _, converting the entire resulting string to upper case, and prefixing it with VK_STRUCTURE_TYPE_. For example, structures of type VkImageCreateInfo correspond to a VkStructureType of VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO, and thus the sType member of such a structure must equal that value when it is passed to the API.
The values VK_STRUCTURE_TYPE_LOADER_INSTANCE_CREATE_INFO
and
VK_STRUCTURE_TYPE_LOADER_DEVICE_CREATE_INFO
are reserved for internal
use by the loader, and do not have corresponding Vulkan structures in this
Specification.
Structure types supported by the Vulkan API include:
// Provided by VK_VERSION_1_0
typedef enum VkStructureType {
VK_STRUCTURE_TYPE_APPLICATION_INFO = 0,
VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO = 1,
VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO = 2,
VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO = 3,
VK_STRUCTURE_TYPE_SUBMIT_INFO = 4,
VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO = 5,
VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE = 6,
VK_STRUCTURE_TYPE_BIND_SPARSE_INFO = 7,
VK_STRUCTURE_TYPE_FENCE_CREATE_INFO = 8,
VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO = 9,
VK_STRUCTURE_TYPE_EVENT_CREATE_INFO = 10,
VK_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO = 11,
VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO = 12,
VK_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO = 13,
VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO = 14,
VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO = 15,
VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO = 16,
VK_STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO = 17,
VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO = 18,
VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO = 19,
VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO = 20,
VK_STRUCTURE_TYPE_PIPELINE_TESSELLATION_STATE_CREATE_INFO = 21,
VK_STRUCTURE_TYPE_PIPELINE_VIEWPORT_STATE_CREATE_INFO = 22,
VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO = 23,
VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO = 24,
VK_STRUCTURE_TYPE_PIPELINE_DEPTH_STENCIL_STATE_CREATE_INFO = 25,
VK_STRUCTURE_TYPE_PIPELINE_COLOR_BLEND_STATE_CREATE_INFO = 26,
VK_STRUCTURE_TYPE_PIPELINE_DYNAMIC_STATE_CREATE_INFO = 27,
VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO = 28,
VK_STRUCTURE_TYPE_COMPUTE_PIPELINE_CREATE_INFO = 29,
VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO = 30,
VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO = 31,
VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO = 32,
VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO = 33,
VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO = 34,
VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET = 35,
VK_STRUCTURE_TYPE_COPY_DESCRIPTOR_SET = 36,
VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO = 37,
VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO = 38,
VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO = 39,
VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO = 40,
VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO = 41,
VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO = 42,
VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO = 43,
VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER = 44,
VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER = 45,
VK_STRUCTURE_TYPE_MEMORY_BARRIER = 46,
VK_STRUCTURE_TYPE_LOADER_INSTANCE_CREATE_INFO = 47,
VK_STRUCTURE_TYPE_LOADER_DEVICE_CREATE_INFO = 48,
} VkStructureType;
3.12. API Name Aliases
A small number of APIs did not follow the naming conventions when initially defined. For consistency, when we discover an API name that violates the naming conventions, we rename it in the Specification, XML, and header files. For backwards compatibility, the original (incorrect) name is retained as a “typo alias”. The alias is deprecated and should not be used, but will be retained indefinitely.
4. Initialization
Before using Vulkan, an application must initialize it by loading the
Vulkan commands, and creating a VkInstance
object.
4.1. Command Function Pointers
Vulkan commands are not necessarily exposed by static linking on a platform. Commands to query function pointers for Vulkan commands are described below.
Note
When extensions are promoted or otherwise incorporated into another extension or Vulkan core version, command aliases may be included. Whilst the behavior of each command alias is identical, the behavior of retrieving each alias’s function pointer is not. A function pointer for a given alias can only be retrieved if the extension or version that introduced that alias is supported and enabled, irrespective of whether any other alias is available. |
Function pointers for all Vulkan commands can be obtained with the command:
// Provided by VK_VERSION_1_0
PFN_vkVoidFunction vkGetInstanceProcAddr(
VkInstance instance,
const char* pName);
- instance is the instance that the function pointer will be compatible with, or NULL for commands not dependent on any instance.
- pName is the name of the command to obtain.
vkGetInstanceProcAddr
itself is obtained in a platform- and loader-
specific manner.
Typically, the loader library will export this command as a function symbol,
so applications can link against the loader library, or load it dynamically
and look up the symbol using platform-specific APIs.
The table below defines the various use cases for
vkGetInstanceProcAddr
and expected return value (“fp” is “function
pointer”) for each case.
A valid returned function pointer (“fp”) must not be NULL
.
The returned function pointer is of type PFN_vkVoidFunction, and must be cast to the type of the command being queried before use.
instance | pName | return value
---|---|---
*1 | NULL | undefined
invalid non-NULL instance | *1 | undefined
NULL | global command2 | fp
NULL | vkGetInstanceProcAddr | fp
instance | vkGetInstanceProcAddr | fp
instance | core dispatchable command | fp3
instance | enabled instance extension dispatchable command for instance | fp3
instance | available device extension4 dispatchable command for instance | fp3
any other case, not covered above | | NULL
1. "*" means any representable value for the parameter (including valid values, invalid values, and NULL).
2. The global commands are: vkEnumerateInstanceExtensionProperties, vkEnumerateInstanceLayerProperties, and vkCreateInstance. Dispatchable commands are all other commands which are not global.
3. The returned function pointer must only be called with a dispatchable object (the first parameter) that is instance or a child of instance, e.g. VkInstance, VkPhysicalDevice, VkDevice, VkQueue, or VkCommandBuffer.
4. An “available device extension” is a device extension supported by any physical device enumerated by instance.
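As an informative sketch, an application that has not yet created an instance can query one of the global commands listed in footnote 2 by passing NULL for instance; PFN_vkCreateInstance is assumed to be the corresponding function pointer typedef from the Vulkan headers.

// Query a global command with a NULL instance, then cast the result
// to the command's own function pointer type before calling it.
PFN_vkVoidFunction raw = vkGetInstanceProcAddr(NULL, "vkCreateInstance");
PFN_vkCreateInstance pfnCreateInstance = (PFN_vkCreateInstance)raw;
if (pfnCreateInstance == NULL) {
    /* the loader does not expose vkCreateInstance */
}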
In order to support systems with multiple Vulkan implementations, the
function pointers returned by vkGetInstanceProcAddr
may point to
dispatch code that calls a different real implementation for different
VkDevice objects or their child objects.
The overhead of the internal dispatch for VkDevice objects can be
avoided by obtaining device-specific function pointers for any commands that
use a device or device-child object as their dispatchable object.
Such function pointers can be obtained with the command:
// Provided by VK_VERSION_1_0
PFN_vkVoidFunction vkGetDeviceProcAddr(
VkDevice device,
const char* pName);
The table below defines the various use cases for vkGetDeviceProcAddr
and expected return value (“fp” is “function pointer”) for each case.
A valid returned function pointer (“fp”) must not be NULL
.
The returned function pointer is of type PFN_vkVoidFunction, and must
be cast to the type of the command being queried before use.
The function pointer must only be called with a dispatchable object (the
first parameter) that is device
or a child of device
.
device | pName | return value
---|---|---
NULL | *1 | undefined
invalid device | *1 | undefined
device | NULL | undefined
device | requested core version2 device-level dispatchable command3 | fp4
device | enabled extension device-level dispatchable command3 | fp4
any other case, not covered above | | NULL
1. "*" means any representable value for the parameter (including valid values, invalid values, and NULL).
2. Device-level commands which are part of the core version specified by VkApplicationInfo::apiVersion when creating the instance will always return a valid function pointer. Core commands beyond that version which are supported by the implementation may either return NULL or a function pointer, though the function pointer must not be called.
3. In this function, device-level excludes all physical-device-level commands.
4. The returned function pointer must only be called with a dispatchable object (the first parameter) that is device or a child of device, e.g. VkDevice, VkQueue, or VkCommandBuffer.
The definition of PFN_vkVoidFunction is:
// Provided by VK_VERSION_1_0
typedef void (VKAPI_PTR *PFN_vkVoidFunction)(void);
This type is returned from command function pointer queries, and must be cast to an actual command function pointer before use.
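For example, a device-level command such as vkQueueSubmit might be queried and cast as in the following sketch; device is assumed to have been created as described in the Devices and Queues chapter, and PFN_vkQueueSubmit is the corresponding pointer typedef from the Vulkan headers.

// Obtain a device-specific entry point to avoid the loader's dispatch overhead.
PFN_vkQueueSubmit pfnQueueSubmit =
    (PFN_vkQueueSubmit)vkGetDeviceProcAddr(device, "vkQueueSubmit");
// pfnQueueSubmit must only be called with queues (or other children) of device.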
4.2. Instances
There is no global state in Vulkan and all per-application state is stored
in a VkInstance
object.
Creating a VkInstance
object initializes the Vulkan library and allows
the application to pass information about itself to the implementation.
Instances are represented by VkInstance
handles:
// Provided by VK_VERSION_1_0
VK_DEFINE_HANDLE(VkInstance)
To create an instance object, call:
// Provided by VK_VERSION_1_0
VkResult vkCreateInstance(
const VkInstanceCreateInfo* pCreateInfo,
const VkAllocationCallbacks* pAllocator,
VkInstance* pInstance);
- pCreateInfo is a pointer to a VkInstanceCreateInfo structure controlling creation of the instance.
- pAllocator controls host memory allocation as described in the Memory Allocation chapter.
- pInstance points to a VkInstance handle in which the resulting instance is returned.
vkCreateInstance
verifies that the requested layers exist.
If not, vkCreateInstance
will return VK_ERROR_LAYER_NOT_PRESENT
.
Next vkCreateInstance
verifies that the requested extensions are
supported (e.g. in the implementation or in any enabled instance layer) and
if any requested extension is not supported, vkCreateInstance
must
return VK_ERROR_EXTENSION_NOT_PRESENT
.
After verifying and enabling the instance layers and extensions the
VkInstance
object is created and returned to the application.
If a requested extension is only supported by a layer, both the layer and
the extension need to be specified at vkCreateInstance
time for the
creation to succeed.
The VkInstanceCreateInfo
structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkInstanceCreateInfo {
VkStructureType sType;
const void* pNext;
VkInstanceCreateFlags flags;
const VkApplicationInfo* pApplicationInfo;
uint32_t enabledLayerCount;
const char* const* ppEnabledLayerNames;
uint32_t enabledExtensionCount;
const char* const* ppEnabledExtensionNames;
} VkInstanceCreateInfo;
- sType is the type of this structure.
- pNext is NULL or a pointer to a structure extending this structure.
- flags is a bitmask of VkInstanceCreateFlagBits indicating the behavior of the instance.
- pApplicationInfo is NULL or a pointer to a VkApplicationInfo structure. If not NULL, this information helps implementations recognize behavior inherent to classes of applications. VkApplicationInfo is defined in detail below.
- enabledLayerCount is the number of global layers to enable.
- ppEnabledLayerNames is a pointer to an array of enabledLayerCount null-terminated UTF-8 strings containing the names of layers to enable for the created instance. The layers are loaded in the order they are listed in this array, with the first array element being the closest to the application, and the last array element being the closest to the driver. See the Layers section for further details.
- enabledExtensionCount is the number of global extensions to enable.
- ppEnabledExtensionNames is a pointer to an array of enabledExtensionCount null-terminated UTF-8 strings containing the names of extensions to enable.
// Provided by VK_VERSION_1_0
typedef enum VkInstanceCreateFlagBits {
} VkInstanceCreateFlagBits;
All bits for this type are defined by extensions.
// Provided by VK_VERSION_1_0
typedef VkFlags VkInstanceCreateFlags;
VkInstanceCreateFlags
is a bitmask type for setting a mask, but is
currently reserved for future use.
The VkApplicationInfo
structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkApplicationInfo {
VkStructureType sType;
const void* pNext;
const char* pApplicationName;
uint32_t applicationVersion;
const char* pEngineName;
uint32_t engineVersion;
uint32_t apiVersion;
} VkApplicationInfo;
- sType is the type of this structure.
- pNext is NULL or a pointer to a structure extending this structure.
- pApplicationName is NULL or is a pointer to a null-terminated UTF-8 string containing the name of the application.
- applicationVersion is an unsigned integer variable containing the developer-supplied version number of the application.
- pEngineName is NULL or is a pointer to a null-terminated UTF-8 string containing the name of the engine (if any) used to create the application.
- engineVersion is an unsigned integer variable containing the developer-supplied version number of the engine used to create the application.
- apiVersion is the version of the Vulkan API against which the application expects to run, encoded as described in Version Numbers. If apiVersion is 0 the implementation must ignore it, otherwise if the implementation does not support the requested apiVersion, or an effective substitute for apiVersion, it must return VK_ERROR_INCOMPATIBLE_DRIVER. The patch version number specified in apiVersion is ignored when creating an instance object. Only the major and minor versions of the instance must match those requested in apiVersion.
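Putting the above together, a minimal instance creation call might look like the following sketch; no layers or extensions are enabled, the application name is arbitrary, and VK_MAKE_VERSION and VK_API_VERSION_1_0 are the version-encoding macros described in the Version Numbers appendix.

VkApplicationInfo appInfo = {0};
appInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
appInfo.pApplicationName = "Example";
appInfo.applicationVersion = VK_MAKE_VERSION(1, 0, 0);
appInfo.apiVersion = VK_API_VERSION_1_0;

VkInstanceCreateInfo createInfo = {0};
createInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
createInfo.pApplicationInfo = &appInfo;
// enabledLayerCount, ppEnabledLayerNames, enabledExtensionCount and
// ppEnabledExtensionNames remain zero/NULL: nothing extra is enabled.

VkInstance instance = VK_NULL_HANDLE;
VkResult result = vkCreateInstance(&createInfo, NULL, &instance);
if (result != VK_SUCCESS) {
    /* handle VK_ERROR_LAYER_NOT_PRESENT, VK_ERROR_EXTENSION_NOT_PRESENT, etc. */
}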
To destroy an instance, call:
// Provided by VK_VERSION_1_0
void vkDestroyInstance(
VkInstance instance,
const VkAllocationCallbacks* pAllocator);
- instance is the handle of the instance to destroy.
- pAllocator controls host memory allocation as described in the Memory Allocation chapter.
5. Devices and Queues
Once Vulkan is initialized, devices and queues are the primary objects used to interact with a Vulkan implementation.
Vulkan separates the concept of physical and logical devices. A physical device usually represents a single complete implementation of Vulkan (excluding instance-level functionality) available to the host, of which there are a finite number. A logical device represents an instance of that implementation with its own state and resources independent of other logical devices.
Physical devices are represented by VkPhysicalDevice
handles:
// Provided by VK_VERSION_1_0
VK_DEFINE_HANDLE(VkPhysicalDevice)
5.1. Physical Devices
To retrieve a list of physical device objects representing the physical devices installed in the system, call:
// Provided by VK_VERSION_1_0
VkResult vkEnumeratePhysicalDevices(
VkInstance instance,
uint32_t* pPhysicalDeviceCount,
VkPhysicalDevice* pPhysicalDevices);
- instance is a handle to a Vulkan instance previously created with vkCreateInstance.
- pPhysicalDeviceCount is a pointer to an integer related to the number of physical devices available or queried, as described below.
- pPhysicalDevices is either NULL or a pointer to an array of VkPhysicalDevice handles.
If pPhysicalDevices
is NULL
, then the number of physical devices
available is returned in pPhysicalDeviceCount
.
Otherwise, pPhysicalDeviceCount
must point to a variable set by the
user to the number of elements in the pPhysicalDevices
array, and on
return the variable is overwritten with the number of handles actually
written to pPhysicalDevices
.
If pPhysicalDeviceCount
is less than the number of physical devices
available, at most pPhysicalDeviceCount
structures will be written,
and VK_INCOMPLETE
will be returned instead of VK_SUCCESS
, to
indicate that not all the available physical devices were returned.
To query general properties of physical devices once enumerated, call:
// Provided by VK_VERSION_1_0
void vkGetPhysicalDeviceProperties(
VkPhysicalDevice physicalDevice,
VkPhysicalDeviceProperties* pProperties);
- physicalDevice is the handle to the physical device whose properties will be queried.
- pProperties is a pointer to a VkPhysicalDeviceProperties structure in which properties are returned.
The VkPhysicalDeviceProperties
structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkPhysicalDeviceProperties {
uint32_t apiVersion;
uint32_t driverVersion;
uint32_t vendorID;
uint32_t deviceID;
VkPhysicalDeviceType deviceType;
char deviceName[VK_MAX_PHYSICAL_DEVICE_NAME_SIZE];
uint8_t pipelineCacheUUID[VK_UUID_SIZE];
VkPhysicalDeviceLimits limits;
VkPhysicalDeviceSparseProperties sparseProperties;
} VkPhysicalDeviceProperties;
- apiVersion is the version of Vulkan supported by the device, encoded as described in Version Numbers.
- driverVersion is the vendor-specified version of the driver.
- vendorID is a unique identifier for the vendor (see below) of the physical device.
- deviceID is a unique identifier for the physical device among devices available from the vendor.
- deviceType is a VkPhysicalDeviceType specifying the type of device.
- deviceName is an array of VK_MAX_PHYSICAL_DEVICE_NAME_SIZE char containing a null-terminated UTF-8 string which is the name of the device.
- pipelineCacheUUID is an array of VK_UUID_SIZE uint8_t values representing a universally unique identifier for the device.
- limits is the VkPhysicalDeviceLimits structure specifying device-specific limits of the physical device. See Limits for details.
- sparseProperties is the VkPhysicalDeviceSparseProperties structure specifying various sparse related properties of the physical device. See Sparse Properties for details.
Note
The encoding of driverVersion is implementation-defined. It may not use the same encoding as apiVersion. Applications should follow information from the vendor on how to extract the version information from driverVersion. |
The vendorID
and deviceID
fields are provided to allow
applications to adapt to device characteristics that are not adequately
exposed by other Vulkan queries.
Note
These may include performance profiles, hardware errata, or other characteristics. |
The vendor identified by vendorID
is the entity responsible for the
most salient characteristics of the underlying implementation of the
VkPhysicalDevice being queried.
Note
For example, in the case of a discrete GPU implementation, this should be the GPU chipset vendor. In the case of a hardware accelerator integrated into a system-on-chip (SoC), this should be the supplier of the silicon IP used to create the accelerator. |
If the vendor has a PCI
vendor ID, the low 16 bits of vendorID
must contain that PCI vendor
ID, and the remaining bits must be set to zero.
Otherwise, the value returned must be a valid Khronos vendor ID, obtained
as described in the Vulkan Documentation and Extensions:
Procedures and Conventions document in the section “Registering a Vendor
ID with Khronos”.
Khronos vendor IDs are allocated starting at 0x10000, to distinguish them
from the PCI vendor ID namespace.
Khronos vendor IDs are symbolically defined in the VkVendorId type.
The vendor is also responsible for the value returned in deviceID
.
If the implementation is driven primarily by a PCI
device with a PCI device ID, the low 16 bits of
deviceID
must contain that PCI device ID, and the remaining bits
must be set to zero.
Otherwise, the choice of what values to return may be dictated by operating
system or platform policies - but should uniquely identify both the device
version and any major configuration options (for example, core count in the
case of multicore devices).
Note
The same device ID should be used for all physical implementations of that device version and configuration. For example, all uses of a specific silicon IP GPU version and configuration should use the same device ID, even if those uses occur in different SoCs. |
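As an informative sketch, an application can distinguish the two vendor ID namespaces described above simply by value:

VkPhysicalDeviceProperties props;
vkGetPhysicalDeviceProperties(physicalDevice, &props);

if (props.vendorID < 0x10000) {
    // Low 16 bits hold a PCI vendor ID; look it up in the PCI-SIG database.
} else {
    // A Khronos vendor ID, e.g. VK_VENDOR_ID_MESA.
}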
Khronos vendor IDs which may be returned in
VkPhysicalDeviceProperties::vendorID
are:
// Provided by VK_VERSION_1_0
typedef enum VkVendorId {
VK_VENDOR_ID_VIV = 0x10001,
VK_VENDOR_ID_VSI = 0x10002,
VK_VENDOR_ID_KAZAN = 0x10003,
VK_VENDOR_ID_CODEPLAY = 0x10004,
VK_VENDOR_ID_MESA = 0x10005,
VK_VENDOR_ID_POCL = 0x10006,
} VkVendorId;
Note
Khronos vendor IDs may be allocated by vendors at any time. Only the latest canonical versions of this Specification, of the corresponding vk.xml API Registry, and of the corresponding vulkan_core.h header file must contain all reserved Khronos vendor IDs.
Only Khronos vendor IDs are given symbolic names at present. PCI vendor IDs returned by the implementation can be looked up in the PCI-SIG database. |
VK_MAX_PHYSICAL_DEVICE_NAME_SIZE
is the length in char
values of
an array containing a physical device name string, as returned in
VkPhysicalDeviceProperties::deviceName.
#define VK_MAX_PHYSICAL_DEVICE_NAME_SIZE 256U
The physical device types which may be returned in
VkPhysicalDeviceProperties::deviceType
are:
// Provided by VK_VERSION_1_0
typedef enum VkPhysicalDeviceType {
VK_PHYSICAL_DEVICE_TYPE_OTHER = 0,
VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU = 1,
VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU = 2,
VK_PHYSICAL_DEVICE_TYPE_VIRTUAL_GPU = 3,
VK_PHYSICAL_DEVICE_TYPE_CPU = 4,
} VkPhysicalDeviceType;
- VK_PHYSICAL_DEVICE_TYPE_OTHER - the device does not match any other available types.
- VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU - the device is typically one embedded in or tightly coupled with the host.
- VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU - the device is typically a separate processor connected to the host via an interlink.
- VK_PHYSICAL_DEVICE_TYPE_VIRTUAL_GPU - the device is typically a virtual node in a virtualization environment.
- VK_PHYSICAL_DEVICE_TYPE_CPU - the device is typically running on the same processors as the host.
The physical device type is advertised for informational purposes only, and does not directly affect the operation of the system. However, the device type may correlate with other advertised properties or capabilities of the system, such as how many memory heaps there are.
To query properties of queues available on a physical device, call:
// Provided by VK_VERSION_1_0
void vkGetPhysicalDeviceQueueFamilyProperties(
VkPhysicalDevice physicalDevice,
uint32_t* pQueueFamilyPropertyCount,
VkQueueFamilyProperties* pQueueFamilyProperties);
- physicalDevice is the handle to the physical device whose properties will be queried.
- pQueueFamilyPropertyCount is a pointer to an integer related to the number of queue families available or queried, as described below.
- pQueueFamilyProperties is either NULL or a pointer to an array of VkQueueFamilyProperties structures.
If pQueueFamilyProperties
is NULL
, then the number of queue families
available is returned in pQueueFamilyPropertyCount
.
Implementations must support at least one queue family.
Otherwise, pQueueFamilyPropertyCount
must point to a variable set by
the user to the number of elements in the pQueueFamilyProperties
array, and on return the variable is overwritten with the number of
structures actually written to pQueueFamilyProperties
.
If pQueueFamilyPropertyCount
is less than the number of queue families
available, at most pQueueFamilyPropertyCount
structures will be
written.
The VkQueueFamilyProperties
structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkQueueFamilyProperties {
VkQueueFlags queueFlags;
uint32_t queueCount;
uint32_t timestampValidBits;
VkExtent3D minImageTransferGranularity;
} VkQueueFamilyProperties;
- queueFlags is a bitmask of VkQueueFlagBits indicating capabilities of the queues in this queue family.
- queueCount is the unsigned integer count of queues in this queue family. Each queue family must support at least one queue.
- timestampValidBits is the unsigned integer count of meaningful bits in the timestamps written via vkCmdWriteTimestamp. The valid range for the count is 36..64 bits, or a value of 0, indicating no support for timestamps. Bits outside the valid range are guaranteed to be zeros.
- minImageTransferGranularity is the minimum granularity supported for image transfer operations on the queues in this queue family.
The value returned in minImageTransferGranularity
has a unit of
compressed texel blocks for images having a block-compressed format, and a
unit of texels otherwise.
Possible values of minImageTransferGranularity
are:
- (0,0,0) specifies that only whole mip levels must be transferred using the image transfer operations on the corresponding queues. In this case, the following restrictions apply to all offset and extent parameters of image transfer operations:
  - The x, y, and z members of a VkOffset3D parameter must always be zero.
  - The width, height, and depth members of a VkExtent3D parameter must always match the width, height, and depth of the image subresource corresponding to the parameter, respectively.
- (Ax, Ay, Az) where Ax, Ay, and Az are all integer powers of two. In this case the following restrictions apply to all image transfer operations:
  - x, y, and z of a VkOffset3D parameter must be integer multiples of Ax, Ay, and Az, respectively.
  - width of a VkExtent3D parameter must be an integer multiple of Ax, or else x + width must equal the width of the image subresource corresponding to the parameter.
  - height of a VkExtent3D parameter must be an integer multiple of Ay, or else y + height must equal the height of the image subresource corresponding to the parameter.
  - depth of a VkExtent3D parameter must be an integer multiple of Az, or else z + depth must equal the depth of the image subresource corresponding to the parameter.
  - If the format of the image corresponding to the parameters is one of the block-compressed formats then for the purposes of the above calculations the granularity must be scaled up by the compressed texel block dimensions.
Queues supporting graphics and/or compute operations must report
(1,1,1) in minImageTransferGranularity
, meaning that there are
no additional restrictions on the granularity of image transfer operations
for these queues.
Other queues supporting image transfer operations are only required to
support whole mip level transfers, thus minImageTransferGranularity
for queues belonging to such queue families may be (0,0,0).
The Device Memory section describes memory properties queried from the physical device.
For physical device feature queries see the Features chapter.
Bits which may be set in VkQueueFamilyProperties::queueFlags
,
indicating capabilities of queues in a queue family are:
// Provided by VK_VERSION_1_0
typedef enum VkQueueFlagBits {
VK_QUEUE_GRAPHICS_BIT = 0x00000001,
VK_QUEUE_COMPUTE_BIT = 0x00000002,
VK_QUEUE_TRANSFER_BIT = 0x00000004,
VK_QUEUE_SPARSE_BINDING_BIT = 0x00000008,
} VkQueueFlagBits;
- VK_QUEUE_GRAPHICS_BIT specifies that queues in this queue family support graphics operations.
- VK_QUEUE_COMPUTE_BIT specifies that queues in this queue family support compute operations.
- VK_QUEUE_TRANSFER_BIT specifies that queues in this queue family support transfer operations.
- VK_QUEUE_SPARSE_BINDING_BIT specifies that queues in this queue family support sparse memory management operations (see Sparse Resources). If any of the sparse resource features are enabled, then at least one queue family must support this bit.
If an implementation exposes any queue family that supports graphics operations, at least one queue family of at least one physical device exposed by the implementation must support both graphics and compute operations.
Note
All commands that are allowed on a queue that supports transfer operations
are also allowed on a queue that supports either graphics or compute
operations.
Thus, if the capabilities of a queue family include
|
For further details see Queues.
// Provided by VK_VERSION_1_0
typedef VkFlags VkQueueFlags;
VkQueueFlags
is a bitmask type for setting a mask of zero or more
VkQueueFlagBits.
5.2. Devices
Device objects represent logical connections to physical devices. Each device exposes a number of queue families each having one or more queues. All queues in a queue family support the same operations.
As described in Physical Devices, a Vulkan application will first query for all physical devices in a system. Each physical device can then be queried for its capabilities, including its queue and queue family properties. Once an acceptable physical device is identified, an application will create a corresponding logical device. The created logical device is then the primary interface to the physical device.
How to enumerate the physical devices in a system and query those physical devices for their queue family properties is described in the Physical Device Enumeration section above.
5.2.1. Device Creation
Logical devices are represented by VkDevice
handles:
// Provided by VK_VERSION_1_0
VK_DEFINE_HANDLE(VkDevice)
A logical device is created as a connection to a physical device. To create a logical device, call:
// Provided by VK_VERSION_1_0
VkResult vkCreateDevice(
VkPhysicalDevice physicalDevice,
const VkDeviceCreateInfo* pCreateInfo,
const VkAllocationCallbacks* pAllocator,
VkDevice* pDevice);
- physicalDevice must be one of the device handles returned from a call to vkEnumeratePhysicalDevices (see Physical Device Enumeration).
- pCreateInfo is a pointer to a VkDeviceCreateInfo structure containing information about how to create the device.
- pAllocator controls host memory allocation as described in the Memory Allocation chapter.
- pDevice is a pointer to a handle in which the created VkDevice is returned.
vkCreateDevice
verifies that extensions and features requested in the
ppEnabledExtensionNames
and pEnabledFeatures
members of
pCreateInfo
, respectively, are supported by the implementation.
If any requested extension is not supported, vkCreateDevice
must
return VK_ERROR_EXTENSION_NOT_PRESENT
.
If any requested feature is not supported, vkCreateDevice
must return
VK_ERROR_FEATURE_NOT_PRESENT
.
Support for extensions can be checked before creating a device by querying
vkEnumerateDeviceExtensionProperties.
Support for features can similarly be checked by querying
vkGetPhysicalDeviceFeatures.
After verifying and enabling the extensions the VkDevice
object is
created and returned to the application.
Multiple logical devices can be created from the same physical device.
Logical device creation may fail due to lack of device-specific resources
(in addition to other errors).
If that occurs, vkCreateDevice
will return
VK_ERROR_TOO_MANY_OBJECTS
.
The VkDeviceCreateInfo
structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkDeviceCreateInfo {
VkStructureType sType;
const void* pNext;
VkDeviceCreateFlags flags;
uint32_t queueCreateInfoCount;
const VkDeviceQueueCreateInfo* pQueueCreateInfos;
uint32_t enabledLayerCount;
const char* const* ppEnabledLayerNames;
uint32_t enabledExtensionCount;
const char* const* ppEnabledExtensionNames;
const VkPhysicalDeviceFeatures* pEnabledFeatures;
} VkDeviceCreateInfo;
- sType is the type of this structure.
- pNext is NULL or a pointer to a structure extending this structure.
- flags is reserved for future use.
- queueCreateInfoCount is the unsigned integer size of the pQueueCreateInfos array. Refer to the Queue Creation section below for further details.
- pQueueCreateInfos is a pointer to an array of VkDeviceQueueCreateInfo structures describing the queues that are requested to be created along with the logical device. Refer to the Queue Creation section below for further details.
- enabledLayerCount is deprecated and ignored.
- ppEnabledLayerNames is deprecated and ignored. See Device Layer Deprecation.
- enabledExtensionCount is the number of device extensions to enable.
- ppEnabledExtensionNames is a pointer to an array of enabledExtensionCount null-terminated UTF-8 strings containing the names of extensions to enable for the created device. See the Extensions section for further details.
- pEnabledFeatures is NULL or a pointer to a VkPhysicalDeviceFeatures structure containing boolean indicators of all the features to be enabled. Refer to the Features section for further details.
// Provided by VK_VERSION_1_0
typedef VkFlags VkDeviceCreateFlags;
VkDeviceCreateFlags
is a bitmask type for setting a mask, but is
currently reserved for future use.
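A minimal device creation sketch, requesting a single queue from a previously chosen queue family (chosenFamily, assumed to come from the queue family query shown earlier) and enabling no extensions or features; VkDeviceQueueCreateInfo is described in the Queue Creation section below:

float priority = 1.0f;

VkDeviceQueueCreateInfo queueInfo = {0};
queueInfo.sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
queueInfo.queueFamilyIndex = chosenFamily;
queueInfo.queueCount = 1;
queueInfo.pQueuePriorities = &priority;

VkDeviceCreateInfo deviceInfo = {0};
deviceInfo.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
deviceInfo.queueCreateInfoCount = 1;
deviceInfo.pQueueCreateInfos = &queueInfo;
deviceInfo.pEnabledFeatures = NULL;          // no optional features enabled

VkDevice device = VK_NULL_HANDLE;
VkResult result = vkCreateDevice(physicalDevice, &deviceInfo, NULL, &device);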
5.2.2. Device Use
The following is a high-level list of VkDevice
uses along with
references on where to find more information:
- Creation of queues. See the Queues section below for further details.
- Creation and tracking of various synchronization constructs. See Synchronization and Cache Control for further details.
- Allocating, freeing, and managing memory. See Memory Allocation and Resource Creation for further details.
- Creation and destruction of command buffers and command buffer pools. See Command Buffers for further details.
- Creation, destruction, and management of graphics state. See Pipelines and Resource Descriptors, among others, for further details.
5.2.3. Lost Device
A logical device may become lost for a number of implementation-specific reasons, indicating that pending and future command execution may fail and cause resources and backing memory to become undefined.
Note
Typical reasons for device loss will include things like execution timing out (to prevent denial of service), power management events, platform resource management, implementation errors. Applications not adhering to valid usage may also result in device loss being reported, however this is not guaranteed. Even if device loss is reported, the system may be in an unrecoverable state, and further usage of the API is still considered invalid. |
When this happens, certain commands will return VK_ERROR_DEVICE_LOST
.
After any such event, the logical device is considered lost.
It is not possible to reset the logical device to a non-lost state, however
the lost state is specific to a logical device (VkDevice
), and the
corresponding physical device (VkPhysicalDevice
) may be otherwise
unaffected.
In some cases, the physical device may also be lost, and attempting to
create a new logical device will fail, returning VK_ERROR_DEVICE_LOST
.
This is usually indicative of a problem with the underlying implementation,
or its connection to the host.
If the physical device has not been lost, and a new logical device is
successfully created from that physical device, it must be in the non-lost
state.
Note
Whilst logical device loss may be recoverable, in the case of physical device loss, it is unlikely that an application will be able to recover unless additional, unaffected physical devices exist on the system. The error is largely informational and intended only to inform the user that a platform issue has occurred, and should be investigated further. For example, underlying hardware may have developed a fault or become physically disconnected from the rest of the system. In many cases, physical device loss may cause other more serious issues such as the operating system crashing; in which case it may not be reported via the Vulkan API. |
When a device is lost, its child objects are not implicitly destroyed and their handles are still valid. Those objects must still be destroyed before their parents or the device can be destroyed (see the Object Lifetime section). The host address space corresponding to device memory mapped using vkMapMemory is still valid, and host memory accesses to these mapped regions are still valid, but the contents are undefined. It is still legal to call any API command on the device and child objects.
Once a device is lost, command execution may fail, and commands that return
a VkResult may return VK_ERROR_DEVICE_LOST
.
Commands that do not allow runtime errors must still operate correctly for
valid usage and, if applicable, return valid data.
Commands that wait indefinitely for device execution (namely
vkDeviceWaitIdle, vkQueueWaitIdle, vkWaitForFences
with a maximum timeout
, and vkGetQueryPoolResults with the
VK_QUERY_RESULT_WAIT_BIT
bit set in flags
) must return in
finite time even in the case of a lost device, and return either
VK_SUCCESS
or VK_ERROR_DEVICE_LOST
.
For any command that may return VK_ERROR_DEVICE_LOST
, for the purpose
of determining whether a command buffer is in the
pending state, or whether resources are
considered in-use by the device, a return value of
VK_ERROR_DEVICE_LOST
is equivalent to VK_SUCCESS
.
5.2.4. Device Destruction
To destroy a device, call:
// Provided by VK_VERSION_1_0
void vkDestroyDevice(
VkDevice device,
const VkAllocationCallbacks* pAllocator);
- device is the logical device to destroy.
- pAllocator controls host memory allocation as described in the Memory Allocation chapter.
To ensure that no work is active on the device, vkDeviceWaitIdle can
be used to gate the destruction of the device.
Prior to destroying a device, an application is responsible for
destroying/freeing any Vulkan objects that were created using that device as
the first parameter of the corresponding vkCreate*
or
vkAllocate*
command.
Note
The lifetime of each of these objects is bound by the lifetime of the VkDevice object. Therefore, to avoid resource leaks, it is critical that an application explicitly free all of these resources prior to calling vkDestroyDevice. |
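A typical shutdown sequence therefore waits for the device to become idle, destroys the objects created from it, and then destroys the device and instance, as in this informative sketch:

vkDeviceWaitIdle(device);          // ensure no work remains active on the device
/* destroy or free all objects created from this device here */
vkDestroyDevice(device, NULL);
vkDestroyInstance(instance, NULL);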
5.3. Queues
5.3.1. Queue Family Properties
As discussed in the Physical Device Enumeration section above, the vkGetPhysicalDeviceQueueFamilyProperties command is used to retrieve details about the queue families and queues supported by a device.
Each index in the pQueueFamilyProperties
array returned by
vkGetPhysicalDeviceQueueFamilyProperties describes a unique queue
family on that physical device.
These indices are used when creating queues, and they correspond directly
with the queueFamilyIndex
that is passed to the vkCreateDevice
command via the VkDeviceQueueCreateInfo structure as described in the
Queue Creation section below.
Grouping of queue families within a physical device is implementation-dependent.
Note
The general expectation is that a physical device groups all queues of matching capabilities into a single family. However, while implementations should do this, it is possible that a physical device may return two separate queue families with the same capabilities. |
Once an application has identified a physical device with the queue(s) that it desires to use, it will create those queues in conjunction with a logical device. This is described in the following section.
5.3.2. Queue Creation
Creating a logical device also creates the queues associated with that
device.
The queues to create are described by a set of VkDeviceQueueCreateInfo
structures that are passed to vkCreateDevice in
pQueueCreateInfos
.
Queues are represented by VkQueue
handles:
// Provided by VK_VERSION_1_0
VK_DEFINE_HANDLE(VkQueue)
The VkDeviceQueueCreateInfo
structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkDeviceQueueCreateInfo {
VkStructureType sType;
const void* pNext;
VkDeviceQueueCreateFlags flags;
uint32_t queueFamilyIndex;
uint32_t queueCount;
const float* pQueuePriorities;
} VkDeviceQueueCreateInfo;
- sType is the type of this structure.
- pNext is NULL or a pointer to a structure extending this structure.
- flags is reserved for future use.
- queueFamilyIndex is an unsigned integer indicating the index of the queue family in which to create the queues on this device. This index corresponds to the index of an element of the pQueueFamilyProperties array that was returned by vkGetPhysicalDeviceQueueFamilyProperties.
- queueCount is an unsigned integer specifying the number of queues to create in the queue family indicated by queueFamilyIndex, and with the behavior specified by flags.
- pQueuePriorities is a pointer to an array of queueCount normalized floating point values, specifying priorities of work that will be submitted to each created queue. See Queue Priority for more information.
// Provided by VK_VERSION_1_0
typedef VkFlags VkDeviceQueueCreateFlags;
VkDeviceQueueCreateFlags
is a bitmask type for setting a mask, but is
currently reserved for future use.
To retrieve a handle to a VkQueue object, call:
// Provided by VK_VERSION_1_0
void vkGetDeviceQueue(
VkDevice device,
uint32_t queueFamilyIndex,
uint32_t queueIndex,
VkQueue* pQueue);
- device is the logical device that owns the queue.
- queueFamilyIndex is the index of the queue family to which the queue belongs.
- queueIndex is the index within this queue family of the queue to retrieve.
- pQueue is a pointer to a VkQueue object that will be filled with the handle for the requested queue.
5.3.3. Queue Family Index
The queue family index is used in multiple places in Vulkan in order to tie operations to a specific family of queues.
When retrieving a handle to the queue via vkGetDeviceQueue
, the queue
family index is used to select which queue family to retrieve the
VkQueue
handle from as described in the previous section.
When creating a VkCommandPool
object (see
Command Pools), a queue family index is specified
in the VkCommandPoolCreateInfo structure.
Command buffers from this pool can only be submitted on queues
corresponding to this queue family.
When creating VkImage
(see Images) and
VkBuffer
(see Buffers) resources, a set of queue
families is included in the VkImageCreateInfo and
VkBufferCreateInfo structures to specify the queue families that can
access the resource.
When inserting a VkBufferMemoryBarrier or VkImageMemoryBarrier (see Pipeline Barriers), a source and destination queue family index is specified to allow the ownership of a buffer or image to be transferred from one queue family to another. See the Resource Sharing section for details.
5.3.4. Queue Priority
Each queue is assigned a priority, as set in the VkDeviceQueueCreateInfo structures when creating the device. The priority of each queue is a normalized floating point value between 0.0 and 1.0, which is then translated to a discrete priority level by the implementation. Higher values indicate a higher priority, with 0.0 being the lowest priority and 1.0 being the highest.
Within the same device, queues with higher priority may be allotted more processing time than queues with lower priority. The implementation makes no guarantees with regards to ordering or scheduling among queues with the same priority, other than the constraints defined by any explicit synchronization primitives. The implementation makes no guarantees with regards to queues across different devices.
An implementation may allow a higher-priority queue to starve a
lower-priority queue on the same VkDevice
until the higher-priority
queue has no further commands to execute.
The relationship of queue priorities must not cause queues on one
VkDevice
to starve queues on another VkDevice
.
No specific guarantees are made about higher priority queues receiving more processing time or better quality of service than lower priority queues.
5.3.5. Queue Submission
Work is submitted to a queue via queue submission commands such as vkQueueSubmit. Queue submission commands define a set of queue operations to be executed by the underlying physical device, including synchronization with semaphores and fences.
Submission commands take as parameters a target queue, zero or more batches of work, and an optional fence to signal upon completion. Each batch consists of three distinct parts:
- Zero or more semaphores to wait on before execution of the rest of the batch. If present, these describe a semaphore wait operation.
- Zero or more work items to execute. If present, these describe a queue operation matching the work described.
- Zero or more semaphores to signal upon completion of the work items. If present, these describe a semaphore signal operation.
If a fence is present in a queue submission, it describes a fence signal operation.
All work described by a queue submission command must be submitted to the queue before the command returns.
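As an informative sketch, a single-batch submission of one command buffer with no semaphores, signaling a fence on completion, might look like this; VkSubmitInfo is defined elsewhere in this Specification, and commandBuffer and fence are assumed to have been created beforehand (fence may be VK_NULL_HANDLE if no fence signal operation is wanted).

VkSubmitInfo submitInfo = {0};
submitInfo.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
submitInfo.waitSemaphoreCount = 0;             // no semaphore wait operations
submitInfo.commandBufferCount = 1;
submitInfo.pCommandBuffers = &commandBuffer;   // the work items for this batch
submitInfo.signalSemaphoreCount = 0;           // no semaphore signal operations

VkResult result = vkQueueSubmit(queue, 1, &submitInfo, fence);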
Sparse Memory Binding
In Vulkan it is possible to sparsely bind memory to buffers and images as
described in the Sparse Resource chapter.
Sparse memory binding is a queue operation.
A queue whose flags include the VK_QUEUE_SPARSE_BINDING_BIT
must be
able to support the mapping of a virtual address to a physical address on
the device.
This causes an update to the page table mappings on the device.
This update must be synchronized on a queue to avoid corrupting page table
mappings during execution of graphics commands.
By binding the sparse memory resources on queues, all commands that are
dependent on the updated bindings are synchronized to only execute after the
binding is updated.
See the Synchronization and Cache Control chapter for
how this synchronization is accomplished.
6. Command Buffers
Command buffers are objects used to record commands which can be subsequently submitted to a device queue for execution. There are two levels of command buffers - primary command buffers, which can execute secondary command buffers, and which are submitted to queues, and secondary command buffers, which can be executed by primary command buffers, and which are not directly submitted to queues.
Command buffers are represented by VkCommandBuffer
handles:
// Provided by VK_VERSION_1_0
VK_DEFINE_HANDLE(VkCommandBuffer)
Recorded commands include commands to bind pipelines and descriptor sets to the command buffer, commands to modify dynamic state, commands to draw (for graphics rendering), commands to dispatch (for compute), commands to execute secondary command buffers (for primary command buffers only), commands to copy buffers and images, and other commands.
Each command buffer manages state independently of other command buffers. There is no inheritance of state across primary and secondary command buffers, or between secondary command buffers. When a command buffer begins recording, all state in that command buffer is undefined. When secondary command buffer(s) are recorded to execute on a primary command buffer, the secondary command buffer inherits no state from the primary command buffer, and all state of the primary command buffer is undefined after an execute secondary command buffer command is recorded. There is one exception to this rule - if the primary command buffer is inside a render pass instance, then the render pass and subpass state is not disturbed by executing secondary command buffers. For state dependent commands (such as draws and dispatches), any state consumed by those commands must not be undefined.
Unless otherwise specified, and without explicit synchronization, the various commands submitted to a queue via command buffers may execute in arbitrary order relative to each other, and/or concurrently. Also, the memory side effects of those commands may not be directly visible to other commands without explicit memory dependencies. This is true within a command buffer, and across command buffers submitted to a given queue. See the synchronization chapter for information on implicit and explicit synchronization between commands.
6.1. Command Buffer Lifecycle
Each command buffer is always in one of the following states:
- Initial: When a command buffer is allocated, it is in the initial state. Some commands are able to reset a command buffer (or a set of command buffers) back to this state from any of the executable, recording or invalid state. Command buffers in the initial state can only be moved to the recording state, or freed.
- Recording: vkBeginCommandBuffer changes the state of a command buffer from the initial state to the recording state. Once a command buffer is in the recording state, vkCmd* commands can be used to record to the command buffer.
- Executable: vkEndCommandBuffer ends the recording of a command buffer, and moves it from the recording state to the executable state. Executable command buffers can be submitted, reset, or recorded to another command buffer.
- Pending: Queue submission of a command buffer changes the state of a command buffer from the executable state to the pending state. Whilst in the pending state, applications must not attempt to modify the command buffer in any way - as the device may be processing the commands recorded to it. Once execution of a command buffer completes, the command buffer either reverts back to the executable state, or if it was recorded with VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT, it moves to the invalid state. A synchronization command should be used to detect when this occurs.
- Invalid: Some operations, such as modifying or deleting a resource that was used in a command recorded to a command buffer, will transition the state of that command buffer into the invalid state. Command buffers in the invalid state can only be reset or freed.
Any given command that operates on a command buffer has its own requirements on what state a command buffer must be in, which are detailed in the valid usage constraints for that command.
Resetting a command buffer is an operation that discards any previously recorded commands and puts a command buffer in the initial state. Resetting occurs as a result of vkResetCommandBuffer or vkResetCommandPool, or as part of vkBeginCommandBuffer (which additionally puts the command buffer in the recording state).
Secondary command buffers can be recorded to a primary command buffer via vkCmdExecuteCommands. This partially ties the lifecycle of the two command buffers together - if the primary is submitted to a queue, both the primary and any secondaries recorded to it move to the pending state. Once execution of the primary completes, so does execution of any secondaries recorded within it. After all executions of each command buffer complete, they each move to their appropriate completion state (either to the executable state or the invalid state, as specified above).
If a secondary moves to the invalid state or the initial state, then all primary buffers it is recorded in move to the invalid state. A primary moving to any other state does not affect the state of a secondary recorded in it.
Note
Resetting or freeing a primary command buffer removes the lifecycle linkage to all secondary command buffers that were recorded into it. |
6.2. Command Pools
Command pools are opaque objects that command buffer memory is allocated from, and which allow the implementation to amortize the cost of resource creation across multiple command buffers. Command pools are externally synchronized, meaning that a command pool must not be used concurrently in multiple threads. That includes use via recording commands on any command buffers allocated from the pool, as well as operations that allocate, free, and reset command buffers or the pool itself.
Command pools are represented by VkCommandPool
handles:
// Provided by VK_VERSION_1_0
VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkCommandPool)
To create a command pool, call:
// Provided by VK_VERSION_1_0
VkResult vkCreateCommandPool(
VkDevice device,
const VkCommandPoolCreateInfo* pCreateInfo,
const VkAllocationCallbacks* pAllocator,
VkCommandPool* pCommandPool);
- device is the logical device that creates the command pool.
- pCreateInfo is a pointer to a VkCommandPoolCreateInfo structure specifying the state of the command pool object.
- pAllocator controls host memory allocation as described in the Memory Allocation chapter.
- pCommandPool is a pointer to a VkCommandPool handle in which the created pool is returned.
The VkCommandPoolCreateInfo
structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkCommandPoolCreateInfo {
VkStructureType sType;
const void* pNext;
VkCommandPoolCreateFlags flags;
uint32_t queueFamilyIndex;
} VkCommandPoolCreateInfo;
- sType is the type of this structure.
- pNext is NULL or a pointer to a structure extending this structure.
- flags is a bitmask of VkCommandPoolCreateFlagBits indicating usage behavior for the pool and command buffers allocated from it.
- queueFamilyIndex designates a queue family as described in section Queue Family Properties. All command buffers allocated from this command pool must be submitted on queues from the same queue family.
Bits which can be set in VkCommandPoolCreateInfo::flags
,
specifying usage behavior for a command pool, are:
// Provided by VK_VERSION_1_0
typedef enum VkCommandPoolCreateFlagBits {
VK_COMMAND_POOL_CREATE_TRANSIENT_BIT = 0x00000001,
VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT = 0x00000002,
} VkCommandPoolCreateFlagBits;
- VK_COMMAND_POOL_CREATE_TRANSIENT_BIT specifies that command buffers allocated from the pool will be short-lived, meaning that they will be reset or freed in a relatively short timeframe. This flag may be used by the implementation to control memory allocation behavior within the pool.
- VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT allows any command buffer allocated from a pool to be individually reset to the initial state; either by calling vkResetCommandBuffer, or via the implicit reset when calling vkBeginCommandBuffer. If this flag is not set on a pool, then vkResetCommandBuffer must not be called for any command buffer allocated from that pool.
// Provided by VK_VERSION_1_0
typedef VkFlags VkCommandPoolCreateFlags;
VkCommandPoolCreateFlags
is a bitmask type for setting a mask of zero
or more VkCommandPoolCreateFlagBits.
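A minimal sketch creating a resettable command pool for the queue family chosen earlier (chosenFamily is an assumption carried over from the earlier sketches):

VkCommandPoolCreateInfo poolInfo = {0};
poolInfo.sType = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO;
poolInfo.flags = VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT; // allow per-buffer reset
poolInfo.queueFamilyIndex = chosenFamily;

VkCommandPool commandPool = VK_NULL_HANDLE;
VkResult result = vkCreateCommandPool(device, &poolInfo, NULL, &commandPool);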
To reset a command pool, call:
// Provided by VK_VERSION_1_0
VkResult vkResetCommandPool(
VkDevice device,
VkCommandPool commandPool,
VkCommandPoolResetFlags flags);
- device is the logical device that owns the command pool.
- commandPool is the command pool to reset.
- flags is a bitmask of VkCommandPoolResetFlagBits controlling the reset operation.
Resetting a command pool recycles all of the resources from all of the command buffers allocated from the command pool back to the command pool. All command buffers that have been allocated from the command pool are put in the initial state.
Any primary command buffer allocated from another VkCommandPool that
is in the recording or executable state and
has a secondary command buffer allocated from commandPool
recorded
into it, becomes invalid.
Bits which can be set in vkResetCommandPool::flags
, controlling
the reset operation, are:
// Provided by VK_VERSION_1_0
typedef enum VkCommandPoolResetFlagBits {
VK_COMMAND_POOL_RESET_RELEASE_RESOURCES_BIT = 0x00000001,
} VkCommandPoolResetFlagBits;
- VK_COMMAND_POOL_RESET_RELEASE_RESOURCES_BIT specifies that resetting a command pool recycles all of the resources from the command pool back to the system.
// Provided by VK_VERSION_1_0
typedef VkFlags VkCommandPoolResetFlags;
VkCommandPoolResetFlags
is a bitmask type for setting a mask of zero
or more VkCommandPoolResetFlagBits.
To destroy a command pool, call:
// Provided by VK_VERSION_1_0
void vkDestroyCommandPool(
VkDevice device,
VkCommandPool commandPool,
const VkAllocationCallbacks* pAllocator);
- device is the logical device that destroys the command pool.
- commandPool is the handle of the command pool to destroy.
- pAllocator controls host memory allocation as described in the Memory Allocation chapter.
When a pool is destroyed, all command buffers allocated from the pool are freed.
Any primary command buffer allocated from another VkCommandPool that
is in the recording or executable state and
has a secondary command buffer allocated from commandPool
recorded
into it, becomes invalid.
6.3. Command Buffer Allocation and Management
To allocate command buffers, call:
// Provided by VK_VERSION_1_0
VkResult vkAllocateCommandBuffers(
VkDevice device,
const VkCommandBufferAllocateInfo* pAllocateInfo,
VkCommandBuffer* pCommandBuffers);
- device is the logical device that owns the command pool.
- pAllocateInfo is a pointer to a VkCommandBufferAllocateInfo structure describing parameters of the allocation.
- pCommandBuffers is a pointer to an array of VkCommandBuffer handles in which the resulting command buffer objects are returned. The array must be at least the length specified by the commandBufferCount member of pAllocateInfo. Each allocated command buffer begins in the initial state.
When command buffers are first allocated, they are in the initial state.
The VkCommandBufferAllocateInfo
structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkCommandBufferAllocateInfo {
VkStructureType sType;
const void* pNext;
VkCommandPool commandPool;
VkCommandBufferLevel level;
uint32_t commandBufferCount;
} VkCommandBufferAllocateInfo;
- sType is the type of this structure.
- pNext is NULL or a pointer to a structure extending this structure.
- commandPool is the command pool from which the command buffers are allocated.
- level is a VkCommandBufferLevel value specifying the command buffer level.
- commandBufferCount is the number of command buffers to allocate from the pool.
Possible values of VkCommandBufferAllocateInfo::level, specifying the command buffer level, are:
// Provided by VK_VERSION_1_0
typedef enum VkCommandBufferLevel {
VK_COMMAND_BUFFER_LEVEL_PRIMARY = 0,
VK_COMMAND_BUFFER_LEVEL_SECONDARY = 1,
} VkCommandBufferLevel;
- VK_COMMAND_BUFFER_LEVEL_PRIMARY specifies a primary command buffer.
- VK_COMMAND_BUFFER_LEVEL_SECONDARY specifies a secondary command buffer.
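An informative sketch of allocation (not normative), assuming device and commandPool handles created as above:
// Informative example: allocate two primary command buffers from a pool.
VkCommandBufferAllocateInfo allocInfo = {0};
allocInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO;
allocInfo.pNext = NULL;
allocInfo.commandPool = commandPool;
allocInfo.level = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
allocInfo.commandBufferCount = 2;

VkCommandBuffer commandBuffers[2];
VkResult result = vkAllocateCommandBuffers(device, &allocInfo, commandBuffers);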
To reset a command buffer, call:
// Provided by VK_VERSION_1_0
VkResult vkResetCommandBuffer(
VkCommandBuffer commandBuffer,
VkCommandBufferResetFlags flags);
- commandBuffer is the command buffer to reset. The command buffer can be in any state other than pending, and is moved into the initial state.
- flags is a bitmask of VkCommandBufferResetFlagBits controlling the reset operation.
Any primary command buffer that is in the recording or executable state and has commandBuffer
recorded into
it, becomes invalid.
Bits which can be set in vkResetCommandBuffer::flags, controlling the reset operation, are:
// Provided by VK_VERSION_1_0
typedef enum VkCommandBufferResetFlagBits {
VK_COMMAND_BUFFER_RESET_RELEASE_RESOURCES_BIT = 0x00000001,
} VkCommandBufferResetFlagBits;
- VK_COMMAND_BUFFER_RESET_RELEASE_RESOURCES_BIT specifies that most or all memory resources currently owned by the command buffer should be returned to the parent command pool. If this flag is not set, then the command buffer may hold onto memory resources and reuse them when recording commands. commandBuffer is moved to the initial state.
// Provided by VK_VERSION_1_0
typedef VkFlags VkCommandBufferResetFlags;
VkCommandBufferResetFlags
is a bitmask type for setting a mask of zero
or more VkCommandBufferResetFlagBits.
To free command buffers, call:
// Provided by VK_VERSION_1_0
void vkFreeCommandBuffers(
VkDevice device,
VkCommandPool commandPool,
uint32_t commandBufferCount,
const VkCommandBuffer* pCommandBuffers);
- device is the logical device that owns the command pool.
- commandPool is the command pool from which the command buffers were allocated.
- commandBufferCount is the length of the pCommandBuffers array.
- pCommandBuffers is a pointer to an array of handles of command buffers to free.
Any primary command buffer that is in the recording or executable state and has any element of pCommandBuffers
recorded into it, becomes invalid.
6.4. Command Buffer Recording
To begin recording a command buffer, call:
// Provided by VK_VERSION_1_0
VkResult vkBeginCommandBuffer(
VkCommandBuffer commandBuffer,
const VkCommandBufferBeginInfo* pBeginInfo);
- commandBuffer is the handle of the command buffer which is to be put in the recording state.
- pBeginInfo is a pointer to a VkCommandBufferBeginInfo structure defining additional information about how the command buffer begins recording.
The VkCommandBufferBeginInfo
structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkCommandBufferBeginInfo {
VkStructureType sType;
const void* pNext;
VkCommandBufferUsageFlags flags;
const VkCommandBufferInheritanceInfo* pInheritanceInfo;
} VkCommandBufferBeginInfo;
- sType is the type of this structure.
- pNext is NULL or a pointer to a structure extending this structure.
- flags is a bitmask of VkCommandBufferUsageFlagBits specifying usage behavior for the command buffer.
- pInheritanceInfo is a pointer to a VkCommandBufferInheritanceInfo structure, used if commandBuffer is a secondary command buffer. If this is a primary command buffer, then this value is ignored.
Bits which can be set in VkCommandBufferBeginInfo::flags, specifying usage behavior for a command buffer, are:
// Provided by VK_VERSION_1_0
typedef enum VkCommandBufferUsageFlagBits {
VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT = 0x00000001,
VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT = 0x00000002,
VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT = 0x00000004,
} VkCommandBufferUsageFlagBits;
- VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT specifies that each recording of the command buffer will only be submitted once, and the command buffer will be reset and recorded again between each submission.
- VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT specifies that a secondary command buffer is considered to be entirely inside a render pass. If this is a primary command buffer, then this bit is ignored.
- VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT specifies that a command buffer can be resubmitted to a queue while it is in the pending state, and recorded into multiple primary command buffers.
// Provided by VK_VERSION_1_0
typedef VkFlags VkCommandBufferUsageFlags;
VkCommandBufferUsageFlags
is a bitmask type for setting a mask of zero
or more VkCommandBufferUsageFlagBits.
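An informative sketch (not normative) of a complete recording cycle for a primary command buffer that will be submitted once; commandBuffer is assumed to have been allocated as shown earlier.
// Informative example: begin, record, and end a one-time-submit primary
// command buffer.
VkCommandBufferBeginInfo beginInfo = {0};
beginInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
beginInfo.pNext = NULL;
beginInfo.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT;
beginInfo.pInheritanceInfo = NULL; // ignored for primary command buffers

VkResult result = vkBeginCommandBuffer(commandBuffer, &beginInfo);
// ... record vkCmd* commands here ...
result = vkEndCommandBuffer(commandBuffer);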
If the command buffer is a secondary command buffer, then the
VkCommandBufferInheritanceInfo
structure defines any state that will
be inherited from the primary command buffer:
// Provided by VK_VERSION_1_0
typedef struct VkCommandBufferInheritanceInfo {
VkStructureType sType;
const void* pNext;
VkRenderPass renderPass;
uint32_t subpass;
VkFramebuffer framebuffer;
VkBool32 occlusionQueryEnable;
VkQueryControlFlags queryFlags;
VkQueryPipelineStatisticFlags pipelineStatistics;
} VkCommandBufferInheritanceInfo;
- sType is the type of this structure.
- pNext is NULL or a pointer to a structure extending this structure.
- renderPass is a VkRenderPass object defining which render passes the VkCommandBuffer will be compatible with and can be executed within.
- subpass is the index of the subpass within the render pass instance that the VkCommandBuffer will be executed within.
- framebuffer can refer to the VkFramebuffer object that the VkCommandBuffer will be rendering to if it is executed within a render pass instance. It can be VK_NULL_HANDLE if the framebuffer is not known.
Note
Specifying the exact framebuffer that the secondary command buffer will be executed with may result in better performance at command buffer execution time.
- occlusionQueryEnable specifies whether the command buffer can be executed while an occlusion query is active in the primary command buffer. If this is VK_TRUE, then this command buffer can be executed whether the primary command buffer has an occlusion query active or not. If this is VK_FALSE, then the primary command buffer must not have an occlusion query active.
- queryFlags specifies the query flags that can be used by an active occlusion query in the primary command buffer when this secondary command buffer is executed. If this value includes the VK_QUERY_CONTROL_PRECISE_BIT bit, then the active query can return boolean results or actual sample counts. If this bit is not set, then the active query must not use the VK_QUERY_CONTROL_PRECISE_BIT bit.
- pipelineStatistics is a bitmask of VkQueryPipelineStatisticFlagBits specifying the set of pipeline statistics that can be counted by an active query in the primary command buffer when this secondary command buffer is executed. If this value includes a given bit, then this command buffer can be executed whether the primary command buffer has a pipeline statistics query active that includes this bit or not. If this value excludes a given bit, then the active pipeline statistics query must not be from a query pool that counts that statistic.
If the VkCommandBuffer will not be executed within a render pass instance, renderPass, subpass, and framebuffer are ignored.
Note
On some implementations, not using the VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT bit enables command buffers to be patched in-place if needed, rather than creating a copy of the command buffer.
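An informative sketch (not normative) of beginning a secondary command buffer that will execute entirely inside a render pass instance; renderPass and secondaryCommandBuffer are assumed handles.
// Informative example: begin a secondary command buffer with inheritance info.
VkCommandBufferInheritanceInfo inheritance = {0};
inheritance.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO;
inheritance.renderPass = renderPass;
inheritance.subpass = 0;
inheritance.framebuffer = VK_NULL_HANDLE; // allowed, but may cost performance
inheritance.occlusionQueryEnable = VK_FALSE;
inheritance.queryFlags = 0;
inheritance.pipelineStatistics = 0;

VkCommandBufferBeginInfo beginInfo = {0};
beginInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
beginInfo.flags = VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT;
beginInfo.pInheritanceInfo = &inheritance;

VkResult result = vkBeginCommandBuffer(secondaryCommandBuffer, &beginInfo);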
If a command buffer is in the invalid, or
executable state, and the command buffer was allocated from a command pool
with the VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT
flag set,
then vkBeginCommandBuffer
implicitly resets the command buffer,
behaving as if vkResetCommandBuffer
had been called with
VK_COMMAND_BUFFER_RESET_RELEASE_RESOURCES_BIT
not set.
After the implicit reset, commandBuffer
is moved to the
recording state.
Once recording starts, an application records a sequence of commands
(vkCmd*
) to set state in the command buffer, draw, dispatch, and other
commands.
To complete recording of a command buffer, call:
// Provided by VK_VERSION_1_0
VkResult vkEndCommandBuffer(
VkCommandBuffer commandBuffer);
-
commandBuffer
is the command buffer to complete recording.
If there was an error during recording, the application will be notified by
an unsuccessful return code returned by vkEndCommandBuffer
.
If the application wishes to further use the command buffer, the command
buffer must be reset.
The command buffer must have been in the recording state, and is moved to the executable state.
When a command buffer is in the executable state, it can be submitted to a queue for execution.
6.5. Command Buffer Submission
Note
Submission can be a high overhead operation, and applications should attempt to batch work together into as few calls to vkQueueSubmit as possible.
To submit command buffers to a queue, call:
// Provided by VK_VERSION_1_0
VkResult vkQueueSubmit(
VkQueue queue,
uint32_t submitCount,
const VkSubmitInfo* pSubmits,
VkFence fence);
- queue is the queue that the command buffers will be submitted to.
- submitCount is the number of elements in the pSubmits array.
- pSubmits is a pointer to an array of VkSubmitInfo structures, each specifying a command buffer submission batch.
- fence is an optional handle to a fence to be signaled once all submitted command buffers have completed execution. If fence is not VK_NULL_HANDLE, it defines a fence signal operation.
vkQueueSubmit is a queue submission command, with each batch defined by an element of pSubmits. Batches begin execution in the order they appear in pSubmits, but may complete out of order.
Fence and semaphore operations submitted with vkQueueSubmit have additional ordering constraints compared to other submission commands, with dependencies involving previous and subsequent queue operations. Information about these additional constraints can be found in the semaphore and fence sections of the synchronization chapter.
Details on the interaction of pWaitDstStageMask
with synchronization
are described in the semaphore wait
operation section of the synchronization chapter.
The order that batches appear in pSubmits
is used to determine
submission order, and thus all the
implicit ordering guarantees that respect it.
Other than these implicit ordering guarantees and any explicit synchronization primitives, these batches may overlap or
otherwise execute out of order.
If any command buffer submitted to this queue is in the
executable state, it is moved to the
pending state.
Once execution of all submissions of a command buffer complete, it moves
from the pending state, back to the
executable state.
If a command buffer was recorded with the
VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT
flag, it instead moves to
the invalid state.
If vkQueueSubmit
fails, it may return
VK_ERROR_OUT_OF_HOST_MEMORY
or VK_ERROR_OUT_OF_DEVICE_MEMORY
.
If it does, the implementation must ensure that the state and contents of
any resources or synchronization primitives referenced by the submitted
command buffers and any semaphores referenced by pSubmits
is
unaffected by the call or its failure.
If vkQueueSubmit
fails in such a way that the implementation is unable
to make that guarantee, the implementation must return
VK_ERROR_DEVICE_LOST
.
See Lost Device.
The VkSubmitInfo
structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkSubmitInfo {
VkStructureType sType;
const void* pNext;
uint32_t waitSemaphoreCount;
const VkSemaphore* pWaitSemaphores;
const VkPipelineStageFlags* pWaitDstStageMask;
uint32_t commandBufferCount;
const VkCommandBuffer* pCommandBuffers;
uint32_t signalSemaphoreCount;
const VkSemaphore* pSignalSemaphores;
} VkSubmitInfo;
- sType is the type of this structure.
- pNext is NULL or a pointer to a structure extending this structure.
- waitSemaphoreCount is the number of semaphores upon which to wait before executing the command buffers for the batch.
- pWaitSemaphores is a pointer to an array of VkSemaphore handles upon which to wait before the command buffers for this batch begin execution. If semaphores to wait on are provided, they define a semaphore wait operation.
- pWaitDstStageMask is a pointer to an array of pipeline stages at which each corresponding semaphore wait will occur.
- commandBufferCount is the number of command buffers to execute in the batch.
- pCommandBuffers is a pointer to an array of VkCommandBuffer handles to execute in the batch.
- signalSemaphoreCount is the number of semaphores to be signaled once the commands specified in pCommandBuffers have completed execution.
- pSignalSemaphores is a pointer to an array of VkSemaphore handles which will be signaled when the command buffers for this batch have completed execution. If semaphores to be signaled are provided, they define a semaphore signal operation.
The order that command buffers appear in pCommandBuffers
is used to
determine submission order, and thus
all the implicit ordering guarantees that
respect it.
Other than these implicit ordering guarantees and any explicit synchronization primitives, these command buffers may overlap or
otherwise execute out of order.
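An informative sketch (not normative) of a single-batch submission that waits on one semaphore, executes one command buffer, and signals one semaphore and a fence; all handle names are assumptions for the example.
// Informative example: submit one command buffer with a semaphore wait,
// a semaphore signal, and a fence signal.
VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;

VkSubmitInfo submitInfo = {0};
submitInfo.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
submitInfo.waitSemaphoreCount = 1;
submitInfo.pWaitSemaphores = &imageAcquiredSemaphore;
submitInfo.pWaitDstStageMask = &waitStage;
submitInfo.commandBufferCount = 1;
submitInfo.pCommandBuffers = &commandBuffer;
submitInfo.signalSemaphoreCount = 1;
submitInfo.pSignalSemaphores = &renderFinishedSemaphore;

VkResult result = vkQueueSubmit(queue, 1, &submitInfo, fence);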
6.6. Queue Forward Progress
When using binary semaphores, the application must ensure that command
buffer submissions will be able to complete without any subsequent
operations by the application on any queue.
After any call to vkQueueSubmit
(or other queue operation), for every
queued wait on a semaphore
there must be a prior signal of that semaphore that will not be consumed by
a different wait on the semaphore.
Command buffers in the submission can include vkCmdWaitEvents
commands that wait on events that will not be signaled by earlier commands
in the queue.
Such events must be signaled by the application using vkSetEvent, and
the vkCmdWaitEvents
commands that wait upon them must not be inside a
render pass instance.
The event must be set before the vkCmdWaitEvents command is executed.
Note
Implementations may have some tolerance for waiting on events to be set, but this is defined outside of the scope of Vulkan.
6.7. Secondary Command Buffer Execution
A secondary command buffer must not be directly submitted to a queue. Instead, secondary command buffers are recorded to execute as part of a primary command buffer with the command:
// Provided by VK_VERSION_1_0
void vkCmdExecuteCommands(
VkCommandBuffer commandBuffer,
uint32_t commandBufferCount,
const VkCommandBuffer* pCommandBuffers);
- commandBuffer is a handle to a primary command buffer that the secondary command buffers are executed in.
- commandBufferCount is the length of the pCommandBuffers array.
- pCommandBuffers is a pointer to an array of commandBufferCount secondary command buffer handles, which are recorded to execute in the primary command buffer in the order they are listed in the array.
If any element of pCommandBuffers
was not recorded with the
VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT
flag, and it was recorded
into any other primary command buffer which is currently in the
executable or recording state, that primary
command buffer becomes invalid.
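An informative sketch (not normative): executing two pre-recorded secondary command buffers from a primary command buffer; the handle names are assumptions.
// Informative example: record execution of two secondary command buffers
// into a primary command buffer.
VkCommandBuffer secondaries[2] = { secondaryA, secondaryB };
vkCmdExecuteCommands(primaryCommandBuffer, 2, secondaries);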
7. Synchronization and Cache Control
Synchronization of access to resources is primarily the responsibility of the application in Vulkan. The order of execution of commands with respect to the host and other commands on the device has few implicit guarantees, and needs to be explicitly specified. Memory caches and other optimizations are also explicitly managed, requiring that the flow of data through the system is largely under application control.
Whilst some implicit guarantees exist between commands, five explicit synchronization mechanisms are exposed by Vulkan:
- Fences: Fences can be used to communicate to the host that execution of some task on the device has completed.
- Semaphores: Semaphores can be used to control resource access across multiple queues.
- Events: Events provide a fine-grained synchronization primitive which can be signaled either within a command buffer or by the host, and can be waited upon within a command buffer or queried on the host.
- Pipeline Barriers: Pipeline barriers also provide synchronization control within a command buffer, but at a single point, rather than with separate signal and wait operations.
- Render Passes: Render passes provide a useful synchronization framework for most rendering tasks, built upon the concepts in this chapter. Many cases that would otherwise need an application to use other synchronization primitives can be expressed more efficiently as part of a render pass.
7.1. Execution and Memory Dependencies
An operation is an arbitrary amount of work to be executed on the host, a device, or an external entity such as a presentation engine. Synchronization commands introduce explicit execution dependencies, and memory dependencies between two sets of operations defined by the command’s two synchronization scopes.
The synchronization scopes define which other operations a synchronization command is able to create execution dependencies with. Any type of operation that is not in a synchronization command’s synchronization scopes will not be included in the resulting dependency. For example, for many synchronization commands, the synchronization scopes can be limited to just operations executing in specific pipeline stages, which allows other pipeline stages to be excluded from a dependency. Other scoping options are possible, depending on the particular command.
An execution dependency is a guarantee that for two sets of operations, the first set must happen-before the second set. If an operation happens-before another operation, then the first operation must complete before the second operation is initiated. More precisely:
- Let A and B be separate sets of operations.
- Let S be a synchronization command.
- Let AS and BS be the synchronization scopes of S.
- Let A' be the intersection of sets A and AS.
- Let B' be the intersection of sets B and BS.
- Submitting A, S and B for execution, in that order, will result in execution dependency E between A' and B'.
- Execution dependency E guarantees that A' happens-before B'.
An execution dependency chain is a sequence of execution dependencies that form a happens-before relation between the first dependency’s A' and the final dependency’s B'. For each consecutive pair of execution dependencies, a chain exists if the intersection of BS in the first dependency and AS in the second dependency is not an empty set. The formation of a single execution dependency from an execution dependency chain can be described by substituting the following in the description of execution dependencies:
- Let S be a set of synchronization commands that generate an execution dependency chain.
- Let AS be the first synchronization scope of the first command in S.
- Let BS be the second synchronization scope of the last command in S.
Execution dependencies alone are not sufficient to guarantee that values resulting from writes in one set of operations can be read from another set of operations.
Three additional types of operations are used to control memory access. Availability operations cause the values generated by specified memory write accesses to become available to a memory domain for future access. Any available value remains available until a subsequent write to the same memory location occurs (whether it is made available or not) or the memory is freed. Memory domain operations cause writes that are available to a source memory domain to become available to a destination memory domain (an example of this is making writes available to the host domain available to the device domain). Visibility operations cause values available to a memory domain to become visible to specified memory accesses.
A memory dependency is an execution dependency which includes availability and visibility operations such that:
- The first set of operations happens-before the availability operation.
- The availability operation happens-before the visibility operation.
- The visibility operation happens-before the second set of operations.
Once written values are made visible to a particular type of memory access, they can be read or written by that type of memory access. Most synchronization commands in Vulkan define a memory dependency.
The specific memory accesses that are made available and visible are defined by the access scopes of a memory dependency. Any type of access that is in a memory dependency’s first access scope and occurs in A' is made available. Any type of access that is in a memory dependency’s second access scope and occurs in B' has any available writes made visible to it. Any type of operation that is not in a synchronization command’s access scopes will not be included in the resulting dependency.
A memory dependency enforces availability and visibility of memory accesses and execution order between two sets of operations. Adding to the description of execution dependency chains:
- Let a be the set of memory accesses performed by A'.
- Let b be the set of memory accesses performed by B'.
- Let aS be the first access scope of the first command in S.
- Let bS be the second access scope of the last command in S.
- Let a' be the intersection of sets a and aS.
- Let b' be the intersection of sets b and bS.
- Submitting A, S and B for execution, in that order, will result in a memory dependency m between A' and B'.
- Memory dependency m guarantees that:
  - Memory writes in a' are made available.
  - Available memory writes, including those from a', are made visible to b'.
Note
Execution and memory dependencies are used to solve data hazards, i.e. to ensure that read and write operations occur in a well-defined order. Write-after-read hazards can be solved with just an execution dependency, but read-after-write and write-after-write hazards need appropriate memory dependencies to be included between them. If an application does not include dependencies to solve these hazards, the results and execution orders of memory accesses are undefined.
7.1.1. Image Layout Transitions
Image subresources can be transitioned from one layout to another as part of a memory dependency (e.g. by using an image memory barrier). When a layout transition is specified in a memory dependency, it happens-after the availability operations in the memory dependency, and happens-before the visibility operations. Image layout transitions may perform read and write accesses on all memory bound to the image subresource range, so applications must ensure that all memory writes have been made available before a layout transition is executed. Available memory is automatically made visible to a layout transition, and writes performed by a layout transition are automatically made available.
Layout transitions always apply to a particular image subresource range, and
specify both an old layout and new layout.
The old layout must either be VK_IMAGE_LAYOUT_UNDEFINED
, or match the
current layout of the image subresource range.
If the old layout matches the current layout of the image subresource range,
the transition preserves the contents of that range.
If the old layout is VK_IMAGE_LAYOUT_UNDEFINED
, the contents of that
range may be discarded.
Note
Applications must ensure that layout transitions happen-after all operations accessing the image with the old layout, and happen-before any operations that will access the image with the new layout. Layout transitions are potentially read/write operations, so not defining appropriate memory dependencies to guarantee this will result in a data race.
Image layout transitions interact with memory aliasing.
Layout transitions that are performed via image memory barriers execute in their entirety in submission order, relative to other image layout transitions submitted to the same queue, including those performed by render passes. In effect there is an implicit execution dependency from each such layout transition to all layout transitions previously submitted to the same queue.
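An informative sketch (not normative) of a layout transition performed via an image memory barrier: a sampled image is transitioned from the transfer destination layout to the shader read-only layout after a copy, making the transfer write available and visible to subsequent fragment shader reads. The commandBuffer and image handles are assumptions.
// Informative example: transition an image after a copy so a fragment
// shader can sample it.
VkImageMemoryBarrier barrier = {0};
barrier.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
barrier.srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT;
barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
barrier.oldLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
barrier.newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.image = image;
barrier.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
barrier.subresourceRange.baseMipLevel = 0;
barrier.subresourceRange.levelCount = 1;
barrier.subresourceRange.baseArrayLayer = 0;
barrier.subresourceRange.layerCount = 1;

vkCmdPipelineBarrier(commandBuffer,
                     VK_PIPELINE_STAGE_TRANSFER_BIT,
                     VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
                     0, 0, NULL, 0, NULL, 1, &barrier);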
7.1.2. Pipeline Stages
The work performed by an action or synchronization command consists of multiple operations, which are performed as a sequence of logically independent steps known as pipeline stages. The exact pipeline stages executed depend on the particular command that is used, and current command buffer state when the command was recorded. Drawing commands, dispatching commands, copy commands, clear commands, and synchronization commands all execute in different sets of pipeline stages. Synchronization commands do not execute in a defined pipeline stage.
Note
Operations performed by synchronization commands (e.g. availability and visibility operations) are not executed by a defined pipeline stage. However other commands can still synchronize with them by using the synchronization scopes to create a dependency chain.
Execution of operations across pipeline stages must adhere to implicit ordering guarantees, particularly including pipeline stage order. Otherwise, execution across pipeline stages may overlap or execute out of order with regards to other stages, unless otherwise enforced by an execution dependency.
Several of the synchronization commands include pipeline stage parameters, restricting the synchronization scopes for that command to just those stages. This allows fine grained control over the exact execution dependencies and accesses performed by action commands. Implementations should use these pipeline stages to avoid unnecessary stalls or cache flushing.
Bits which can be set in a VkPipelineStageFlags mask, specifying stages of execution, are:
// Provided by VK_VERSION_1_0
typedef enum VkPipelineStageFlagBits {
VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT = 0x00000001,
VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT = 0x00000002,
VK_PIPELINE_STAGE_VERTEX_INPUT_BIT = 0x00000004,
VK_PIPELINE_STAGE_VERTEX_SHADER_BIT = 0x00000008,
VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT = 0x00000010,
VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT = 0x00000020,
VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT = 0x00000040,
VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT = 0x00000080,
VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT = 0x00000100,
VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT = 0x00000200,
VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT = 0x00000400,
VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT = 0x00000800,
VK_PIPELINE_STAGE_TRANSFER_BIT = 0x00001000,
VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT = 0x00002000,
VK_PIPELINE_STAGE_HOST_BIT = 0x00004000,
VK_PIPELINE_STAGE_ALL_GRAPHICS_BIT = 0x00008000,
VK_PIPELINE_STAGE_ALL_COMMANDS_BIT = 0x00010000,
} VkPipelineStageFlagBits;
- VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT specifies the stage of the pipeline where VkDrawIndirect*/VkDispatchIndirect*/VkTraceRaysIndirect* data structures are consumed.
- VK_PIPELINE_STAGE_VERTEX_INPUT_BIT specifies the stage of the pipeline where vertex and index buffers are consumed.
- VK_PIPELINE_STAGE_VERTEX_SHADER_BIT specifies the vertex shader stage.
- VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT specifies the tessellation control shader stage.
- VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT specifies the tessellation evaluation shader stage.
- VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT specifies the geometry shader stage.
- VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT specifies the fragment shader stage.
- VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT specifies the stage of the pipeline where early fragment tests (depth and stencil tests before fragment shading) are performed. This stage also includes subpass load operations for framebuffer attachments with a depth/stencil format.
- VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT specifies the stage of the pipeline where late fragment tests (depth and stencil tests after fragment shading) are performed. This stage also includes subpass store operations for framebuffer attachments with a depth/stencil format.
- VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT specifies the stage of the pipeline after blending where the final color values are output from the pipeline. This stage also includes subpass load and store operations and multisample resolve operations for framebuffer attachments with a color format.
- VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT specifies the execution of a compute shader.
- VK_PIPELINE_STAGE_TRANSFER_BIT specifies the following commands:
  - All copy commands, including vkCmdCopyQueryPoolResults
  - All clear commands, with the exception of vkCmdClearAttachments
- VK_PIPELINE_STAGE_HOST_BIT specifies a pseudo-stage indicating execution on the host of reads/writes of device memory. This stage is not invoked by any commands recorded in a command buffer.
- VK_PIPELINE_STAGE_ALL_GRAPHICS_BIT specifies the execution of all graphics pipeline stages, and is equivalent to the logical OR of:
  - VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT
  - VK_PIPELINE_STAGE_VERTEX_INPUT_BIT
  - VK_PIPELINE_STAGE_VERTEX_SHADER_BIT
  - VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT
  - VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT
  - VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT
  - VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT
  - VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT
  - VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT
  - VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT
- VK_PIPELINE_STAGE_ALL_COMMANDS_BIT specifies all operations performed by all commands supported on the queue it is used with.
- VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT is equivalent to VK_PIPELINE_STAGE_ALL_COMMANDS_BIT with VkAccessFlags set to 0 when specified in the second synchronization scope, but specifies no stage of execution when specified in the first scope.
- VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT is equivalent to VK_PIPELINE_STAGE_ALL_COMMANDS_BIT with VkAccessFlags set to 0 when specified in the first synchronization scope, but specifies no stage of execution when specified in the second scope.
// Provided by VK_VERSION_1_0
typedef VkFlags VkPipelineStageFlags;
VkPipelineStageFlags
is a bitmask type for setting a mask of zero or
more VkPipelineStageFlagBits.
If a synchronization command includes a source stage mask, its first synchronization scope only includes execution of the pipeline stages specified in that mask, and its first access scope only includes memory accesses performed by pipeline stages specified in that mask.
If a synchronization command includes a destination stage mask, its second synchronization scope only includes execution of the pipeline stages specified in that mask, and its second access scope only includes memory access performed by pipeline stages specified in that mask.
Note
Including a particular pipeline stage in the first synchronization scope of a command implicitly includes logically earlier pipeline stages in the synchronization scope. Similarly, the second synchronization scope includes logically later pipeline stages. However, note that access scopes are not affected in this way - only the precise stages specified are considered part of each access scope.
Certain pipeline stages are only available on queues that support a particular set of operations. The following table lists, for each pipeline stage flag, which queue capability flag must be supported by the queue. When multiple flags are enumerated in the second column of the table, it means that the pipeline stage is supported on the queue if it supports any of the listed capability flags. For further details on queue capabilities see Physical Device Enumeration and Queues.
Pipeline stage flag | Required queue capability flag
---|---
VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT | None required
VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT | VK_QUEUE_GRAPHICS_BIT or VK_QUEUE_COMPUTE_BIT
VK_PIPELINE_STAGE_VERTEX_INPUT_BIT | VK_QUEUE_GRAPHICS_BIT
VK_PIPELINE_STAGE_VERTEX_SHADER_BIT | VK_QUEUE_GRAPHICS_BIT
VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT | VK_QUEUE_GRAPHICS_BIT
VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT | VK_QUEUE_GRAPHICS_BIT
VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT | VK_QUEUE_GRAPHICS_BIT
VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT | VK_QUEUE_GRAPHICS_BIT
VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT | VK_QUEUE_GRAPHICS_BIT
VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT | VK_QUEUE_GRAPHICS_BIT
VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT | VK_QUEUE_GRAPHICS_BIT
VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT | VK_QUEUE_COMPUTE_BIT
VK_PIPELINE_STAGE_TRANSFER_BIT | VK_QUEUE_GRAPHICS_BIT, VK_QUEUE_COMPUTE_BIT or VK_QUEUE_TRANSFER_BIT
VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT | None required
VK_PIPELINE_STAGE_HOST_BIT | None required
VK_PIPELINE_STAGE_ALL_GRAPHICS_BIT | VK_QUEUE_GRAPHICS_BIT
VK_PIPELINE_STAGE_ALL_COMMANDS_BIT | None required
Pipeline stages that execute as a result of a command logically complete execution in a specific order, such that completion of a logically later pipeline stage must not happen-before completion of a logically earlier stage. This means that including any stage in the source stage mask for a particular synchronization command also implies that any logically earlier stages are included in AS for that command.
Similarly, initiation of a logically earlier pipeline stage must not happen-after initiation of a logically later pipeline stage. Including any given stage in the destination stage mask for a particular synchronization command also implies that any logically later stages are included in BS for that command.
Note
Implementations may not support synchronization at every pipeline stage for every synchronization operation. If a pipeline stage that an implementation does not support synchronization for appears in a source stage mask, it may substitute any logically later stage in its place for the first synchronization scope. If a pipeline stage that an implementation does not support synchronization for appears in a destination stage mask, it may substitute any logically earlier stage in its place for the second synchronization scope. For example, if an implementation is unable to signal an event immediately after vertex shader execution is complete, it may instead signal the event after color attachment output has completed. If an implementation makes such a substitution, it must not affect the semantics of execution or memory dependencies or image and buffer memory barriers.
Graphics pipelines are executable on queues
supporting VK_QUEUE_GRAPHICS_BIT
.
Stages executed by graphics pipelines can only be specified in commands
recorded for queues supporting VK_QUEUE_GRAPHICS_BIT
.
The graphics pipeline executes the following stages, with the logical ordering of the stages matching the order specified here:
- VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT
- VK_PIPELINE_STAGE_VERTEX_INPUT_BIT
- VK_PIPELINE_STAGE_VERTEX_SHADER_BIT
- VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT
- VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT
- VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT
- VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT
- VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT
- VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT
- VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT
For the compute pipeline, the following stages occur in this order:
- VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT
- VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT
For the transfer pipeline, the following stages occur in this order:
- VK_PIPELINE_STAGE_TRANSFER_BIT
For host operations, only one pipeline stage occurs, so no order is guaranteed:
- VK_PIPELINE_STAGE_HOST_BIT
7.1.3. Access Types
Memory in Vulkan can be accessed from within shader invocations and via some fixed-function stages of the pipeline. The access type is a function of the descriptor type used, or how a fixed-function stage accesses memory.
Some synchronization commands take sets of access types as parameters to define the access scopes of a memory dependency. If a synchronization command includes a source access mask, its first access scope only includes accesses via the access types specified in that mask. Similarly, if a synchronization command includes a destination access mask, its second access scope only includes accesses via the access types specified in that mask.
Bits which can be set in the srcAccessMask
and dstAccessMask
members of VkSubpassDependency,
VkMemoryBarrier, VkBufferMemoryBarrier, and
VkImageMemoryBarrier, specifying access behavior, are:
// Provided by VK_VERSION_1_0
typedef enum VkAccessFlagBits {
VK_ACCESS_INDIRECT_COMMAND_READ_BIT = 0x00000001,
VK_ACCESS_INDEX_READ_BIT = 0x00000002,
VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT = 0x00000004,
VK_ACCESS_UNIFORM_READ_BIT = 0x00000008,
VK_ACCESS_INPUT_ATTACHMENT_READ_BIT = 0x00000010,
VK_ACCESS_SHADER_READ_BIT = 0x00000020,
VK_ACCESS_SHADER_WRITE_BIT = 0x00000040,
VK_ACCESS_COLOR_ATTACHMENT_READ_BIT = 0x00000080,
VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT = 0x00000100,
VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT = 0x00000200,
VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT = 0x00000400,
VK_ACCESS_TRANSFER_READ_BIT = 0x00000800,
VK_ACCESS_TRANSFER_WRITE_BIT = 0x00001000,
VK_ACCESS_HOST_READ_BIT = 0x00002000,
VK_ACCESS_HOST_WRITE_BIT = 0x00004000,
VK_ACCESS_MEMORY_READ_BIT = 0x00008000,
VK_ACCESS_MEMORY_WRITE_BIT = 0x00010000,
} VkAccessFlagBits;
- VK_ACCESS_MEMORY_READ_BIT specifies all read accesses. It is always valid in any access mask, and is treated as equivalent to setting all READ access flags that are valid where it is used.
- VK_ACCESS_MEMORY_WRITE_BIT specifies all write accesses. It is always valid in any access mask, and is treated as equivalent to setting all WRITE access flags that are valid where it is used.
- VK_ACCESS_INDIRECT_COMMAND_READ_BIT specifies read access to indirect command data read as part of an indirect drawing or dispatching command. Such access occurs in the VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT pipeline stage.
- VK_ACCESS_INDEX_READ_BIT specifies read access to an index buffer as part of an indexed drawing command, bound by vkCmdBindIndexBuffer. Such access occurs in the VK_PIPELINE_STAGE_VERTEX_INPUT_BIT pipeline stage.
- VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT specifies read access to a vertex buffer as part of a drawing command, bound by vkCmdBindVertexBuffers. Such access occurs in the VK_PIPELINE_STAGE_VERTEX_INPUT_BIT pipeline stage.
- VK_ACCESS_UNIFORM_READ_BIT specifies read access to a uniform buffer in any shader pipeline stage.
- VK_ACCESS_INPUT_ATTACHMENT_READ_BIT specifies read access to an input attachment within a render pass during fragment shading. Such access occurs in the VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT pipeline stage.
- VK_ACCESS_SHADER_READ_BIT specifies read access to a uniform buffer, uniform texel buffer, sampled image, storage buffer, storage texel buffer, or storage image in any shader pipeline stage.
- VK_ACCESS_SHADER_WRITE_BIT specifies write access to a storage buffer, storage texel buffer, or storage image in any shader pipeline stage.
- VK_ACCESS_COLOR_ATTACHMENT_READ_BIT specifies read access to a color attachment, such as via blending, logic operations, or via certain subpass load operations.
- VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT specifies write access to a color or resolve attachment during a render pass or via certain subpass load and store operations. Such access occurs in the VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT pipeline stage.
- VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT specifies read access to a depth/stencil attachment, via depth or stencil operations or via certain subpass load operations. Such access occurs in the VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT or VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT pipeline stages.
- VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT specifies write access to a depth/stencil attachment, via depth or stencil operations or via certain subpass load and store operations. Such access occurs in the VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT or VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT pipeline stages.
- VK_ACCESS_TRANSFER_READ_BIT specifies read access to an image or buffer in a copy operation.
- VK_ACCESS_TRANSFER_WRITE_BIT specifies write access to an image or buffer in a clear or copy operation.
- VK_ACCESS_HOST_READ_BIT specifies read access by a host operation. Accesses of this type are not performed through a resource, but directly on memory. Such access occurs in the VK_PIPELINE_STAGE_HOST_BIT pipeline stage.
- VK_ACCESS_HOST_WRITE_BIT specifies write access by a host operation. Accesses of this type are not performed through a resource, but directly on memory. Such access occurs in the VK_PIPELINE_STAGE_HOST_BIT pipeline stage.
Certain access types are only performed by a subset of pipeline stages. Any synchronization command that takes both stage masks and access masks uses both to define the access scopes - only the specified access types performed by the specified stages are included in the access scope. An application must not specify an access flag in a synchronization command if it does not include a pipeline stage in the corresponding stage mask that is able to perform accesses of that type. The following table lists, for each access flag, which pipeline stages can perform that type of access.
Access flag | Supported pipeline stages
---|---
VK_ACCESS_INDIRECT_COMMAND_READ_BIT | VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT
VK_ACCESS_INDEX_READ_BIT | VK_PIPELINE_STAGE_VERTEX_INPUT_BIT
VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT | VK_PIPELINE_STAGE_VERTEX_INPUT_BIT
VK_ACCESS_UNIFORM_READ_BIT | VK_PIPELINE_STAGE_VERTEX_SHADER_BIT, VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT, VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT, VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT, VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT, or VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT
VK_ACCESS_INPUT_ATTACHMENT_READ_BIT | VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT
VK_ACCESS_SHADER_READ_BIT | VK_PIPELINE_STAGE_VERTEX_SHADER_BIT, VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT, VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT, VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT, VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT, or VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT
VK_ACCESS_SHADER_WRITE_BIT | VK_PIPELINE_STAGE_VERTEX_SHADER_BIT, VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT, VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT, VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT, VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT, or VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT
VK_ACCESS_COLOR_ATTACHMENT_READ_BIT | VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT
VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT | VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT
VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT | VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT or VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT
VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT | VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT or VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT
VK_ACCESS_TRANSFER_READ_BIT | VK_PIPELINE_STAGE_TRANSFER_BIT
VK_ACCESS_TRANSFER_WRITE_BIT | VK_PIPELINE_STAGE_TRANSFER_BIT
VK_ACCESS_HOST_READ_BIT | VK_PIPELINE_STAGE_HOST_BIT
VK_ACCESS_HOST_WRITE_BIT | VK_PIPELINE_STAGE_HOST_BIT
VK_ACCESS_MEMORY_READ_BIT | Any
VK_ACCESS_MEMORY_WRITE_BIT | Any
// Provided by VK_VERSION_1_0
typedef VkFlags VkAccessFlags;
VkAccessFlags
is a bitmask type for setting a mask of zero or more
VkAccessFlagBits.
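An informative sketch (not normative) pairing stage masks with access masks: a global memory barrier that makes compute shader writes available and visible to a subsequent transfer (copy) command recorded later in the same command buffer. The commandBuffer handle is an assumption.
// Informative example: compute write -> transfer read dependency.
VkMemoryBarrier memBarrier = {0};
memBarrier.sType = VK_STRUCTURE_TYPE_MEMORY_BARRIER;
memBarrier.srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT;
memBarrier.dstAccessMask = VK_ACCESS_TRANSFER_READ_BIT;

vkCmdPipelineBarrier(commandBuffer,
                     VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,
                     VK_PIPELINE_STAGE_TRANSFER_BIT,
                     0, 1, &memBarrier, 0, NULL, 0, NULL);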
If a memory object does not have the
VK_MEMORY_PROPERTY_HOST_COHERENT_BIT
property, then
vkFlushMappedMemoryRanges must be called in order to guarantee that
writes to the memory object from the host are made available to the host
domain, where they can be further made available to the device domain via a
domain operation.
Similarly, vkInvalidateMappedMemoryRanges must be called to guarantee
that writes which are available to the host domain are made visible to host
operations.
If the memory object does have the
VK_MEMORY_PROPERTY_HOST_COHERENT_BIT
property flag, writes to the
memory object from the host are automatically made available to the host
domain.
Similarly, writes made available to the host domain are automatically made
visible to the host.
Note
Queue submission commands automatically perform a domain operation from host to device for all writes performed before the command executes, so in most cases an explicit memory barrier is not needed for this case. In the few circumstances where a submit does not occur between the host write and the device read access, writes can be made available by using an explicit memory barrier.
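An informative sketch (not normative) of the flush path for non-coherent memory; memory is assumed to be mapped device memory that lacks the HOST_COHERENT property, with data pointing into the mapped range and uploadSrc/uploadSize being hypothetical application variables.
// Informative example: make host writes to non-coherent mapped memory
// available before submitting work that reads them.
memcpy(data, uploadSrc, uploadSize);

VkMappedMemoryRange range = {0};
range.sType = VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE;
range.memory = memory;
range.offset = 0;
range.size = VK_WHOLE_SIZE;

VkResult result = vkFlushMappedMemoryRanges(device, 1, &range);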
7.1.4. Framebuffer Region Dependencies
Pipeline stages that operate on, or with respect to, the framebuffer are collectively the framebuffer-space pipeline stages. These stages are:
- VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT
- VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT
- VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT
- VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT
For these pipeline stages, an execution or memory dependency from the first set of operations to the second set can either be a single framebuffer-global dependency, or split into multiple framebuffer-local dependencies. A dependency with non-framebuffer-space pipeline stages is neither framebuffer-global nor framebuffer-local.
A framebuffer region is a set of sample (x, y, layer, sample) coordinates that is a subset of the entire framebuffer.
Both synchronization scopes of a framebuffer-local dependency include only the operations performed within corresponding framebuffer regions (as defined below). No ordering guarantees are made between different framebuffer regions for a framebuffer-local dependency.
Both synchronization scopes of a framebuffer-global dependency include operations on all framebuffer-regions.
If the first synchronization scope includes operations on pixels/fragments with N samples and the second synchronization scope includes operations on pixels/fragments with M samples, where N does not equal M, then a framebuffer region containing all samples at a given (x, y, layer) coordinate in the first synchronization scope corresponds to a region containing all samples at the same coordinate in the second synchronization scope. In other words, it is a pixel granularity dependency. If N equals M, then a framebuffer region containing a single (x, y, layer, sample) coordinate in the first synchronization scope corresponds to a region containing the same sample at the same coordinate in the second synchronization scope. In other words, it is a sample granularity dependency.
Note
Since fragment shader invocations are not specified to run in any particular groupings, the size of a framebuffer region is implementation-dependent, not known to the application, and must be assumed to be no larger than specified above.
Note
Practically, the pixel vs sample granularity dependency means that if an input attachment has a different number of samples than the pipeline's rasterizationSamples, then a fragment can access any sample in the input attachment's pixel even if it only uses framebuffer-local dependencies.
If a synchronization command includes a dependencyFlags
parameter, and
specifies the VK_DEPENDENCY_BY_REGION_BIT
flag, then it defines
framebuffer-local dependencies for the framebuffer-space pipeline stages in
that synchronization command, for all framebuffer regions.
If no dependencyFlags
parameter is included, or the
VK_DEPENDENCY_BY_REGION_BIT
flag is not specified, then a
framebuffer-global dependency is specified for those stages.
The VK_DEPENDENCY_BY_REGION_BIT
flag does not affect the dependencies
between non-framebuffer-space pipeline stages, nor does it affect the
dependencies between framebuffer-space and non-framebuffer-space pipeline
stages.
Note
Framebuffer-local dependencies are more efficient for most architectures; particularly tile-based architectures - which can keep framebuffer-regions entirely in on-chip registers and thus avoid external bandwidth across such a dependency. Including a framebuffer-global dependency in your rendering will usually force all implementations to flush data to memory, or to a higher level cache, breaking any potential locality optimizations.
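An informative sketch (not normative) of a framebuffer-local dependency expressed as a subpass dependency: subpass 0 writes a color attachment that subpass 1 then reads as an input attachment in the fragment shader.
// Informative example: per-region dependency between two subpasses.
VkSubpassDependency dependency = {0};
dependency.srcSubpass = 0;
dependency.dstSubpass = 1;
dependency.srcStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
dependency.dstStageMask = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
dependency.srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
dependency.dstAccessMask = VK_ACCESS_INPUT_ATTACHMENT_READ_BIT;
dependency.dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;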
7.2. Implicit Synchronization Guarantees
A small number of implicit ordering guarantees are provided by Vulkan, ensuring that the order in which commands are submitted is meaningful, and avoiding unnecessary complexity in common operations.
Submission order is a fundamental ordering in Vulkan, giving meaning to the order in which action and synchronization commands are recorded and submitted to a single queue. Explicit and implicit ordering guarantees between commands in Vulkan all work on the premise that this ordering is meaningful. This order does not itself define any execution or memory dependencies; synchronization commands and other orderings within the API use this ordering to define their scopes.
Submission order for any given set of commands is based on the order in which they were recorded to command buffers and then submitted. This order is determined as follows:
- The initial order is determined by the order in which vkQueueSubmit commands are executed on the host, for a single queue, from first to last.
- The order in which VkSubmitInfo structures are specified in the pSubmits parameter of vkQueueSubmit, from lowest index to highest.
- The order in which command buffers are specified in the pCommandBuffers member of VkSubmitInfo, from lowest index to highest.
- The order in which commands were recorded to a command buffer on the host, from first to last:
  - For commands recorded outside a render pass, this includes all other commands recorded outside a render pass, including vkCmdBeginRenderPass and vkCmdEndRenderPass commands; it does not directly include commands inside a render pass.
  - For commands recorded inside a render pass, this includes all other commands recorded inside the same subpass, including the vkCmdBeginRenderPass and vkCmdEndRenderPass commands that delimit the same render pass instance; it does not include commands recorded to other subpasses.
State commands do not execute any operations on the device, instead they set the state of the command buffer when they execute on the host, in the order that they are recorded. Action commands consume the current state of the command buffer when they are recorded, and will execute state changes on the device as required to match the recorded state.
Query commands, the order of primitives passing through the graphics pipeline, and image layout transitions as part of an image memory barrier provide additional guarantees based on submission order.
Execution of pipeline stages within a given command also has a loose ordering, dependent only on a single command.
Signal operation order is a fundamental ordering in Vulkan, giving meaning to the order in which semaphore and fence signal operations occur when submitted to a single queue. The signal operation order for queue operations is determined as follows:
- The initial order is determined by the order in which vkQueueSubmit commands are executed on the host, for a single queue, from first to last.
- The order in which VkSubmitInfo structures are specified in the pSubmits parameter of vkQueueSubmit, from lowest index to highest.
- The fence signal operation defined by the fence parameter of a vkQueueSubmit or vkQueueBindSparse command is ordered after all semaphore signal operations defined by that command.
Semaphore signal operations defined by a single VkSubmitInfo, or VkBindSparseInfo structure are unordered with respect to other semaphore signal operations defined within the same structure.
7.3. Fences
Fences are a synchronization primitive that can be used to insert a dependency from a queue to the host. Fences have two states - signaled and unsignaled. A fence can be signaled as part of the execution of a queue submission command. Fences can be unsignaled on the host with vkResetFences. Fences can be waited on by the host with the vkWaitForFences command, and the current state can be queried with vkGetFenceStatus.
Fences are represented by VkFence
handles:
// Provided by VK_VERSION_1_0
VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkFence)
To create a fence, call:
// Provided by VK_VERSION_1_0
VkResult vkCreateFence(
VkDevice device,
const VkFenceCreateInfo* pCreateInfo,
const VkAllocationCallbacks* pAllocator,
VkFence* pFence);
- device is the logical device that creates the fence.
- pCreateInfo is a pointer to a VkFenceCreateInfo structure containing information about how the fence is to be created.
- pAllocator controls host memory allocation as described in the Memory Allocation chapter.
- pFence is a pointer to a handle in which the resulting fence object is returned.
The VkFenceCreateInfo
structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkFenceCreateInfo {
VkStructureType sType;
const void* pNext;
VkFenceCreateFlags flags;
} VkFenceCreateInfo;
- sType is the type of this structure.
- pNext is NULL or a pointer to a structure extending this structure.
- flags is a bitmask of VkFenceCreateFlagBits specifying the initial state and behavior of the fence.
// Provided by VK_VERSION_1_0
typedef enum VkFenceCreateFlagBits {
VK_FENCE_CREATE_SIGNALED_BIT = 0x00000001,
} VkFenceCreateFlagBits;
- VK_FENCE_CREATE_SIGNALED_BIT specifies that the fence object is created in the signaled state. Otherwise, it is created in the unsignaled state.
// Provided by VK_VERSION_1_0
typedef VkFlags VkFenceCreateFlags;
VkFenceCreateFlags
is a bitmask type for setting a mask of zero or
more VkFenceCreateFlagBits.
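An informative sketch (not normative): creating a fence in the signaled state so that the first wait on it returns immediately; device is an assumed handle.
// Informative example: create a pre-signaled fence.
VkFenceCreateInfo fenceInfo = {0};
fenceInfo.sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO;
fenceInfo.flags = VK_FENCE_CREATE_SIGNALED_BIT;

VkFence fence = VK_NULL_HANDLE;
VkResult result = vkCreateFence(device, &fenceInfo, NULL, &fence);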
To destroy a fence, call:
// Provided by VK_VERSION_1_0
void vkDestroyFence(
VkDevice device,
VkFence fence,
const VkAllocationCallbacks* pAllocator);
- device is the logical device that destroys the fence.
- fence is the handle of the fence to destroy.
- pAllocator controls host memory allocation as described in the Memory Allocation chapter.
To query the status of a fence from the host, call:
// Provided by VK_VERSION_1_0
VkResult vkGetFenceStatus(
VkDevice device,
VkFence fence);
- device is the logical device that owns the fence.
- fence is the handle of the fence to query.
Upon success, vkGetFenceStatus
returns the status of the fence object,
with the following return codes:
Status | Meaning
---|---
VK_SUCCESS | The fence specified by fence is signaled.
VK_NOT_READY | The fence specified by fence is unsignaled.
VK_ERROR_DEVICE_LOST | The device has been lost. See Lost Device.
If a queue submission command is pending execution, then the value returned by this command may immediately be out of date.
If the device has been lost (see Lost Device),
vkGetFenceStatus
may return any of the above status codes.
If the device has been lost and vkGetFenceStatus
is called repeatedly,
it will eventually return either VK_SUCCESS
or
VK_ERROR_DEVICE_LOST
.
To set the state of fences to unsignaled from the host, call:
// Provided by VK_VERSION_1_0
VkResult vkResetFences(
VkDevice device,
uint32_t fenceCount,
const VkFence* pFences);
- device is the logical device that owns the fences.
- fenceCount is the number of fences to reset.
- pFences is a pointer to an array of fence handles to reset.
When vkResetFences is executed on the host, it defines a fence unsignal operation for each fence, which resets the fence to the unsignaled state.
If any member of pFences
is already in the unsignaled state when
vkResetFences is executed, then vkResetFences has no effect on
that fence.
When a fence is submitted to a queue as part of a queue submission command, it defines a memory dependency on the batches that were submitted as part of that command, and defines a fence signal operation which sets the fence to the signaled state.
The first synchronization scope includes every batch submitted in the same queue submission command. Fence signal operations that are defined by vkQueueSubmit additionally include in the first synchronization scope all commands that occur earlier in submission order. Fence signal operations that are defined by vkQueueSubmit or vkQueueBindSparse additionally include in the first synchronization scope any semaphore and fence signal operations that occur earlier in signal operation order.
The second synchronization scope only includes the fence signal operation.
The first access scope includes all memory access performed by the device.
The second access scope is empty.
To wait for one or more fences to enter the signaled state on the host, call:
// Provided by VK_VERSION_1_0
VkResult vkWaitForFences(
VkDevice device,
uint32_t fenceCount,
const VkFence* pFences,
VkBool32 waitAll,
uint64_t timeout);
- device is the logical device that owns the fences.
- fenceCount is the number of fences to wait on.
- pFences is a pointer to an array of fenceCount fence handles.
- waitAll is the condition that must be satisfied to successfully unblock the wait. If waitAll is VK_TRUE, then the condition is that all fences in pFences are signaled. Otherwise, the condition is that at least one fence in pFences is signaled.
- timeout is the timeout period in units of nanoseconds. timeout is adjusted to the closest value allowed by the implementation-dependent timeout accuracy, which may be substantially longer than one nanosecond, and may be longer than the requested period.
If the condition is satisfied when vkWaitForFences
is called, then
vkWaitForFences
returns immediately.
If the condition is not satisfied at the time vkWaitForFences
is
called, then vkWaitForFences
will block and wait until the condition
is satisfied or the timeout
has expired, whichever is sooner.
If timeout
is zero, then vkWaitForFences
does not wait, but
simply returns the current state of the fences.
VK_TIMEOUT
will be returned in this case if the condition is not
satisfied, even though no actual wait was performed.
If the condition is satisfied before the timeout
has expired,
vkWaitForFences
returns VK_SUCCESS
.
Otherwise, vkWaitForFences
returns VK_TIMEOUT
after the
timeout
has expired.
If device loss occurs (see Lost Device) before
the timeout has expired, vkWaitForFences
must return in finite time
with either VK_SUCCESS
or VK_ERROR_DEVICE_LOST
.
Note
While we guarantee that vkWaitForFences must return in finite time, no guarantees are made that it returns immediately upon device loss. However, the client can reasonably expect that the delay will be on the order of seconds and that calling vkWaitForFences will not result in a permanently (or seemingly permanently) dead process.
An execution dependency is defined by waiting for a fence to become signaled, either via vkWaitForFences or by polling on vkGetFenceStatus.
The first synchronization scope includes only the fence signal operation.
The second synchronization scope includes the host operations of vkWaitForFences or vkGetFenceStatus indicating that the fence has become signaled.
Note
Signaling a fence and waiting on the host does not guarantee that the results of memory accesses will be visible to the host, as the access scope of a memory dependency defined by a fence only includes device access. A memory barrier or other memory dependency must be used to guarantee this. See the description of host access types for more information.
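An informative sketch (not normative) of a common per-frame pattern: wait for the fence signaled by the previous submission that used this frame's resources, reset it, and reuse it in the next vkQueueSubmit. The device and frameFence handles are assumptions.
// Informative example: wait for, then reset, a per-frame fence.
VkResult result = vkWaitForFences(device, 1, &frameFence, VK_TRUE, UINT64_MAX);
if (result == VK_SUCCESS) {
    vkResetFences(device, 1, &frameFence);
    // ... re-record and submit, passing 'frameFence' to vkQueueSubmit ...
}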
7.4. Semaphores
Semaphores are a synchronization primitive that can be used to insert a dependency between queue operations. Semaphores have two states - signaled and unsignaled. A semaphore can be signaled after execution of a queue operation is completed, and a queue operation can wait for a semaphore to become signaled before it begins execution.
Semaphores are represented by VkSemaphore
handles:
// Provided by VK_VERSION_1_0
VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkSemaphore)
To create a semaphore, call:
// Provided by VK_VERSION_1_0
VkResult vkCreateSemaphore(
VkDevice device,
const VkSemaphoreCreateInfo* pCreateInfo,
const VkAllocationCallbacks* pAllocator,
VkSemaphore* pSemaphore);
- device is the logical device that creates the semaphore.
- pCreateInfo is a pointer to a VkSemaphoreCreateInfo structure containing information about how the semaphore is to be created.
- pAllocator controls host memory allocation as described in the Memory Allocation chapter.
- pSemaphore is a pointer to a handle in which the resulting semaphore object is returned.
This command creates a binary semaphore that has a boolean payload indicating whether the semaphore is currently signaled or unsignaled. When created, the semaphore is in the unsignaled state.
The VkSemaphoreCreateInfo
structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkSemaphoreCreateInfo {
VkStructureType sType;
const void* pNext;
VkSemaphoreCreateFlags flags;
} VkSemaphoreCreateInfo;
- sType is the type of this structure.
- pNext is NULL or a pointer to a structure extending this structure.
- flags is reserved for future use.
// Provided by VK_VERSION_1_0
typedef VkFlags VkSemaphoreCreateFlags;
VkSemaphoreCreateFlags
is a bitmask type for setting a mask, but is
currently reserved for future use.
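As an informal example, creating a binary semaphore could be written as follows, assuming device is a valid VkDevice:
// Informal example - 'device' is assumed to be a valid VkDevice
VkSemaphoreCreateInfo semaphoreInfo = {
    .sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO,
    .pNext = NULL,
    .flags = 0, // reserved for future use
};
VkSemaphore semaphore = VK_NULL_HANDLE;
VkResult result = vkCreateSemaphore(device, &semaphoreInfo, NULL, &semaphore);
// On VK_SUCCESS, the semaphore is in the unsignaled state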
To destroy a semaphore, call:
// Provided by VK_VERSION_1_0
void vkDestroySemaphore(
VkDevice device,
VkSemaphore semaphore,
const VkAllocationCallbacks* pAllocator);
- device is the logical device that destroys the semaphore.
- semaphore is the handle of the semaphore to destroy.
- pAllocator controls host memory allocation as described in the Memory Allocation chapter.
7.4.1. Semaphore Signaling
When a batch is submitted to a queue via a queue submission, and it includes semaphores to be signaled, it defines a memory dependency on the batch, and defines semaphore signal operations which set the semaphores to the signaled state.
The first synchronization scope includes every command submitted in the same batch. Semaphore signal operations that are defined by vkQueueSubmit additionally include all commands that occur earlier in submission order. Semaphore signal operations that are defined by vkQueueSubmit or vkQueueBindSparse additionally include in the first synchronization scope any semaphore and fence signal operations that occur earlier in signal operation order.
The second synchronization scope includes only the semaphore signal operation.
The first access scope includes all memory access performed by the device.
The second access scope is empty.
7.4.2. Semaphore Waiting
When a batch is submitted to a queue via a queue submission, and it includes semaphores to be waited on, it defines a memory dependency between prior semaphore signal operations and the batch, and defines semaphore wait operations.
Such semaphore wait operations set the semaphores to the unsignaled state.
The first synchronization scope includes all semaphore signal operations that operate on semaphores waited on in the same batch, and that happen-before the wait completes.
The second synchronization scope
includes every command submitted in the same batch.
In the case of vkQueueSubmit, the second synchronization scope is
limited to operations on the pipeline stages determined by the
destination stage mask specified
by the corresponding element of pWaitDstStageMask
.
Also, in the case of
vkQueueSubmit, the second synchronization scope additionally includes
all commands that occur later in
submission order.
The first access scope is empty.
The second access scope includes all memory access performed by the device.
The semaphore wait operation happens-after the first set of operations in the execution dependency, and happens-before the second set of operations in the execution dependency.
Note
Unlike fences or events, the act of waiting for a binary semaphore also unsignals that semaphore. Applications must ensure that between two such wait operations, the semaphore is signaled again, with execution dependencies used to ensure these occur in order. Binary semaphore waits and signals should thus occur in discrete 1:1 pairs.
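As an informal example, a queue submission that waits on one binary semaphore before the color attachment output stage and signals another when the batch completes could be written as follows, assuming queue, commandBuffer, acquireSemaphore and renderSemaphore are valid handles created earlier by the application:
// Informal example - all handles are assumed to have been created elsewhere
VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
VkSubmitInfo submitInfo = {
    .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
    .waitSemaphoreCount = 1,
    .pWaitSemaphores = &acquireSemaphore,  // the wait operation unsignals this semaphore
    .pWaitDstStageMask = &waitStage,
    .commandBufferCount = 1,
    .pCommandBuffers = &commandBuffer,
    .signalSemaphoreCount = 1,
    .pSignalSemaphores = &renderSemaphore, // signaled when the batch completes
};
vkQueueSubmit(queue, 1, &submitInfo, VK_NULL_HANDLE);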
7.4.3. Semaphore State Requirements For Wait Operations
Before waiting on a semaphore, the application must ensure the semaphore is in a valid state for a wait operation. Specifically, when a semaphore wait operation is submitted to a queue:
- A binary semaphore must be signaled, or have an associated semaphore signal operation that is pending execution.
- Any semaphore signal operations on which the pending binary semaphore signal operation depends must also be completed or pending execution.
- There must be no other queue waiting on the same binary semaphore when the operation executes.
7.5. Events
Events are a synchronization primitive that can be used to insert a fine-grained dependency between commands submitted to the same queue, or between the host and a queue. Events must not be used to insert a dependency between commands submitted to different queues. Events have two states - signaled and unsignaled. An application can signal or unsignal an event either on the host or on the device. A device can be made to wait for an event to become signaled before executing further operations. No command exists to wait for an event to become signaled on the host, but the current state of an event can be queried.
Events are represented by VkEvent
handles:
// Provided by VK_VERSION_1_0
VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkEvent)
To create an event, call:
// Provided by VK_VERSION_1_0
VkResult vkCreateEvent(
VkDevice device,
const VkEventCreateInfo* pCreateInfo,
const VkAllocationCallbacks* pAllocator,
VkEvent* pEvent);
- device is the logical device that creates the event.
- pCreateInfo is a pointer to a VkEventCreateInfo structure containing information about how the event is to be created.
- pAllocator controls host memory allocation as described in the Memory Allocation chapter.
- pEvent is a pointer to a handle in which the resulting event object is returned.
When created, the event object is in the unsignaled state.
The VkEventCreateInfo
structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkEventCreateInfo {
VkStructureType sType;
const void* pNext;
VkEventCreateFlags flags;
} VkEventCreateInfo;
- sType is the type of this structure.
- pNext is NULL or a pointer to a structure extending this structure.
- flags is a bitmask of VkEventCreateFlagBits defining additional creation parameters.
// Provided by VK_VERSION_1_0
typedef enum VkEventCreateFlagBits {
} VkEventCreateFlagBits;
All values for this enum are defined by extensions.
// Provided by VK_VERSION_1_0
typedef VkFlags VkEventCreateFlags;
VkEventCreateFlags
is a bitmask type for setting a mask of
VkEventCreateFlagBits.
To destroy an event, call:
// Provided by VK_VERSION_1_0
void vkDestroyEvent(
VkDevice device,
VkEvent event,
const VkAllocationCallbacks* pAllocator);
- device is the logical device that destroys the event.
- event is the handle of the event to destroy.
- pAllocator controls host memory allocation as described in the Memory Allocation chapter.
To query the state of an event from the host, call:
// Provided by VK_VERSION_1_0
VkResult vkGetEventStatus(
VkDevice device,
VkEvent event);
- device is the logical device that owns the event.
- event is the handle of the event to query.
Upon success, vkGetEventStatus
returns the state of the event object
with the following return codes:
Status | Meaning
---|---
VK_EVENT_SET | The event specified by event is signaled.
VK_EVENT_RESET | The event specified by event is unsignaled.
If a vkCmdSetEvent
or vkCmdResetEvent
command is in a command
buffer that is in the pending state, then the
value returned by this command may immediately be out of date.
The state of an event can be updated by the host.
The state of the event is immediately changed, and subsequent calls to
vkGetEventStatus
will return the new state.
If an event is already in the requested state, then updating it to the same
state has no effect.
To set the state of an event to signaled from the host, call:
// Provided by VK_VERSION_1_0
VkResult vkSetEvent(
VkDevice device,
VkEvent event);
- device is the logical device that owns the event.
- event is the event to set.
When vkSetEvent is executed on the host, it defines an event signal operation which sets the event to the signaled state.
If event
is already in the signaled state when vkSetEvent is
executed, then vkSetEvent has no effect, and no event signal operation
occurs.
To set the state of an event to unsignaled from the host, call:
// Provided by VK_VERSION_1_0
VkResult vkResetEvent(
VkDevice device,
VkEvent event);
- device is the logical device that owns the event.
- event is the event to reset.
When vkResetEvent is executed on the host, it defines an event unsignal operation which resets the event to the unsignaled state.
If event
is already in the unsignaled state when vkResetEvent is
executed, then vkResetEvent has no effect, and no event unsignal
operation occurs.
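As an informal example, the host-side event commands can be combined as follows, assuming device and event are valid handles:
// Informal example - 'device' and 'event' are assumed to be valid handles
vkSetEvent(device, event);                          // event becomes signaled
VkResult status = vkGetEventStatus(device, event);  // returns VK_EVENT_SET
vkResetEvent(device, event);                        // event becomes unsignaled
status = vkGetEventStatus(device, event);           // returns VK_EVENT_RESET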
The state of an event can also be updated on the device by commands inserted in command buffers.
To set the state of an event to signaled from a device, call:
// Provided by VK_VERSION_1_0
void vkCmdSetEvent(
VkCommandBuffer commandBuffer,
VkEvent event,
VkPipelineStageFlags stageMask);
- commandBuffer is the command buffer into which the command is recorded.
- event is the event that will be signaled.
- stageMask specifies the source stage mask used to determine the first synchronization scope.
When vkCmdSetEvent is submitted to a queue, it defines an execution dependency on commands that were submitted before it, and defines an event signal operation which sets the event to the signaled state.
The first synchronization scope
includes all commands that occur earlier in
submission order.
The synchronization scope is limited to operations on the pipeline stages
determined by the source stage
mask specified by stageMask
.
The second synchronization scope includes only the event signal operation.
If event
is already in the signaled state when vkCmdSetEvent is
executed on the device, then vkCmdSetEvent has no effect, no event
signal operation occurs, and no execution dependency is generated.
To set the state of an event to unsignaled from a device, call:
// Provided by VK_VERSION_1_0
void vkCmdResetEvent(
VkCommandBuffer commandBuffer,
VkEvent event,
VkPipelineStageFlags stageMask);
- commandBuffer is the command buffer into which the command is recorded.
- event is the event that will be unsignaled.
- stageMask is a bitmask of VkPipelineStageFlagBits specifying the source stage mask used to determine when the event is unsignaled.
When vkCmdResetEvent is submitted to a queue, it defines an execution dependency on commands that were submitted before it, and defines an event unsignal operation which resets the event to the unsignaled state.
The first synchronization scope
includes all commands that occur earlier in
submission order.
The synchronization scope is limited to operations on the pipeline stages
determined by the source stage
mask specified by stageMask
.
The second synchronization scope includes only the event unsignal operation.
If event
is already in the unsignaled state when vkCmdResetEvent
is executed on the device, then vkCmdResetEvent has no effect, no
event unsignal operation occurs, and no execution dependency is generated.
To wait for one or more events to enter the signaled state on a device, call:
// Provided by VK_VERSION_1_0
void vkCmdWaitEvents(
VkCommandBuffer commandBuffer,
uint32_t eventCount,
const VkEvent* pEvents,
VkPipelineStageFlags srcStageMask,
VkPipelineStageFlags dstStageMask,
uint32_t memoryBarrierCount,
const VkMemoryBarrier* pMemoryBarriers,
uint32_t bufferMemoryBarrierCount,
const VkBufferMemoryBarrier* pBufferMemoryBarriers,
uint32_t imageMemoryBarrierCount,
const VkImageMemoryBarrier* pImageMemoryBarriers);
- commandBuffer is the command buffer into which the command is recorded.
- eventCount is the length of the pEvents array.
- pEvents is a pointer to an array of event object handles to wait on.
- srcStageMask is a bitmask of VkPipelineStageFlagBits specifying the source stage mask.
- dstStageMask is a bitmask of VkPipelineStageFlagBits specifying the destination stage mask.
- memoryBarrierCount is the length of the pMemoryBarriers array.
- pMemoryBarriers is a pointer to an array of VkMemoryBarrier structures.
- bufferMemoryBarrierCount is the length of the pBufferMemoryBarriers array.
- pBufferMemoryBarriers is a pointer to an array of VkBufferMemoryBarrier structures.
- imageMemoryBarrierCount is the length of the pImageMemoryBarriers array.
- pImageMemoryBarriers is a pointer to an array of VkImageMemoryBarrier structures.
When vkCmdWaitEvents
is submitted to a queue, it defines a memory
dependency between prior event signal operations on the same queue or the
host, and subsequent commands.
vkCmdWaitEvents
must not be used to wait on event signal operations
occurring on other queues.
The first synchronization scope only includes event signal operations that
operate on members of pEvents
, and the operations that happened-before
the event signal operations.
Event signal operations performed by vkCmdSetEvent that occur earlier
in submission order are included in the
first synchronization scope, if the logically latest pipeline stage in their stageMask
parameter is
logically earlier than or equal
to the logically latest pipeline
stage in srcStageMask
.
Event signal operations performed by vkSetEvent are only included in
the first synchronization scope if VK_PIPELINE_STAGE_HOST_BIT
is
included in srcStageMask
.
The second synchronization scope
includes all commands that occur later in
submission order.
The second synchronization scope is limited to operations on the pipeline
stages determined by the destination stage mask specified by dstStageMask
.
The first access scope is
limited to accesses in the pipeline stages determined by the
source stage mask specified by
srcStageMask
.
Within that, the first access scope only includes the first access scopes
defined by elements of the pMemoryBarriers
,
pBufferMemoryBarriers
and pImageMemoryBarriers
arrays, which
each define a set of memory barriers.
If no memory barriers are specified, then the first access scope includes no
accesses.
The second access scope is
limited to accesses in the pipeline stages determined by the
destination stage mask specified
by dstStageMask
.
Within that, the second access scope only includes the second access scopes
defined by elements of the pMemoryBarriers
,
pBufferMemoryBarriers
and pImageMemoryBarriers
arrays, which
each define a set of memory barriers.
If no memory barriers are specified, then the second access scope includes
no accesses.
Note
vkCmdWaitEvents is used with vkCmdSetEvent to define a memory dependency between two sets of action commands, roughly in the same way as pipeline barriers, but split into two commands such that work between the two may execute unhindered. Unlike vkCmdPipelineBarrier, a queue family ownership transfer cannot be performed using vkCmdWaitEvents.
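As an informal example, a split barrier that makes compute shader writes visible to a later transfer read, while allowing unrelated work recorded between the two commands to execute unhindered, could be written as follows, assuming commandBuffer and event are valid handles:
// Informal example - 'commandBuffer' and 'event' are assumed to be valid handles
vkCmdDispatch(commandBuffer, 64, 1, 1);               // produces results read later
vkCmdSetEvent(commandBuffer, event,
              VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT);  // signal after compute work

// ... unrelated commands may be recorded here ...

VkMemoryBarrier memoryBarrier = {
    .sType = VK_STRUCTURE_TYPE_MEMORY_BARRIER,
    .srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT,
    .dstAccessMask = VK_ACCESS_TRANSFER_READ_BIT,
};
vkCmdWaitEvents(commandBuffer, 1, &event,
                VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,  // srcStageMask
                VK_PIPELINE_STAGE_TRANSFER_BIT,        // dstStageMask
                1, &memoryBarrier,
                0, NULL,
                0, NULL);
// transfer commands recorded after this point can read the dispatch results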
Note
Applications should be careful to avoid race conditions when using events.
There is no direct ordering guarantee between vkCmdWaitEvents and
vkCmdResetEvent, or vkCmdSetEvent.
Another execution dependency (e.g. a pipeline barrier or semaphore) must be included between them if such an ordering guarantee is required.
7.6. Pipeline Barriers
To record a pipeline barrier, call:
// Provided by VK_VERSION_1_0
void vkCmdPipelineBarrier(
VkCommandBuffer commandBuffer,
VkPipelineStageFlags srcStageMask,
VkPipelineStageFlags dstStageMask,
VkDependencyFlags dependencyFlags,
uint32_t memoryBarrierCount,
const VkMemoryBarrier* pMemoryBarriers,
uint32_t bufferMemoryBarrierCount,
const VkBufferMemoryBarrier* pBufferMemoryBarriers,
uint32_t imageMemoryBarrierCount,
const VkImageMemoryBarrier* pImageMemoryBarriers);
- commandBuffer is the command buffer into which the command is recorded.
- srcStageMask is a bitmask of VkPipelineStageFlagBits specifying the source stages.
- dstStageMask is a bitmask of VkPipelineStageFlagBits specifying the destination stages.
- dependencyFlags is a bitmask of VkDependencyFlagBits specifying how execution and memory dependencies are formed.
- memoryBarrierCount is the length of the pMemoryBarriers array.
- pMemoryBarriers is a pointer to an array of VkMemoryBarrier structures.
- bufferMemoryBarrierCount is the length of the pBufferMemoryBarriers array.
- pBufferMemoryBarriers is a pointer to an array of VkBufferMemoryBarrier structures.
- imageMemoryBarrierCount is the length of the pImageMemoryBarriers array.
- pImageMemoryBarriers is a pointer to an array of VkImageMemoryBarrier structures.
When vkCmdPipelineBarrier is submitted to a queue, it defines a memory dependency between commands that were submitted before it, and those submitted after it.
If vkCmdPipelineBarrier was recorded outside a render pass instance,
the first synchronization scope
includes all commands that occur earlier in
submission order.
If vkCmdPipelineBarrier was recorded inside a render pass instance,
the first synchronization scope includes only commands that occur earlier in
submission order within the same
subpass.
In either case, the first synchronization scope is limited to operations on
the pipeline stages determined by the
source stage mask specified by
srcStageMask
.
If vkCmdPipelineBarrier was recorded outside a render pass instance,
the second synchronization scope
includes all commands that occur later in
submission order.
If vkCmdPipelineBarrier was recorded inside a render pass instance,
the second synchronization scope includes only commands that occur later in
submission order within the same
subpass.
In either case, the second synchronization scope is limited to operations on
the pipeline stages determined by the
destination stage mask specified
by dstStageMask
.
The first access scope is
limited to accesses in the pipeline stages determined by the
source stage mask specified by
srcStageMask
.
Within that, the first access scope only includes the first access scopes
defined by elements of the pMemoryBarriers
,
pBufferMemoryBarriers
and pImageMemoryBarriers
arrays, which
each define a set of memory barriers.
If no memory barriers are specified, then the first access scope includes no
accesses.
The second access scope is
limited to accesses in the pipeline stages determined by the
destination stage mask specified
by dstStageMask
.
Within that, the second access scope only includes the second access scopes
defined by elements of the pMemoryBarriers
,
pBufferMemoryBarriers
and pImageMemoryBarriers
arrays, which
each define a set of memory barriers.
If no memory barriers are specified, then the second access scope includes
no accesses.
If dependencyFlags
includes VK_DEPENDENCY_BY_REGION_BIT
, then
any dependency between framebuffer-space pipeline stages is
framebuffer-local - otherwise it is
framebuffer-global.
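As an informal example, a pipeline barrier with a single global memory barrier that makes compute shader writes available and visible to a subsequent indirect draw could be recorded as follows, assuming commandBuffer is in the recording state outside a render pass instance:
// Informal example - 'commandBuffer' is assumed to be recording, outside a render pass
VkMemoryBarrier memoryBarrier = {
    .sType = VK_STRUCTURE_TYPE_MEMORY_BARRIER,
    .srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT,
    .dstAccessMask = VK_ACCESS_INDIRECT_COMMAND_READ_BIT,
};
vkCmdPipelineBarrier(commandBuffer,
                     VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,  // srcStageMask
                     VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT,   // dstStageMask
                     0,                                     // dependencyFlags
                     1, &memoryBarrier,
                     0, NULL,
                     0, NULL);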
Bits which can be set in vkCmdPipelineBarrier
::dependencyFlags
,
specifying how execution and memory dependencies are formed, are:
// Provided by VK_VERSION_1_0
typedef enum VkDependencyFlagBits {
VK_DEPENDENCY_BY_REGION_BIT = 0x00000001,
} VkDependencyFlagBits;
- VK_DEPENDENCY_BY_REGION_BIT specifies that dependencies will be framebuffer-local.
// Provided by VK_VERSION_1_0
typedef VkFlags VkDependencyFlags;
VkDependencyFlags
is a bitmask type for setting a mask of zero or more
VkDependencyFlagBits.
7.6.1. Subpass Self-dependency
If vkCmdPipelineBarrier
is called inside a render pass instance, the following restrictions apply.
For a given subpass to allow a pipeline barrier, the render pass must
declare a self-dependency from that subpass to itself.
That is, there must exist a subpass dependency with srcSubpass
and
dstSubpass
both equal to that subpass index.
More than one self-dependency can be declared for each subpass.
Self-dependencies must only include pipeline stage bits that are graphics
stages.
If any of the stages in srcStageMask
are
framebuffer-space stages,
dstStageMask
must only contain
framebuffer-space stages.
This means that pseudo-stages like VK_PIPELINE_STAGE_ALL_COMMANDS_BIT
which include the execution of both framebuffer-space stages and
non-framebuffer-space stages must not be used.
If the source and destination stage masks both include framebuffer-space
stages, then dependencyFlags
must include
VK_DEPENDENCY_BY_REGION_BIT
.
Each of the synchronization scopes and access scopes of a vkCmdPipelineBarrier command inside a render pass instance must be a subset of the scopes of one of the self-dependencies for the current subpass.
If the self-dependency has VK_DEPENDENCY_BY_REGION_BIT
set, then so must the pipeline barrier.
Pipeline barriers within a render pass instance must not include buffer
memory barriers.
Image memory barriers must only specify image subresources that are used as
attachments within the subpass, and must not define an
image layout transition or
queue family ownership transfer.
7.7. Memory Barriers
Memory barriers are used to explicitly control access to buffer and image subresource ranges. Memory barriers are used to transfer ownership between queue families, change image layouts, and define availability and visibility operations. They explicitly define the access types and buffer and image subresource ranges that are included in the access scopes of a memory dependency that is created by a synchronization command that includes them.
7.7.1. Global Memory Barriers
Global memory barriers apply to memory accesses involving all memory objects that exist at the time of its execution.
The VkMemoryBarrier
structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkMemoryBarrier {
VkStructureType sType;
const void* pNext;
VkAccessFlags srcAccessMask;
VkAccessFlags dstAccessMask;
} VkMemoryBarrier;
- sType is the type of this structure.
- pNext is NULL or a pointer to a structure extending this structure.
- srcAccessMask is a bitmask of VkAccessFlagBits specifying a source access mask.
- dstAccessMask is a bitmask of VkAccessFlagBits specifying a destination access mask.
The first access scope is
limited to access types in the source access
mask specified by srcAccessMask
.
The second access scope is
limited to access types in the destination
access mask specified by dstAccessMask
.
7.7.2. Buffer Memory Barriers
Buffer memory barriers only apply to memory accesses involving a specific buffer range. That is, a memory dependency formed from a buffer memory barrier is scoped to access via the specified buffer range. Buffer memory barriers can also be used to define a queue family ownership transfer for the specified buffer range.
The VkBufferMemoryBarrier
structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkBufferMemoryBarrier {
VkStructureType sType;
const void* pNext;
VkAccessFlags srcAccessMask;
VkAccessFlags dstAccessMask;
uint32_t srcQueueFamilyIndex;
uint32_t dstQueueFamilyIndex;
VkBuffer buffer;
VkDeviceSize offset;
VkDeviceSize size;
} VkBufferMemoryBarrier;
- sType is the type of this structure.
- pNext is NULL or a pointer to a structure extending this structure.
- srcAccessMask is a bitmask of VkAccessFlagBits specifying a source access mask.
- dstAccessMask is a bitmask of VkAccessFlagBits specifying a destination access mask.
- srcQueueFamilyIndex is the source queue family for a queue family ownership transfer.
- dstQueueFamilyIndex is the destination queue family for a queue family ownership transfer.
- buffer is a handle to the buffer whose backing memory is affected by the barrier.
- offset is an offset in bytes into the backing memory for buffer; this is relative to the base offset as bound to the buffer (see vkBindBufferMemory).
- size is a size in bytes of the affected area of backing memory for buffer, or VK_WHOLE_SIZE to use the range from offset to the end of the buffer.
The first access scope is
limited to access to memory through the specified buffer range, via access
types in the source access mask specified
by srcAccessMask
.
If srcAccessMask
includes VK_ACCESS_HOST_WRITE_BIT
, memory
writes performed by that access type are also made visible, as that access
type is not performed through a resource.
The second access scope is
limited to access to memory through the specified buffer range, via access
types in the destination access mask
specified by dstAccessMask
.
If dstAccessMask
includes VK_ACCESS_HOST_WRITE_BIT
or
VK_ACCESS_HOST_READ_BIT
, available memory writes are also made visible
to accesses of those types, as those access types are not performed through
a resource.
If srcQueueFamilyIndex
is not equal to dstQueueFamilyIndex
, and
srcQueueFamilyIndex
is equal to the current queue family, then the
memory barrier defines a queue
family release operation for the specified buffer range, and the second
access scope includes no access, as if dstAccessMask
was 0
.
If dstQueueFamilyIndex
is not equal to srcQueueFamilyIndex
, and
dstQueueFamilyIndex
is equal to the current queue family, then the
memory barrier defines a queue
family acquire operation for the specified buffer range, and the first
access scope includes no access, as if srcAccessMask
was 0
.
VK_WHOLE_SIZE
is a special value indicating that the entire remaining
length of a buffer following a given offset
should be used.
It can be specified for VkBufferMemoryBarrier::size
and other
structures.
#define VK_WHOLE_SIZE (~0ULL)
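As an informal example, a buffer memory barrier that makes transfer writes to a buffer visible to shader reads in the compute stage, without a queue family ownership transfer, could be written as follows, assuming commandBuffer and buffer are valid handles:
// Informal example - 'commandBuffer' and 'buffer' are assumed to be valid handles
VkBufferMemoryBarrier bufferBarrier = {
    .sType = VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER,
    .srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
    .dstAccessMask = VK_ACCESS_SHADER_READ_BIT,
    .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED, // no ownership transfer
    .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .buffer = buffer,
    .offset = 0,
    .size = VK_WHOLE_SIZE, // the whole buffer range
};
vkCmdPipelineBarrier(commandBuffer,
                     VK_PIPELINE_STAGE_TRANSFER_BIT,
                     VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,
                     0,
                     0, NULL,
                     1, &bufferBarrier,
                     0, NULL);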
7.7.3. Image Memory Barriers
Image memory barriers only apply to memory accesses involving a specific image subresource range. That is, a memory dependency formed from an image memory barrier is scoped to access via the specified image subresource range. Image memory barriers can also be used to define image layout transitions or a queue family ownership transfer for the specified image subresource range.
The VkImageMemoryBarrier
structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkImageMemoryBarrier {
VkStructureType sType;
const void* pNext;
VkAccessFlags srcAccessMask;
VkAccessFlags dstAccessMask;
VkImageLayout oldLayout;
VkImageLayout newLayout;
uint32_t srcQueueFamilyIndex;
uint32_t dstQueueFamilyIndex;
VkImage image;
VkImageSubresourceRange subresourceRange;
} VkImageMemoryBarrier;
- sType is the type of this structure.
- pNext is NULL or a pointer to a structure extending this structure.
- srcAccessMask is a bitmask of VkAccessFlagBits specifying a source access mask.
- dstAccessMask is a bitmask of VkAccessFlagBits specifying a destination access mask.
- oldLayout is the old layout in an image layout transition.
- newLayout is the new layout in an image layout transition.
- srcQueueFamilyIndex is the source queue family for a queue family ownership transfer.
- dstQueueFamilyIndex is the destination queue family for a queue family ownership transfer.
- image is a handle to the image affected by this barrier.
- subresourceRange describes the image subresource range within image that is affected by this barrier.
The first access scope is
limited to access to memory through the specified image subresource range,
via access types in the source access mask
specified by srcAccessMask
.
If srcAccessMask
includes VK_ACCESS_HOST_WRITE_BIT
, memory
writes performed by that access type are also made visible, as that access
type is not performed through a resource.
The second access scope is
limited to access to memory through the specified image subresource range,
via access types in the destination access
mask specified by dstAccessMask
.
If dstAccessMask
includes VK_ACCESS_HOST_WRITE_BIT
or
VK_ACCESS_HOST_READ_BIT
, available memory writes are also made visible
to accesses of those types, as those access types are not performed through
a resource.
If srcQueueFamilyIndex
is not equal to dstQueueFamilyIndex
, and
srcQueueFamilyIndex
is equal to the current queue family, then the
memory barrier defines a queue
family release operation for the specified image subresource range, and
the second access scope includes no access, as if dstAccessMask
was
0
.
If dstQueueFamilyIndex
is not equal to srcQueueFamilyIndex
, and
dstQueueFamilyIndex
is equal to the current queue family, then the
memory barrier defines a queue
family acquire operation for the specified image subresource range, and
the first access scope includes no access, as if srcAccessMask
was
0
.
oldLayout
and newLayout
define an
image layout transition for
the specified image subresource range.
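As an informal example, an image memory barrier that transitions the first mip level and array layer of a color image from VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL to VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL after a copy, so that it can be sampled in the fragment shader, could be written as follows, assuming commandBuffer and image are valid handles:
// Informal example - 'commandBuffer' and 'image' are assumed to be valid handles
VkImageMemoryBarrier imageBarrier = {
    .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
    .srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
    .dstAccessMask = VK_ACCESS_SHADER_READ_BIT,
    .oldLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
    .newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
    .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED, // no ownership transfer
    .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .image = image,
    .subresourceRange = {
        .aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
        .baseMipLevel = 0,
        .levelCount = 1,
        .baseArrayLayer = 0,
        .layerCount = 1,
    },
};
vkCmdPipelineBarrier(commandBuffer,
                     VK_PIPELINE_STAGE_TRANSFER_BIT,
                     VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
                     0,
                     0, NULL,
                     0, NULL,
                     1, &imageBarrier);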
7.7.4. Queue Family Ownership Transfer
Resources created with a VkSharingMode of
VK_SHARING_MODE_EXCLUSIVE
must have their ownership explicitly
transferred from one queue family to another in order to access their
content in a well-defined manner on a queue in a different queue family.
The special queue family index VK_QUEUE_FAMILY_IGNORED
indicates that
a queue family parameter or member is ignored.
#define VK_QUEUE_FAMILY_IGNORED (~0U)
If memory dependencies are correctly expressed between uses of such a resource between two queues in different families, but no ownership transfer is defined, the contents of that resource are undefined for any read accesses performed by the second queue family.
Note
If an application does not need the contents of a resource to remain valid when transferring from one queue family to another, then the ownership transfer should be skipped.
A queue family ownership transfer consists of two distinct parts:
- Release exclusive ownership from the source queue family
- Acquire exclusive ownership for the destination queue family
An application must ensure that these operations occur in the correct order by defining an execution dependency between them, e.g. using a semaphore.
A release operation is used to
release exclusive ownership of a range of a buffer or image subresource
range.
A release operation is defined by executing a
buffer memory barrier (for a
buffer range) or an image memory
barrier (for an image subresource range) using a pipeline barrier command,
on a queue from the source queue family.
The srcQueueFamilyIndex
parameter of the barrier must be set to the
source queue family index, and the dstQueueFamilyIndex
parameter to
the destination queue family index.
dstAccessMask
is ignored for such a barrier, such that no visibility
operation is executed - the value of this mask does not affect the validity
of the barrier.
The release operation happens-after the availability operation, and
happens-before operations specified in the second synchronization scope of
the calling command.
An acquire operation is used
to acquire exclusive ownership of a range of a buffer or image subresource
range.
An acquire operation is defined by executing a
buffer memory barrier (for a
buffer range) or an image memory
barrier (for an image subresource range) using a pipeline barrier command,
on a queue from the destination queue family.
The buffer range or image subresource range specified in an acquire
operation must match exactly that of a previous release operation.
The srcQueueFamilyIndex
parameter of the barrier must be set to the
source queue family index, and the dstQueueFamilyIndex
parameter to
the destination queue family index.
srcAccessMask
is ignored for such a barrier, such that no availability
operation is executed - the value of this mask does not affect the validity
of the barrier.
The acquire operation happens-after operations in the first synchronization
scope of the calling command, and happens-before the visibility operation.
Note
Whilst it is not invalid to provide destination or source access masks for memory barriers used for release or acquire operations, respectively, they have no practical effect. Access after a release operation has undefined results, and so visibility for those accesses has no practical effect. Similarly, write access before an acquire operation will produce undefined results for future access, so availability of those writes has no practical use. In an earlier version of the specification, these were required to match on both sides - but this was subsequently relaxed. These masks should be set to 0.
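As an informal example, a queue family ownership transfer of a whole buffer from a transfer queue family to a graphics queue family could be written as follows. The release is recorded on a command buffer that will be submitted to a queue of the source family, and the acquire on a command buffer for the destination family; a semaphore (not shown) must order the two submissions. The handle and index names used here are assumptions:
// Informal example - release, recorded for a queue of 'transferFamilyIndex'
VkBufferMemoryBarrier releaseBarrier = {
    .sType = VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER,
    .srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
    .dstAccessMask = 0, // ignored for a release operation
    .srcQueueFamilyIndex = transferFamilyIndex,
    .dstQueueFamilyIndex = graphicsFamilyIndex,
    .buffer = buffer,
    .offset = 0,
    .size = VK_WHOLE_SIZE,
};
vkCmdPipelineBarrier(transferCommandBuffer,
                     VK_PIPELINE_STAGE_TRANSFER_BIT,
                     VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT,
                     0, 0, NULL, 1, &releaseBarrier, 0, NULL);

// Informal example - acquire, recorded for a queue of 'graphicsFamilyIndex'
VkBufferMemoryBarrier acquireBarrier = {
    .sType = VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER,
    .srcAccessMask = 0, // ignored for an acquire operation
    .dstAccessMask = VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT,
    .srcQueueFamilyIndex = transferFamilyIndex,
    .dstQueueFamilyIndex = graphicsFamilyIndex,
    .buffer = buffer,    // range must match the release operation exactly
    .offset = 0,
    .size = VK_WHOLE_SIZE,
};
vkCmdPipelineBarrier(graphicsCommandBuffer,
                     VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,
                     VK_PIPELINE_STAGE_VERTEX_INPUT_BIT,
                     0, 0, NULL, 1, &acquireBarrier, 0, NULL);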
If the transfer is via an image memory barrier, and an
image layout transition is
desired, then the values of oldLayout
and newLayout
in the
release operation's memory barrier must be equal to values of
oldLayout
and newLayout
in the acquire operation's memory
barrier.
Although the image layout transition is submitted twice, it will only be
executed once.
A layout transition specified in this way happens-after the release
operation and happens-before the acquire operation.
If the values of srcQueueFamilyIndex
and dstQueueFamilyIndex
are
equal, no ownership transfer is performed, and the barrier operates as if
they were both set to VK_QUEUE_FAMILY_IGNORED
.
Queue family ownership transfers may perform read and write accesses on all memory bound to the image subresource or buffer range, so applications must ensure that all memory writes have been made available before a queue family ownership transfer is executed. Available memory is automatically made visible to queue family release and acquire operations, and writes performed by those operations are automatically made available.
Once a queue family has acquired ownership of a buffer range or image
subresource range of a VK_SHARING_MODE_EXCLUSIVE
resource, its
contents are undefined to other queue families unless ownership is
transferred.
The contents of any portion of another resource which aliases memory that is
bound to the transferred buffer or image subresource range are undefined
after a release or acquire operation.
Note
Because events cannot be used directly for inter-queue synchronization, and because vkCmdSetEvent does not have the queue family index or memory barrier parameters needed by a release operation, the release and acquire operations of a queue family ownership transfer can only be performed using vkCmdPipelineBarrier.
7.8. Wait Idle Operations
To wait on the host for the completion of outstanding queue operations for a given queue, call:
// Provided by VK_VERSION_1_0
VkResult vkQueueWaitIdle(
VkQueue queue);
- queue is the queue on which to wait.
vkQueueWaitIdle
is equivalent to having submitted a valid fence to
every previously executed queue submission
command that accepts a fence, then waiting for all of those fences to
signal using vkWaitForFences with an infinite timeout and
waitAll
set to VK_TRUE
.
To wait on the host for the completion of outstanding queue operations for all queues on a given logical device, call:
// Provided by VK_VERSION_1_0
VkResult vkDeviceWaitIdle(
VkDevice device);
- device is the logical device to idle.
vkDeviceWaitIdle
is equivalent to calling vkQueueWaitIdle
for
all queues owned by device
.
7.9. Host Write Ordering Guarantees
When batches of command buffers are submitted to a queue via a queue submission command, it defines a memory dependency with prior host operations, and execution of command buffers submitted to the queue.
The first synchronization scope is defined by the host execution model, but includes execution of vkQueueSubmit on the host and anything that happened-before it.
The second synchronization scope includes all commands submitted in the same queue submission, and all commands that occur later in submission order.
The first access scope includes all host writes to mappable device memory that are available to the host memory domain.
The second access scope includes all memory access performed by the device.
8. Render Pass
Draw commands must be recorded within a render pass instance. Each render pass instance defines a set of image resources, referred to as attachments, used during rendering.
A render pass object represents a collection of attachments, subpasses, and dependencies between the subpasses, and describes how the attachments are used over the course of the subpasses.
Render passes are represented by VkRenderPass
handles:
// Provided by VK_VERSION_1_0
VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkRenderPass)
An attachment description describes the properties of an attachment including its format, sample count, and how its contents are treated at the beginning and end of each render pass instance.
A subpass represents a phase of rendering that reads and writes a subset of the attachments in a render pass. Rendering commands are recorded into a particular subpass of a render pass instance.
A subpass description describes the subset of attachments that is involved in the execution of a subpass. Each subpass can read from some attachments as input attachments, write to some as color attachments or depth/stencil attachments, and perform multisample resolve operations to resolve attachments. A subpass description can also include a set of preserve attachments, which are attachments that are not read or written by the subpass but whose contents must be preserved throughout the subpass.
A subpass uses an attachment if the attachment is a color, depth/stencil,
resolve,
or input attachment for that subpass (as determined by the
pColorAttachments
, pDepthStencilAttachment
,
pResolveAttachments
,
and pInputAttachments
members of VkSubpassDescription,
respectively).
A subpass does not use an attachment if that attachment is preserved by the
subpass.
The first use of an attachment is in the lowest numbered subpass that uses
that attachment.
Similarly, the last use of an attachment is in the highest numbered
subpass that uses that attachment.
The subpasses in a render pass all render to the same dimensions, and fragments for pixel (x,y,layer) in one subpass can only read attachment contents written by previous subpasses at that same (x,y,layer) location.
Note
By describing a complete set of subpasses in advance, render passes provide the implementation an opportunity to optimize the storage and transfer of attachment data between subpasses. In practice, this means that subpasses with a simple framebuffer-space dependency may be merged into a single tiled rendering pass, keeping the attachment data on-chip for the duration of a render pass instance. However, it is also quite common for a render pass to only contain a single subpass.
Subpass dependencies describe execution and memory dependencies between subpasses.
A subpass dependency chain is a sequence of subpass dependencies in a render pass, where the source subpass of each subpass dependency (after the first) equals the destination subpass of the previous dependency.
Execution of subpasses may overlap or execute out of order with regards to other subpasses, unless otherwise enforced by an execution dependency. Each subpass only respects submission order for commands recorded in the same subpass, and the vkCmdBeginRenderPass and vkCmdEndRenderPass commands that delimit the render pass - commands within other subpasses are not included. This affects most other implicit ordering guarantees.
A render pass describes the structure of subpasses and attachments
independent of any specific image views for the attachments.
The specific image views that will be used for the attachments, and their
dimensions, are specified in VkFramebuffer
objects.
Framebuffers are created with respect to a specific render pass that the
framebuffer is compatible with (see Render Pass
Compatibility).
Collectively, a render pass and a framebuffer define the complete render
target state for one or more subpasses as well as the algorithmic
dependencies between the subpasses.
The various pipeline stages of the drawing commands for a given subpass may execute concurrently and/or out of order, both within and across drawing commands, whilst still respecting pipeline order. However for a given (x,y,layer,sample) sample location, certain per-sample operations are performed in rasterization order.
VK_ATTACHMENT_UNUSED
is a constant indicating that a render pass
attachment is not used.
#define VK_ATTACHMENT_UNUSED (~0U)
8.1. Render Pass Creation
To create a render pass, call:
// Provided by VK_VERSION_1_0
VkResult vkCreateRenderPass(
VkDevice device,
const VkRenderPassCreateInfo* pCreateInfo,
const VkAllocationCallbacks* pAllocator,
VkRenderPass* pRenderPass);
- device is the logical device that creates the render pass.
- pCreateInfo is a pointer to a VkRenderPassCreateInfo structure describing the parameters of the render pass.
- pAllocator controls host memory allocation as described in the Memory Allocation chapter.
- pRenderPass is a pointer to a VkRenderPass handle in which the resulting render pass object is returned.
The VkRenderPassCreateInfo
structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkRenderPassCreateInfo {
VkStructureType sType;
const void* pNext;
VkRenderPassCreateFlags flags;
uint32_t attachmentCount;
const VkAttachmentDescription* pAttachments;
uint32_t subpassCount;
const VkSubpassDescription* pSubpasses;
uint32_t dependencyCount;
const VkSubpassDependency* pDependencies;
} VkRenderPassCreateInfo;
- sType is the type of this structure.
- pNext is NULL or a pointer to a structure extending this structure.
- flags is reserved for future use.
- attachmentCount is the number of attachments used by this render pass.
- pAttachments is a pointer to an array of attachmentCount VkAttachmentDescription structures describing the attachments used by the render pass.
- subpassCount is the number of subpasses to create.
- pSubpasses is a pointer to an array of subpassCount VkSubpassDescription structures describing each subpass.
- dependencyCount is the number of memory dependencies between pairs of subpasses.
- pDependencies is a pointer to an array of dependencyCount VkSubpassDependency structures describing dependencies between pairs of subpasses.
Note
Care should be taken to avoid a data race here; if any subpasses access attachments with overlapping memory locations, and one of those accesses is a write, a subpass dependency needs to be included between them.
// Provided by VK_VERSION_1_0
typedef VkFlags VkRenderPassCreateFlags;
VkRenderPassCreateFlags
is a bitmask type for setting a mask of zero
or more VkRenderPassCreateFlagBits.
The VkAttachmentDescription
structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkAttachmentDescription {
VkAttachmentDescriptionFlags flags;
VkFormat format;
VkSampleCountFlagBits samples;
VkAttachmentLoadOp loadOp;
VkAttachmentStoreOp storeOp;
VkAttachmentLoadOp stencilLoadOp;
VkAttachmentStoreOp stencilStoreOp;
VkImageLayout initialLayout;
VkImageLayout finalLayout;
} VkAttachmentDescription;
- flags is a bitmask of VkAttachmentDescriptionFlagBits specifying additional properties of the attachment.
- format is a VkFormat value specifying the format of the image view that will be used for the attachment.
- samples is a VkSampleCountFlagBits value specifying the number of samples of the image.
- loadOp is a VkAttachmentLoadOp value specifying how the contents of color and depth components of the attachment are treated at the beginning of the subpass where it is first used.
- storeOp is a VkAttachmentStoreOp value specifying how the contents of color and depth components of the attachment are treated at the end of the subpass where it is last used.
- stencilLoadOp is a VkAttachmentLoadOp value specifying how the contents of stencil components of the attachment are treated at the beginning of the subpass where it is first used.
- stencilStoreOp is a VkAttachmentStoreOp value specifying how the contents of stencil components of the attachment are treated at the end of the last subpass where it is used.
- initialLayout is the layout the attachment image subresource will be in when a render pass instance begins.
- finalLayout is the layout the attachment image subresource will be transitioned to when a render pass instance ends.
If the attachment uses a color format, then loadOp
and storeOp
are used, and stencilLoadOp
and stencilStoreOp
are ignored.
If the format has depth and/or stencil components, loadOp
and
storeOp
apply only to the depth data, while stencilLoadOp
and
stencilStoreOp
define how the stencil data is handled.
loadOp
and stencilLoadOp
define the load operations that
execute as part of the first subpass that uses the attachment.
storeOp
and stencilStoreOp
define the store operations that
execute as part of the last subpass that uses the attachment.
The load operation for each sample in an attachment happens-before any
recorded command which accesses the sample in the first subpass where the
attachment is used.
Load operations for attachments with a depth/stencil format execute in the
VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT
pipeline stage.
Load operations for attachments with a color format execute in the
VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT
pipeline stage.
The store operation for each sample in an attachment happens-after any
recorded command which accesses the sample in the last subpass where the
attachment is used.
Store operations for attachments with a depth/stencil format execute in the
VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT
pipeline stage.
Store operations for attachments with a color format execute in the
VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT
pipeline stage.
If an attachment is not used by any subpass, loadOp
, storeOp
,
stencilStoreOp
, and stencilLoadOp
will be ignored for that
attachment, and no load or store ops will be performed.
However, any transition specified by initialLayout
and
finalLayout
will still be executed.
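As an informal example, a single-sampled color attachment that is cleared at the beginning of its first subpass and stored at the end of its last subpass could be described as follows; the format shown is an assumption:
// Informal example - a color attachment, cleared on load and stored at the end
VkAttachmentDescription colorAttachment = {
    .flags = 0,
    .format = VK_FORMAT_B8G8R8A8_UNORM,                 // assumed format
    .samples = VK_SAMPLE_COUNT_1_BIT,
    .loadOp = VK_ATTACHMENT_LOAD_OP_CLEAR,
    .storeOp = VK_ATTACHMENT_STORE_OP_STORE,
    .stencilLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE,   // ignored for color formats
    .stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE, // ignored for color formats
    .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED,         // previous contents not needed
    .finalLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
};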
During a render pass instance, input/color attachments with color formats
that have a component size of 8, 16, or 32 bits must be represented in the
attachment’s format throughout the instance.
Attachments with other floating- or fixed-point color formats, or with depth
components may be represented in a format with a precision higher than the
attachment format, but must be represented with the same range.
When such a component is loaded via the loadOp
, it will be converted
into an implementation-dependent format used by the render pass.
Such components must be converted from the render pass format, to the
format of the attachment, before they are resolved or stored at the end of a
render pass instance via storeOp
.
Conversions occur as described in Numeric
Representation and Computation and Fixed-Point
Data Conversions.
If flags
includes VK_ATTACHMENT_DESCRIPTION_MAY_ALIAS_BIT
, then
the attachment is treated as if it shares physical memory with another
attachment in the same render pass.
This information limits the ability of the implementation to reorder certain
operations (like layout transitions and the loadOp
) such that it is
not improperly reordered against other uses of the same physical memory via
a different attachment.
This is described in more detail below.
If a render pass uses multiple attachments that alias the same device
memory, those attachments must each include the
VK_ATTACHMENT_DESCRIPTION_MAY_ALIAS_BIT
bit in their attachment
description flags.
Attachments aliasing the same memory occurs in multiple ways:
- Multiple attachments being assigned the same image view as part of framebuffer creation.
- Attachments using distinct image views that correspond to the same image subresource of an image.
- Attachments using views of distinct image subresources which are bound to overlapping memory ranges.
Note
Render passes must include subpass dependencies (either directly or via a
subpass dependency chain) between any two subpasses that operate on the same
attachment or aliasing attachments and those subpass dependencies must
include execution and memory dependencies separating uses of the aliases, if
at least one of those subpasses writes to one of the aliases.
These dependencies must not include the VK_DEPENDENCY_BY_REGION_BIT if the aliases are views of distinct image subresources which overlap in memory.
Multiple attachments that alias the same memory must not be used in a single subpass. A given attachment index must not be used multiple times in a single subpass, with one exception: two subpass attachments can use the same attachment index if at least one use is as an input attachment and neither use is as a resolve or preserve attachment. In other words, the same view can be used simultaneously as an input and color or depth/stencil attachment, but must not be used as multiple color or depth/stencil attachments nor as resolve or preserve attachments. The precise set of valid scenarios is described in more detail below.
If a set of attachments alias each other, then all except the first to be
used in the render pass must use an initialLayout
of
VK_IMAGE_LAYOUT_UNDEFINED
, since the earlier uses of the other aliases
make their contents undefined.
Once an alias has been used and a different alias has been used after it,
the first alias must not be used in any later subpasses.
However, an application can assign the same image view to multiple aliasing
attachment indices, which allows that image view to be used multiple times
even if other aliases are used in between.
Note
Once an attachment needs the VK_ATTACHMENT_DESCRIPTION_MAY_ALIAS_BIT bit, there should be no additional cost of introducing additional aliases, and using these additional aliases may allow more efficient clearing of the attachments on multiple uses via VK_ATTACHMENT_LOAD_OP_CLEAR.
Bits which can be set in VkAttachmentDescription::flags
,
describing additional properties of the attachment, are:
// Provided by VK_VERSION_1_0
typedef enum VkAttachmentDescriptionFlagBits {
VK_ATTACHMENT_DESCRIPTION_MAY_ALIAS_BIT = 0x00000001,
} VkAttachmentDescriptionFlagBits;
- VK_ATTACHMENT_DESCRIPTION_MAY_ALIAS_BIT specifies that the attachment aliases the same device memory as other attachments.
// Provided by VK_VERSION_1_0
typedef VkFlags VkAttachmentDescriptionFlags;
VkAttachmentDescriptionFlags
is a bitmask type for setting a mask of
zero or more VkAttachmentDescriptionFlagBits.
Possible values of VkAttachmentDescription::loadOp
and
stencilLoadOp
, specifying how the contents of the attachment are
treated, are:
// Provided by VK_VERSION_1_0
typedef enum VkAttachmentLoadOp {
VK_ATTACHMENT_LOAD_OP_LOAD = 0,
VK_ATTACHMENT_LOAD_OP_CLEAR = 1,
VK_ATTACHMENT_LOAD_OP_DONT_CARE = 2,
} VkAttachmentLoadOp;
- VK_ATTACHMENT_LOAD_OP_LOAD specifies that the previous contents of the image within the render area will be preserved. For attachments with a depth/stencil format, this uses the access type VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT. For attachments with a color format, this uses the access type VK_ACCESS_COLOR_ATTACHMENT_READ_BIT.
- VK_ATTACHMENT_LOAD_OP_CLEAR specifies that the contents within the render area will be cleared to a uniform value, which is specified when a render pass instance is begun. For attachments with a depth/stencil format, this uses the access type VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT. For attachments with a color format, this uses the access type VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT.
- VK_ATTACHMENT_LOAD_OP_DONT_CARE specifies that the previous contents within the area need not be preserved; the contents of the attachment will be undefined inside the render area. For attachments with a depth/stencil format, this uses the access type VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT. For attachments with a color format, this uses the access type VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT.
Possible values of VkAttachmentDescription::storeOp
and
stencilStoreOp
, specifying how the contents of the attachment are
treated, are:
// Provided by VK_VERSION_1_0
typedef enum VkAttachmentStoreOp {
VK_ATTACHMENT_STORE_OP_STORE = 0,
VK_ATTACHMENT_STORE_OP_DONT_CARE = 1,
} VkAttachmentStoreOp;
- VK_ATTACHMENT_STORE_OP_STORE specifies the contents generated during the render pass and within the render area are written to memory. For attachments with a depth/stencil format, this uses the access type VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT. For attachments with a color format, this uses the access type VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT.
- VK_ATTACHMENT_STORE_OP_DONT_CARE specifies the contents within the render area are not needed after rendering, and may be discarded; the contents of the attachment will be undefined inside the render area. For attachments with a depth/stencil format, this uses the access type VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT. For attachments with a color format, this uses the access type VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT.
The VkSubpassDescription
structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkSubpassDescription {
VkSubpassDescriptionFlags flags;
VkPipelineBindPoint pipelineBindPoint;
uint32_t inputAttachmentCount;
const VkAttachmentReference* pInputAttachments;
uint32_t colorAttachmentCount;
const VkAttachmentReference* pColorAttachments;
const VkAttachmentReference* pResolveAttachments;
const VkAttachmentReference* pDepthStencilAttachment;
uint32_t preserveAttachmentCount;
const uint32_t* pPreserveAttachments;
} VkSubpassDescription;
- flags is a bitmask of VkSubpassDescriptionFlagBits specifying usage of the subpass.
- pipelineBindPoint is a VkPipelineBindPoint value specifying the pipeline type supported for this subpass.
- inputAttachmentCount is the number of input attachments.
- pInputAttachments is a pointer to an array of VkAttachmentReference structures defining the input attachments for this subpass and their layouts.
- colorAttachmentCount is the number of color attachments.
- pColorAttachments is a pointer to an array of colorAttachmentCount VkAttachmentReference structures defining the color attachments for this subpass and their layouts.
- pResolveAttachments is NULL or a pointer to an array of colorAttachmentCount VkAttachmentReference structures defining the resolve attachments for this subpass and their layouts.
- pDepthStencilAttachment is a pointer to a VkAttachmentReference structure specifying the depth/stencil attachment for this subpass and its layout.
- preserveAttachmentCount is the number of preserved attachments.
- pPreserveAttachments is a pointer to an array of preserveAttachmentCount render pass attachment indices identifying attachments that are not used by this subpass, but whose contents must be preserved throughout the subpass.
Each element of the pInputAttachments
array corresponds to an input
attachment index in a fragment shader, i.e. if a shader declares an image
variable decorated with a InputAttachmentIndex
value of X, then it
uses the attachment provided in pInputAttachments
[X].
Input attachments must also be bound to the pipeline in a descriptor set.
If the attachment
member of any element of pInputAttachments
is
VK_ATTACHMENT_UNUSED
, the application must not read from the
corresponding input attachment index.
Fragment shaders can use subpass input variables to access the contents of
an input attachment at the fragment’s (x, y, layer) framebuffer coordinates.
Each element of the pColorAttachments
array corresponds to an output
location in the shader, i.e. if the shader declares an output variable
decorated with a Location
value of X, then it uses the attachment
provided in pColorAttachments
[X].
If the attachment
member of any element of pColorAttachments
is
VK_ATTACHMENT_UNUSED
,
then writes to the corresponding location by a fragment shader are
discarded.
If
pResolveAttachments
is not NULL
, each of its elements corresponds to
a color attachment (the element in pColorAttachments
at the same
index), and a multisample resolve operation is defined for each attachment.
At the end of each subpass, multisample resolve operations read the
subpass’s color attachments, and resolve the samples for each pixel within
the render area to the same pixel location in the corresponding resolve
attachments, unless the resolve attachment index is
VK_ATTACHMENT_UNUSED
.
If pDepthStencilAttachment
is NULL
, or if its attachment index is
VK_ATTACHMENT_UNUSED
, it indicates that no depth/stencil attachment
will be used in the subpass.
The contents of an attachment within the render area become undefined at the start of a subpass S if all of the following conditions are true:
- The attachment is used as a color, depth/stencil, or resolve attachment in any subpass in the render pass.
- There is a subpass S1 that uses or preserves the attachment, and a subpass dependency from S1 to S.
- The attachment is not used or preserved in subpass S.
Once the contents of an attachment become undefined in subpass S, they remain undefined for subpasses in subpass dependency chains starting with subpass S until they are written again. However, they remain valid for subpasses in other subpass dependency chains starting with subpass S1 if those subpasses use or preserve the attachment.
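As an informal example, a single graphics subpass that writes one color attachment (attachment 0 of the render pass) and uses no input, resolve, depth/stencil, or preserve attachments could be described as follows:
// Informal example - one graphics subpass writing a single color attachment
VkAttachmentReference colorRef = {
    .attachment = 0, // index into VkRenderPassCreateInfo::pAttachments
    .layout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
};
VkSubpassDescription subpass = {
    .flags = 0,
    .pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS,
    .inputAttachmentCount = 0,
    .pInputAttachments = NULL,
    .colorAttachmentCount = 1,
    .pColorAttachments = &colorRef,
    .pResolveAttachments = NULL,     // no multisample resolve
    .pDepthStencilAttachment = NULL, // no depth/stencil attachment
    .preserveAttachmentCount = 0,
    .pPreserveAttachments = NULL,
};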
Bits which can be set in VkSubpassDescription::flags
,
specifying usage of the subpass, are:
// Provided by VK_VERSION_1_0
typedef enum VkSubpassDescriptionFlagBits {
} VkSubpassDescriptionFlagBits;
Note
All bits for this type are defined by extensions, and none of those extensions are enabled in this build of the specification.
// Provided by VK_VERSION_1_0
typedef VkFlags VkSubpassDescriptionFlags;
VkSubpassDescriptionFlags
is a bitmask type for setting a mask of zero
or more VkSubpassDescriptionFlagBits.
The VkAttachmentReference
structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkAttachmentReference {
uint32_t attachment;
VkImageLayout layout;
} VkAttachmentReference;
- attachment is either an integer value identifying an attachment at the corresponding index in VkRenderPassCreateInfo::pAttachments, or VK_ATTACHMENT_UNUSED to signify that this attachment is not used.
- layout is a VkImageLayout value specifying the layout the attachment uses during the subpass.
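The following informal sketch, which is not part of the normative text, shows one way these structures might be filled in: a subpass with a single color attachment at index 0 and a depth/stencil attachment at index 1, where the indices are assumed to match the order of the corresponding VkRenderPassCreateInfo::pAttachments array.
// Illustrative only: attachment references and a subpass description using them.
VkAttachmentReference colorReference = {
    .attachment = 0,
    .layout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
};

VkAttachmentReference depthReference = {
    .attachment = 1,
    .layout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL,
};

VkSubpassDescription subpass = {
    .flags = 0,
    .pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS,
    .inputAttachmentCount = 0,
    .pInputAttachments = NULL,
    .colorAttachmentCount = 1,
    .pColorAttachments = &colorReference,
    .pResolveAttachments = NULL,
    .pDepthStencilAttachment = &depthReference,
    .preserveAttachmentCount = 0,
    .pPreserveAttachments = NULL,
};
Such a description would then be passed via VkRenderPassCreateInfo::pSubpasses when creating the render pass.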
VK_SUBPASS_EXTERNAL
is a special subpass index value expanding
synchronization scope outside a subpass.
It is described in more detail by VkSubpassDependency.
#define VK_SUBPASS_EXTERNAL (~0U)
The VkSubpassDependency
structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkSubpassDependency {
uint32_t srcSubpass;
uint32_t dstSubpass;
VkPipelineStageFlags srcStageMask;
VkPipelineStageFlags dstStageMask;
VkAccessFlags srcAccessMask;
VkAccessFlags dstAccessMask;
VkDependencyFlags dependencyFlags;
} VkSubpassDependency;
- srcSubpass is the subpass index of the first subpass in the dependency, or VK_SUBPASS_EXTERNAL.
- dstSubpass is the subpass index of the second subpass in the dependency, or VK_SUBPASS_EXTERNAL.
- srcStageMask is a bitmask of VkPipelineStageFlagBits specifying the source stage mask.
- dstStageMask is a bitmask of VkPipelineStageFlagBits specifying the destination stage mask.
- srcAccessMask is a bitmask of VkAccessFlagBits specifying a source access mask.
- dstAccessMask is a bitmask of VkAccessFlagBits specifying a destination access mask.
- dependencyFlags is a bitmask of VkDependencyFlagBits.
If srcSubpass is equal to dstSubpass then the VkSubpassDependency describes a subpass self-dependency, and only constrains the pipeline barriers allowed within a subpass instance.
Otherwise, when a render pass instance which includes a subpass dependency is submitted to a queue, it defines a memory dependency between the subpasses identified by srcSubpass and dstSubpass.
If srcSubpass is equal to VK_SUBPASS_EXTERNAL, the first synchronization scope includes commands that occur earlier in submission order than the vkCmdBeginRenderPass used to begin the render pass instance.
Otherwise, the first set of commands includes all commands submitted as part of the subpass instance identified by srcSubpass and any load, store, or multisample resolve operations on attachments used in srcSubpass.
In either case, the first synchronization scope is limited to operations on the pipeline stages determined by the source stage mask specified by srcStageMask.
If dstSubpass is equal to VK_SUBPASS_EXTERNAL, the second synchronization scope includes commands that occur later in submission order than the vkCmdEndRenderPass used to end the render pass instance.
Otherwise, the second set of commands includes all commands submitted as part of the subpass instance identified by dstSubpass and any load, store, or multisample resolve operations on attachments used in dstSubpass.
In either case, the second synchronization scope is limited to operations on the pipeline stages determined by the destination stage mask specified by dstStageMask.
The first access scope is limited to accesses in the pipeline stages determined by the source stage mask specified by srcStageMask.
It is also limited to access types in the source access mask specified by srcAccessMask.
The second access scope is limited to accesses in the pipeline stages determined by the destination stage mask specified by dstStageMask.
It is also limited to access types in the destination access mask specified by dstAccessMask.
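For illustration only (not part of the normative text), the following dependency orders color attachment output in subpass 0 after any color attachment writes submitted before the render pass instance, and makes those writes available and visible to the color attachment accesses in the subpass:
// Illustrative only: an external-to-subpass-0 dependency for color attachment accesses.
VkSubpassDependency dependency = {
    .srcSubpass = VK_SUBPASS_EXTERNAL,
    .dstSubpass = 0,
    .srcStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
    .dstStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
    .srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
    .dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_READ_BIT |
                     VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
    .dependencyFlags = 0,
};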
The availability and visibility operations defined by a subpass dependency affect the execution of image layout transitions within the render pass.
Note
For non-attachment resources, the memory dependency expressed by a subpass dependency is nearly identical to that of a VkMemoryBarrier (with matching srcAccessMask and dstAccessMask parameters) submitted as part of a vkCmdPipelineBarrier (with matching srcStageMask and dstStageMask parameters). For attachments however, subpass dependencies work more like a VkImageMemoryBarrier defined similarly to the VkMemoryBarrier above, with the queue family indices set to VK_QUEUE_FAMILY_IGNORED and the old and new layouts set to the layouts used by the attachment in the two subpasses.
If there is no subpass dependency from VK_SUBPASS_EXTERNAL
to the
first subpass that uses an attachment, then an implicit subpass dependency
exists from VK_SUBPASS_EXTERNAL
to the first subpass it is used in.
The implicit subpass dependency only exists if there exists an automatic
layout transition away from initialLayout.
The subpass dependency operates as if defined with the following parameters:
VkSubpassDependency implicitDependency = {
    .srcSubpass = VK_SUBPASS_EXTERNAL,
    .dstSubpass = firstSubpass, // First subpass attachment is used in
    .srcStageMask = VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,
    .dstStageMask = VK_PIPELINE_STAGE_ALL_COMMANDS_BIT,
    .srcAccessMask = 0,
    .dstAccessMask = VK_ACCESS_INPUT_ATTACHMENT_READ_BIT |
                     VK_ACCESS_COLOR_ATTACHMENT_READ_BIT |
                     VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT |
                     VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT |
                     VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT,
    .dependencyFlags = 0,
};
Similarly, if there is no subpass dependency from the last subpass that uses
an attachment to VK_SUBPASS_EXTERNAL
, then an implicit subpass
dependency exists from the last subpass it is used in to
VK_SUBPASS_EXTERNAL
.
The implicit subpass dependency only exists if there exists an automatic
layout transition into finalLayout.
The subpass dependency operates as if defined with the following parameters:
VkSubpassDependency implicitDependency = {
    .srcSubpass = lastSubpass, // Last subpass attachment is used in
    .dstSubpass = VK_SUBPASS_EXTERNAL,
    .srcStageMask = VK_PIPELINE_STAGE_ALL_COMMANDS_BIT,
    .dstStageMask = VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT,
    .srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT |
                     VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT,
    .dstAccessMask = 0,
    .dependencyFlags = 0,
};
As subpasses may overlap or execute out of order with regard to other subpasses unless a subpass dependency chain describes otherwise, the layout transitions required between subpasses cannot be known to an application. Instead, an application provides the layout that each attachment must be in at the start and end of a render pass, and the layout it must be in during each subpass it is used in. The implementation then must execute layout transitions between subpasses in order to guarantee that the images are in the layouts required by each subpass, and in the final layout at the end of the render pass.
Automatic layout transitions apply to the entire image subresource attached
to the framebuffer.
If
the attachment is a view of a 1D or 2D image, the automatic layout
transitions apply to the number of layers specified by
VkFramebufferCreateInfo::layers.
Automatic layout transitions away from the layout used in a subpass happen-after the availability operations for all dependencies with that subpass as the srcSubpass.
Automatic layout transitions into the layout used in a subpass happen-before the visibility operations for all dependencies with that subpass as the dstSubpass.
Automatic layout transitions away from initialLayout
happen-after the
availability operations for all dependencies with a srcSubpass
equal
to VK_SUBPASS_EXTERNAL
, where dstSubpass
uses the attachment
that will be transitioned.
For attachments created with VK_ATTACHMENT_DESCRIPTION_MAY_ALIAS_BIT
,
automatic layout transitions away from initialLayout
happen-after the
availability operations for all dependencies with a srcSubpass
equal
to VK_SUBPASS_EXTERNAL
, where dstSubpass
uses any aliased
attachment.
Automatic layout transitions into finalLayout
happen-before the
visibility operations for all dependencies with a dstSubpass
equal to
VK_SUBPASS_EXTERNAL
, where srcSubpass
uses the attachment that
will be transitioned.
For attachments created with VK_ATTACHMENT_DESCRIPTION_MAY_ALIAS_BIT
,
automatic layout transitions into finalLayout
happen-before the
visibility operations for all dependencies with a dstSubpass
equal to
VK_SUBPASS_EXTERNAL
, where srcSubpass
uses any aliased
attachment.
If two subpasses use the same attachment, and both subpasses use the attachment in a read-only layout, no subpass dependency needs to be specified between those subpasses. If an implementation treats those layouts separately, it must insert an implicit subpass dependency between those subpasses to separate the uses in each layout. The subpass dependency operates as if defined with the following parameters:
// Used for input attachments
VkPipelineStageFlags inputAttachmentStages = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
VkAccessFlags inputAttachmentDstAccess = VK_ACCESS_INPUT_ATTACHMENT_READ_BIT;
// Used for depth/stencil attachments
VkPipelineStageFlags depthStencilAttachmentStages = VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT | VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT;
VkAccessFlags depthStencilAttachmentDstAccess = VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT;
VkSubpassDependency implicitDependency = {
    .srcSubpass = firstSubpass,
    .dstSubpass = secondSubpass,
    .srcStageMask = inputAttachmentStages | depthStencilAttachmentStages,
    .dstStageMask = inputAttachmentStages | depthStencilAttachmentStages,
    .srcAccessMask = 0,
    .dstAccessMask = inputAttachmentDstAccess | depthStencilAttachmentDstAccess,
    .dependencyFlags = 0,
};
If a subpass uses the same attachment as both an input attachment and either a color attachment or a depth/stencil attachment, writes via the color or depth/stencil attachment are not automatically made visible to reads via the input attachment, causing a feedback loop, except in any of the following conditions:
- If the color components or depth/stencil components read by the input attachment are mutually exclusive with the components written by the color or depth/stencil attachments, then there is no feedback loop. This requires the graphics pipelines used by the subpass to disable writes to color components that are read as inputs via the colorWriteMask, and to disable writes to depth/stencil components that are read as inputs via depthWriteEnable or stencilTestEnable.
- If the attachment is used as an input attachment and depth/stencil attachment only, and the depth/stencil attachment is not written to.
Rendering within a subpass containing a feedback loop creates a data race, except in the following cases:
- If a memory dependency is inserted between when the attachment is written and when it is subsequently read by later fragments. Pipeline barriers expressing a subpass self-dependency are the only way to achieve this, and one must be inserted every time a fragment will read values at a particular sample (x, y, layer, sample) coordinate, if those values have been written since the most recent pipeline barrier; or since the start of the subpass, if there have been no pipeline barriers since the start of the subpass.
Attachments have requirements for a valid image layout depending on their usage:
- An attachment used as an input attachment must be in the VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL, or VK_IMAGE_LAYOUT_GENERAL layout.
- An attachment used only as a color or resolve attachment must be in the VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL or VK_IMAGE_LAYOUT_GENERAL layout.
- An attachment used as both an input attachment and as either a color attachment or a resolve attachment must be in the VK_IMAGE_LAYOUT_GENERAL layout.
- An attachment used only as a depth/stencil attachment must be in the VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL, VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL, or VK_IMAGE_LAYOUT_GENERAL layout.
- An attachment used as both an input attachment and as a depth/stencil attachment must be in the VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL or VK_IMAGE_LAYOUT_GENERAL layout.
An attachment must not be used as both a depth/stencil attachment and a color attachment.
To destroy a render pass, call:
// Provided by VK_VERSION_1_0
void vkDestroyRenderPass(
VkDevice device,
VkRenderPass renderPass,
const VkAllocationCallbacks* pAllocator);
- device is the logical device that destroys the render pass.
- renderPass is the handle of the render pass to destroy.
- pAllocator controls host memory allocation as described in the Memory Allocation chapter.
8.2. Render Pass Compatibility
Framebuffers and graphics pipelines are created based on a specific render pass object. They must only be used with that render pass object, or one compatible with it.
Two attachment references are compatible if they have matching format and sample count, or are both VK_ATTACHMENT_UNUSED or the pointer that would contain the reference is NULL.
Two arrays of attachment references are compatible if all corresponding pairs of attachments are compatible.
If the arrays are of different lengths, attachment references not present in the smaller array are treated as VK_ATTACHMENT_UNUSED.
Two render passes are compatible if their corresponding color, input, resolve, and depth/stencil attachment references are compatible and if they are otherwise identical except for:
- Initial and final image layout in attachment descriptions
- Load and store operations in attachment descriptions
- Image layout in attachment references
As an additional special case, if two render passes have a single subpass, the resolve attachment reference compatibility requirements are ignored.
A framebuffer is compatible with a render pass if it was created using the same render pass or a compatible render pass.
8.3. Framebuffers
Render passes operate in conjunction with framebuffers. Framebuffers represent a collection of specific memory attachments that a render pass instance uses.
Framebuffers are represented by VkFramebuffer
handles:
// Provided by VK_VERSION_1_0
VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkFramebuffer)
To create a framebuffer, call:
// Provided by VK_VERSION_1_0
VkResult vkCreateFramebuffer(
VkDevice device,
const VkFramebufferCreateInfo* pCreateInfo,
const VkAllocationCallbacks* pAllocator,
VkFramebuffer* pFramebuffer);
- device is the logical device that creates the framebuffer.
- pCreateInfo is a pointer to a VkFramebufferCreateInfo structure describing additional information about framebuffer creation.
- pAllocator controls host memory allocation as described in the Memory Allocation chapter.
- pFramebuffer is a pointer to a VkFramebuffer handle in which the resulting framebuffer object is returned.
The VkFramebufferCreateInfo
structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkFramebufferCreateInfo {
VkStructureType sType;
const void* pNext;
VkFramebufferCreateFlags flags;
VkRenderPass renderPass;
uint32_t attachmentCount;
const VkImageView* pAttachments;
uint32_t width;
uint32_t height;
uint32_t layers;
} VkFramebufferCreateInfo;
- sType is the type of this structure.
- pNext is NULL or a pointer to a structure extending this structure.
- flags is a bitmask of VkFramebufferCreateFlagBits.
- renderPass is a render pass defining what render passes the framebuffer will be compatible with. See Render Pass Compatibility for details.
- attachmentCount is the number of attachments.
- pAttachments is a pointer to an array of VkImageView handles, each of which will be used as the corresponding attachment in a render pass instance.
- width, height and layers define the dimensions of the framebuffer.
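As an informal illustration (not part of the normative text), a framebuffer with one color and one depth/stencil attachment could be created as sketched below; device, renderPass, colorView, depthView, width, and height are assumed to have been created or chosen elsewhere and to satisfy the compatibility and dimension requirements described in this chapter.
// Illustrative only: creating a framebuffer with two attachment views.
VkImageView attachments[2] = { colorView, depthView };

VkFramebufferCreateInfo framebufferInfo = {
    .sType = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO,
    .pNext = NULL,
    .flags = 0,
    .renderPass = renderPass,
    .attachmentCount = 2,
    .pAttachments = attachments,
    .width = width,
    .height = height,
    .layers = 1,
};

VkFramebuffer framebuffer;
VkResult result = vkCreateFramebuffer(device, &framebufferInfo, NULL, &framebuffer);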
Applications must ensure that all non-attachment writes to memory backing image subresources that are used as attachments in a render pass instance happen-before or happen-after the render pass instance. If an image subresource is written during a render pass instance by anything other than load operations, store operations, and layout transitions, applications must ensure that all non-attachment reads from memory backing that image subresource happen-before or happen-after the render pass instance. For depth/stencil images, the aspects are not treated independently for the above guarantees - writes to either aspect must be synchronized with accesses to the other aspect.
Note
An image subresource can be used as read-only as both an attachment and a non-attachment during a render pass instance, but care must still be taken to avoid data races with load/store operations and layout transitions. The simplest way to achieve this is to keep the non-attachment and attachment accesses within the same subpass, or to avoid layout transitions and load/store operations that perform writes.
It is legal for a subpass to use no color or depth/stencil attachments, either because it has no attachment references or because all of them are VK_ATTACHMENT_UNUSED.
This kind of subpass can use shader side effects such as image stores and atomics to produce an output.
In this case, the subpass continues to use the width, height, and layers of the framebuffer to define the dimensions of the rendering area, and the rasterizationSamples from each pipeline’s VkPipelineMultisampleStateCreateInfo to define the number of samples used in rasterization; however, if VkPhysicalDeviceFeatures::variableMultisampleRate is VK_FALSE, then all pipelines to be bound with the subpass must have the same value for VkPipelineMultisampleStateCreateInfo::rasterizationSamples.
Bits which can be set in VkFramebufferCreateInfo::flags, specifying options for framebuffers, are:
// Provided by VK_VERSION_1_0
typedef enum VkFramebufferCreateFlagBits {
} VkFramebufferCreateFlagBits;
Note
All bits for this type are defined by extensions, and none of those extensions are enabled in this build of the specification.
// Provided by VK_VERSION_1_0
typedef VkFlags VkFramebufferCreateFlags;
VkFramebufferCreateFlags
is a bitmask type for setting a mask of zero
or more VkFramebufferCreateFlagBits.
To destroy a framebuffer, call:
// Provided by VK_VERSION_1_0
void vkDestroyFramebuffer(
VkDevice device,
VkFramebuffer framebuffer,
const VkAllocationCallbacks* pAllocator);
- device is the logical device that destroys the framebuffer.
- framebuffer is the handle of the framebuffer to destroy.
- pAllocator controls host memory allocation as described in the Memory Allocation chapter.
8.4. Render Pass Commands
An application records the commands for a render pass instance one subpass at a time, by beginning a render pass instance, iterating over the subpasses to record commands for that subpass, and then ending the render pass instance.
To begin a render pass instance, call:
// Provided by VK_VERSION_1_0
void vkCmdBeginRenderPass(
VkCommandBuffer commandBuffer,
const VkRenderPassBeginInfo* pRenderPassBegin,
VkSubpassContents contents);
- commandBuffer is the command buffer in which to record the command.
- pRenderPassBegin is a pointer to a VkRenderPassBeginInfo structure specifying the render pass to begin an instance of, and the framebuffer the instance uses.
- contents is a VkSubpassContents value specifying how the commands in the first subpass will be provided.
After beginning a render pass instance, the command buffer is ready to record the commands for the first subpass of that render pass.
The VkRenderPassBeginInfo
structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkRenderPassBeginInfo {
VkStructureType sType;
const void* pNext;
VkRenderPass renderPass;
VkFramebuffer framebuffer;
VkRect2D renderArea;
uint32_t clearValueCount;
const VkClearValue* pClearValues;
} VkRenderPassBeginInfo;
- sType is the type of this structure.
- pNext is NULL or a pointer to a structure extending this structure.
- renderPass is the render pass to begin an instance of.
- framebuffer is the framebuffer containing the attachments that are used with the render pass.
- renderArea is the render area that is affected by the render pass instance, and is described in more detail below.
- clearValueCount is the number of elements in pClearValues.
- pClearValues is a pointer to an array of clearValueCount VkClearValue structures containing clear values for each attachment, if the attachment uses a loadOp value of VK_ATTACHMENT_LOAD_OP_CLEAR or if the attachment has a depth/stencil format and uses a stencilLoadOp value of VK_ATTACHMENT_LOAD_OP_CLEAR. The array is indexed by attachment number. Only elements corresponding to cleared attachments are used. Other elements of pClearValues are ignored.
renderArea
is the render area that is affected by the render pass
instance.
The effects of attachment load, store and multisample resolve operations are
restricted to the pixels whose x and y coordinates fall within the render
area on all attachments.
The render area extends to all layers of framebuffer.
The application must ensure (using scissor if necessary) that all rendering
is contained within the render area.
The render area must be contained within the framebuffer dimensions.
Note
There may be a performance cost for using a render area smaller than the framebuffer, unless it matches the render area granularity for the render pass.
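The sketch below is informative only: it begins a render pass instance over the full framebuffer, providing clear values for attachment 0 (a color attachment using VK_ATTACHMENT_LOAD_OP_CLEAR) and attachment 1 (a depth/stencil attachment). renderPass, framebuffer, commandBuffer, width, and height are assumed to exist.
// Illustrative only: clear values for a color and a depth/stencil attachment.
VkClearValue clearValues[2];
clearValues[0].color.float32[0] = 0.0f;
clearValues[0].color.float32[1] = 0.0f;
clearValues[0].color.float32[2] = 0.0f;
clearValues[0].color.float32[3] = 1.0f;
clearValues[1].depthStencil.depth = 1.0f;
clearValues[1].depthStencil.stencil = 0;

VkRenderPassBeginInfo beginInfo = {
    .sType = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO,
    .pNext = NULL,
    .renderPass = renderPass,
    .framebuffer = framebuffer,
    .renderArea = { .offset = { 0, 0 }, .extent = { width, height } },
    .clearValueCount = 2,
    .pClearValues = clearValues,
};

vkCmdBeginRenderPass(commandBuffer, &beginInfo, VK_SUBPASS_CONTENTS_INLINE);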
Possible values of vkCmdBeginRenderPass::contents, specifying how the commands in the first subpass will be provided, are:
// Provided by VK_VERSION_1_0
typedef enum VkSubpassContents {
VK_SUBPASS_CONTENTS_INLINE = 0,
VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS = 1,
} VkSubpassContents;
- VK_SUBPASS_CONTENTS_INLINE specifies that the contents of the subpass will be recorded inline in the primary command buffer, and secondary command buffers must not be executed within the subpass.
- VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS specifies that the contents are recorded in secondary command buffers that will be called from the primary command buffer, and vkCmdExecuteCommands is the only valid command on the command buffer until vkCmdNextSubpass or vkCmdEndRenderPass.
To query the render area granularity, call:
// Provided by VK_VERSION_1_0
void vkGetRenderAreaGranularity(
VkDevice device,
VkRenderPass renderPass,
VkExtent2D* pGranularity);
- device is the logical device that owns the render pass.
- renderPass is a handle to a render pass.
- pGranularity is a pointer to a VkExtent2D structure in which the granularity is returned.
The conditions leading to an optimal renderArea are:
- the offset.x member in renderArea is a multiple of the width member of the returned VkExtent2D (the horizontal granularity).
- the offset.y member in renderArea is a multiple of the height member of the returned VkExtent2D (the vertical granularity).
- either the extent.width member in renderArea is a multiple of the horizontal granularity or offset.x + extent.width is equal to the width of the framebuffer in the VkRenderPassBeginInfo.
- either the extent.height member in renderArea is a multiple of the vertical granularity or offset.y + extent.height is equal to the height of the framebuffer in the VkRenderPassBeginInfo.
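For illustration only (not part of the normative text), the following sketch queries the granularity and snaps a hypothetical desiredRenderArea to it by rounding the offset down and the extent up; a real application would also clamp the result to the framebuffer dimensions and ensure the offsets remain non-negative.
// Illustrative only: aligning a render area to the reported granularity.
VkExtent2D granularity;
vkGetRenderAreaGranularity(device, renderPass, &granularity);

VkRect2D renderArea = desiredRenderArea; // hypothetical area chosen by the application

// Round the offset down to a multiple of the granularity (offsets assumed non-negative).
renderArea.offset.x -= renderArea.offset.x % (int32_t)granularity.width;
renderArea.offset.y -= renderArea.offset.y % (int32_t)granularity.height;

// Round the extent up to a multiple of the granularity.
renderArea.extent.width  = ((renderArea.extent.width  + granularity.width  - 1) / granularity.width)  * granularity.width;
renderArea.extent.height = ((renderArea.extent.height + granularity.height - 1) / granularity.height) * granularity.height;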
Subpass dependencies are not affected by the render area, and apply to the entire image subresources attached to the framebuffer as specified in the description of automatic layout transitions. Similarly, pipeline barriers are valid even if their effect extends outside the render area.
To transition to the next subpass in the render pass instance after recording the commands for a subpass, call:
// Provided by VK_VERSION_1_0
void vkCmdNextSubpass(
VkCommandBuffer commandBuffer,
VkSubpassContents contents);
- commandBuffer is the command buffer in which to record the command.
- contents specifies how the commands in the next subpass will be provided, in the same fashion as the corresponding parameter of vkCmdBeginRenderPass.
The subpass index for a render pass begins at zero when
vkCmdBeginRenderPass
is recorded, and increments each time
vkCmdNextSubpass
is recorded.
Moving to the next subpass automatically performs any multisample resolve
operations in the subpass being ended.
End-of-subpass multisample resolves are treated as color attachment writes
for the purposes of synchronization.
That is, they are considered to execute in the
VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT
pipeline stage and their
writes are synchronized with VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT.
Synchronization between rendering within a subpass and any resolve
operations at the end of the subpass occurs automatically, without need for
explicit dependencies or pipeline barriers.
However, if the resolve attachment is also used in a different subpass, an
explicit dependency is needed.
After transitioning to the next subpass, the application can record the commands for that subpass.
To record a command to end a render pass instance after recording the commands for the last subpass, call:
// Provided by VK_VERSION_1_0
void vkCmdEndRenderPass(
VkCommandBuffer commandBuffer);
- commandBuffer is the command buffer in which to end the current render pass instance.
Ending a render pass instance performs any multisample resolve operations on the final subpass.
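As an informal illustration (not part of the normative text), recording a render pass instance with two subpasses follows the pattern sketched below; the begin info and the drawing commands for each subpass are assumed to be set up elsewhere.
// Illustrative only: overall structure of recording a two-subpass render pass instance.
vkCmdBeginRenderPass(commandBuffer, &beginInfo, VK_SUBPASS_CONTENTS_INLINE);
// ... record drawing commands for subpass 0 ...
vkCmdNextSubpass(commandBuffer, VK_SUBPASS_CONTENTS_INLINE);
// ... record drawing commands for subpass 1 ...
vkCmdEndRenderPass(commandBuffer);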
9. Shaders
A shader specifies programmable operations that execute for each vertex, control point, tessellated vertex, primitive, fragment, or workgroup in the corresponding stage(s) of the graphics and compute pipelines.
Graphics pipelines include vertex shader execution as a result of primitive assembly, followed, if enabled, by tessellation control and evaluation shaders operating on patches, geometry shaders, if enabled, operating on primitives, and fragment shaders, if present, operating on fragments generated by Rasterization. In this specification, vertex, tessellation control, tessellation evaluation and geometry shaders are collectively referred to as pre-rasterization shader stages and occur in the logical pipeline before rasterization. The fragment shader occurs logically after rasterization.
Only the compute shader stage is included in a compute pipeline. Compute shaders operate on compute invocations in a workgroup.
Shaders can read from input variables, and read from and write to output variables. Input and output variables can be used to transfer data between shader stages, or to allow the shader to interact with values that exist in the execution environment. Similarly, the execution environment provides constants describing capabilities.
Shader variables are associated with execution environment-provided inputs and outputs using built-in decorations in the shader. The available decorations for each stage are documented in the following subsections.
9.1. Shader Modules
Shader modules contain shader code and one or more entry points. Shaders are selected from a shader module by specifying an entry point as part of pipeline creation. The stages of a pipeline can use shaders that come from different modules. The shader code defining a shader module must be in the SPIR-V format, as described by the Vulkan Environment for SPIR-V appendix.
Shader modules are represented by VkShaderModule
handles:
// Provided by VK_VERSION_1_0
VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkShaderModule)
To create a shader module, call:
// Provided by VK_VERSION_1_0
VkResult vkCreateShaderModule(
VkDevice device,
const VkShaderModuleCreateInfo* pCreateInfo,
const VkAllocationCallbacks* pAllocator,
VkShaderModule* pShaderModule);
- device is the logical device that creates the shader module.
- pCreateInfo is a pointer to a VkShaderModuleCreateInfo structure.
- pAllocator controls host memory allocation as described in the Memory Allocation chapter.
- pShaderModule is a pointer to a VkShaderModule handle in which the resulting shader module object is returned.
Once a shader module has been created, any entry points it contains can be used in pipeline shader stages as described in Compute Pipelines and Graphics Pipelines.
The VkShaderModuleCreateInfo
structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkShaderModuleCreateInfo {
VkStructureType sType;
const void* pNext;
VkShaderModuleCreateFlags flags;
size_t codeSize;
const uint32_t* pCode;
} VkShaderModuleCreateInfo;
- sType is the type of this structure.
- pNext is NULL or a pointer to a structure extending this structure.
- flags is reserved for future use.
- codeSize is the size, in bytes, of the code pointed to by pCode.
- pCode is a pointer to code that is used to create the shader module. The type and format of the code is determined from the content of the memory addressed by pCode.
// Provided by VK_VERSION_1_0
typedef VkFlags VkShaderModuleCreateFlags;
VkShaderModuleCreateFlags
is a bitmask type for setting a mask, but is
currently reserved for future use.
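As an informal illustration (not part of the normative text), a shader module could be created from a SPIR-V binary as sketched below; spirvCode and spirvSizeInBytes are assumed to hold valid SPIR-V produced elsewhere, and the size must be a multiple of four bytes.
// Illustrative only: creating a shader module from a SPIR-V binary.
VkShaderModuleCreateInfo shaderModuleInfo = {
    .sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO,
    .pNext = NULL,
    .flags = 0,
    .codeSize = spirvSizeInBytes,
    .pCode = spirvCode,
};

VkShaderModule shaderModule;
VkResult result = vkCreateShaderModule(device, &shaderModuleInfo, NULL, &shaderModule);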
To destroy a shader module, call:
// Provided by VK_VERSION_1_0
void vkDestroyShaderModule(
VkDevice device,
VkShaderModule shaderModule,
const VkAllocationCallbacks* pAllocator);
- device is the logical device that destroys the shader module.
- shaderModule is the handle of the shader module to destroy.
- pAllocator controls host memory allocation as described in the Memory Allocation chapter.
A shader module can be destroyed while pipelines created using its shaders are still in use.
9.2. Shader Execution
At each stage of the pipeline, multiple invocations of a shader may execute simultaneously. Further, invocations of a single shader produced as the result of different commands may execute simultaneously. The relative execution order of invocations of the same shader type is undefined. Shader invocations may complete in a different order than that in which the primitives they originated from were drawn or dispatched by the application. However, fragment shader outputs are written to attachments in rasterization order.
The relative execution order of invocations of different shader types is largely undefined. However, when invoking a shader whose inputs are generated from a previous pipeline stage, the shader invocations from the previous stage are guaranteed to have executed far enough to generate input values for all required inputs.
9.3. Shader Memory Access Ordering
The order in which image or buffer memory is read or written by shaders is largely undefined. For some shader types (vertex, tessellation evaluation, and in some cases, fragment), even the number of shader invocations that may perform loads and stores is undefined.
In particular, the following rules apply:
- Vertex and tessellation evaluation shaders will be invoked at least once for each unique vertex, as defined in those sections.
- Fragment shaders will be invoked zero or more times, as defined in that section.
- The relative execution order of invocations of the same shader type is undefined. A store issued by a shader when working on primitive B might complete prior to a store for primitive A, even if primitive A is specified prior to primitive B. This applies even to fragment shaders; while fragment shader outputs are always written to the framebuffer in rasterization order, stores executed by fragment shader invocations are not.
- The relative execution order of invocations of different shader types is largely undefined.
Note
The above limitations on shader invocation order make some forms of synchronization between shader invocations within a single set of primitives unimplementable. For example, having one invocation poll memory written by another invocation assumes that the other invocation has been launched and will complete its writes in finite time.
Stores issued to different memory locations within a single shader invocation may not be visible to other invocations, or may not become visible in the order they were performed.
The OpMemoryBarrier
instruction can be used to provide stronger
ordering of reads and writes performed by a single invocation.
OpMemoryBarrier
guarantees that any memory transactions issued by the
shader invocation prior to the instruction complete prior to the memory
transactions issued after the instruction.
Memory barriers are needed for algorithms that require multiple invocations
to access the same memory and require the operations to be performed in a
partially-defined relative order.
For example, if one shader invocation does a series of writes, followed by
an OpMemoryBarrier
instruction, followed by another write, then the
results of the series of writes before the barrier become visible to other
shader invocations at a time earlier or equal to when the results of the
final write become visible to those invocations.
In practice it means that another invocation that sees the results of the
final write would also see the previous writes.
Without the memory barrier, the final write may be visible before the
previous writes.
Writes that are the result of shader stores through a variable decorated
with Coherent
automatically have available writes to the same buffer,
buffer view, or image view made visible to them, and are themselves
automatically made available to access by the same buffer, buffer view, or
image view.
Reads that are the result of shader loads through a variable decorated with
Coherent
automatically have available writes to the same buffer, buffer
view, or image view made visible to them.
The order that coherent writes to different locations become available is
undefined, unless enforced by a memory barrier instruction or other memory
dependency.
Note
Explicit memory dependencies must still be used to guarantee availability and visibility for access via other buffers, buffer views, or image views.
The built-in atomic memory transaction instructions can be used to read and
write a given memory address atomically.
While built-in atomic functions issued by multiple shader invocations are
executed in undefined order relative to each other, these functions perform
both a read and a write of a memory address and guarantee that no other
memory transaction will write to the underlying memory between the read and
write.
Atomic operations ensure automatic availability and visibility for writes
and reads in the same way as those to Coherent
variables.
Note
Memory accesses performed on different resource descriptors with the same memory backing may not be well-defined even with the Coherent decoration.
Note
Atomics allow shaders to use shared global addresses for mutual exclusion or as counters, among other uses.
The SPIR-V SubgroupMemory, CrossWorkgroupMemory, and AtomicCounterMemory memory semantics are ignored. Sequentially consistent atomics and barriers are not supported and SequentiallyConsistent is treated as AcquireRelease. SequentiallyConsistent should not be used.
9.4. Shader Inputs and Outputs
Data is passed into and out of shaders using variables with input or output
storage class, respectively.
User-defined inputs and outputs are connected between stages by matching
their Location
decorations.
Additionally, data can be provided by or communicated to special functions
provided by the execution environment using BuiltIn
decorations.
In many cases, the same BuiltIn
decoration can be used in multiple
shader stages with similar meaning.
The specific behavior of variables decorated as BuiltIn
is documented
in the following sections.
9.5. Vertex Shaders
Each vertex shader invocation operates on one vertex and its associated vertex attribute data, and outputs one vertex and associated data. Graphics pipelines must include a vertex shader, and the vertex shader stage is always the first shader stage in the graphics pipeline.
9.5.1. Vertex Shader Execution
A vertex shader must be executed at least once for each vertex specified by a drawing command. During execution, the shader is presented with the index of the vertex and instance for which it has been invoked. Input variables declared in the vertex shader are filled by the implementation with the values of vertex attributes associated with the invocation being executed.
If the same vertex is specified multiple times in a drawing command (e.g. by including the same index value multiple times in an index buffer) the implementation may reuse the results of vertex shading if it can statically determine that the vertex shader invocations will produce identical results.
Note
It is implementation-dependent when and if results of vertex shading are reused, and thus how many times the vertex shader will be executed.
This is true also if the vertex shader contains stores or atomic operations (see vertexPipelineStoresAndAtomics).
9.6. Tessellation Control Shaders
The tessellation control shader is used to read an input patch provided by
the application and to produce an output patch.
Each tessellation control shader invocation operates on an input patch
(after all control points in the patch are processed by a vertex shader) and
its associated data, and outputs a single control point of the output patch
and its associated data, and can also output additional per-patch data.
The input patch is sized according to the patchControlPoints
member of
VkPipelineTessellationStateCreateInfo, as part of input assembly.
The size of the output patch is controlled by the OpExecutionMode
OutputVertices
specified in the tessellation control or tessellation
evaluation shaders, which must be specified in at least one of the shaders.
The size of the input and output patches must each be greater than zero and
less than or equal to
VkPhysicalDeviceLimits::maxTessellationPatchSize.
9.6.1. Tessellation Control Shader Execution
A tessellation control shader is invoked at least once for each output vertex in a patch.
Inputs to the tessellation control shader are generated by the vertex
shader.
Each invocation of the tessellation control shader can read the attributes
of any incoming vertices and their associated data.
The invocations corresponding to a given patch execute logically in
parallel, with undefined relative execution order.
However, the OpControlBarrier
instruction can be used to provide
limited control of the execution order by synchronizing invocations within a
patch, effectively dividing tessellation control shader execution into a set
of phases.
Tessellation control shaders will read undefined values if one invocation
reads a per-vertex or per-patch output written by another invocation at any
point during the same phase, or if two invocations attempt to write
different values to the same per-patch output in a single phase.
9.7. Tessellation Evaluation Shaders
The Tessellation Evaluation Shader operates on an input patch of control points and their associated data, and a single input barycentric coordinate indicating the invocation’s relative position within the subdivided patch, and outputs a single vertex and its associated data.
9.8. Geometry Shaders
The geometry shader operates on a group of vertices and their associated data assembled from a single input primitive, and emits zero or more output primitives and the group of vertices and their associated data required for each output primitive.
9.8.1. Geometry Shader Execution
A geometry shader is invoked at least once for each primitive produced by the tessellation stages, or at least once for each primitive generated by primitive assembly when tessellation is not in use. A shader can request that the geometry shader runs multiple instances. A geometry shader is invoked at least once for each instance.
9.9. Fragment Shaders
Fragment shaders are invoked as a fragment operation in a graphics pipeline. Each fragment shader invocation operates on a single fragment and its associated data. With few exceptions, fragment shaders do not have access to any data associated with other fragments and are considered to execute in isolation of fragment shader invocations associated with other fragments.
9.10. Compute Shaders
Compute shaders are invoked via vkCmdDispatch and vkCmdDispatchIndirect commands. In general, they have access to similar resources as shader stages executing as part of a graphics pipeline.
Compute workloads are formed from groups of work items called workgroups and
processed by the compute shader in the current compute pipeline.
A workgroup is a collection of shader invocations that execute the same
shader, potentially in parallel.
Compute shaders execute in global workgroups which are divided into a
number of local workgroups with a size that can be set by assigning a
value to the LocalSize
execution mode or via an object decorated by the WorkgroupSize
decoration.
An invocation within a local workgroup can share data with other members of
the local workgroup through shared variables and issue memory and control
flow barriers to synchronize with other members of the local workgroup.
9.11. Interpolation Decorations
Interpolation decorations control the behavior of attribute interpolation in
the fragment shader stage.
Interpolation decorations can be applied to Input
storage class
variables in the fragment shader stage’s interface, and control the
interpolation behavior of those variables.
Inputs that could be interpolated can be decorated by at most one of the following decorations:
- Flat
- NoPerspective
Fragment input variables decorated with neither Flat
nor
NoPerspective
use perspective-correct interpolation (for
lines and
polygons).
The presence of and type of interpolation is controlled by the above
interpolation decorations as well as the auxiliary decorations Centroid
and Sample
.
A variable decorated with Flat
will not be interpolated.
Instead, it will have the same value for every fragment within a triangle.
This value will come from a single provoking
vertex.
A variable decorated with Flat
can also be decorated with
Centroid
or Sample
, which will mean the same thing as decorating
it only as Flat.
For fragment shader input variables decorated with neither Centroid
nor
Sample
, the assigned variable may be interpolated anywhere within the
fragment and a single value may be assigned to each sample within the
fragment.
If a fragment shader input is decorated with Centroid
, a single value
may be assigned to that variable for all samples in the fragment, but that
value must be interpolated to a location that lies in both the fragment and
in the primitive being rendered, including any of the fragment’s samples
covered by the primitive.
Because the location at which the variable is interpolated may be different
in neighboring fragments, and derivatives may be computed by computing
differences between neighboring fragments, derivatives of centroid-sampled
inputs may be less accurate than those for non-centroid interpolated
variables.
If a fragment shader input is decorated with Sample
, a separate value
must be assigned to that variable for each covered sample in the fragment,
and that value must be sampled at the location of the individual sample.
When rasterizationSamples
is VK_SAMPLE_COUNT_1_BIT
, the fragment
center must be used for Centroid
, Sample
, and undecorated
attribute interpolation.
Fragment shader inputs that are signed or unsigned integers, integer
vectors, or any double-precision floating-point type must be decorated with
Flat.
9.12. Static Use
A SPIR-V module declares a global object in memory using the OpVariable
instruction, which results in a pointer x
to that object.
A specific entry point in a SPIR-V module is said to statically use that
object if that entry point’s call tree contains a function containing an instruction with x
as an id
operand.
Static use is not used to control the behavior of variables with Input
and Output
storage.
The effects of those variables are applied based only on whether they are
present in a shader entry point’s interface.
9.13. Scope
A scope describes a set of shader invocations, where each such set is a scope instance. Each invocation belongs to one or more scope instances, but belongs to no more than one scope instance for each scope.
The operations available between invocations in a given scope instance vary, with smaller scopes generally able to perform more operations, and with greater efficiency.
9.13.1. Cross Device
All invocations executed in a Vulkan instance fall into a single cross device scope instance.
Whilst the CrossDevice
scope is defined in SPIR-V, it is disallowed in
Vulkan.
API synchronization commands can be used to
communicate between devices.
9.13.2. Device
All invocations executed on a single device form a device scope instance.
There is no method to synchronize the execution of these invocations within SPIR-V, and this can only be done with API synchronization primitives.
The scope only extends to the queue family, not the whole device.
9.13.3. Queue Family
Invocations executed by queues in a given queue family form a queue family scope instance.
This scope is identified in SPIR-V as the
Device
Scope
, which can be used as a Memory
Scope
for
barrier and atomic operations.
There is no method to synchronize the execution of these invocations within SPIR-V, and this can only be done with API synchronization primitives.
Each invocation in a queue family scope instance must be in the same device scope instance.
9.13.4. Command
Any shader invocations executed as the result of a single command such as
vkCmdDispatch or vkCmdDraw form a command scope instance.
For indirect drawing commands with drawCount
greater than one,
invocations from separate draws are in separate command scope instances.
There is no specific Scope
for communication across invocations in a
command scope instance.
As this has a clear boundary at the API level, coordination here can be
performed in the API, rather than in SPIR-V.
Each invocation in a command scope instance must be in the same queue-family scope instance.
For shaders without defined workgroups, this set of invocations forms an invocation group as defined in the SPIR-V specification.
9.13.5. Primitive
Any fragment shader invocations executed as the result of rasterization of a single primitive form a primitive scope instance.
There is no specific Scope
for communication across invocations in a
primitive scope instance.
Any generated helper invocations are included in this scope instance.
Each invocation in a primitive scope instance must be in the same command scope instance.
Any input variables decorated with Flat
are uniform within a primitive
scope instance.
9.13.6. Workgroup
A local workgroup is a set of invocations that can synchronize and share
data with each other using memory in the Workgroup
storage class.
The Workgroup
Scope
can be used as both an Execution
Scope
and Memory
Scope
for barrier and atomic operations.
Each invocation in a local workgroup must be in the same command scope instance.
Only compute shaders have defined workgroups - other shader types cannot use workgroup functionality. For shaders that have defined workgroups, this set of invocations forms an invocation group as defined in the SPIR-V specification.
9.13.7. Quad
A quad scope instance is formed of four shader invocations.
In a fragment shader, each invocation in a quad scope instance is formed of invocations in neighboring framebuffer locations (xi, yi), where:
- i is the index of the invocation within the scope instance.
- w and h are the number of pixels the fragment covers in the x and y axes.
- w and h are identical for all participating invocations.
- (x0) = (x1 - w) = (x2) = (x3 - w)
- (y0) = (y1) = (y2 - h) = (y3 - h)
- Each invocation has the same layer and sample indices.
The specific set of invocations that make up a quad scope instance in other shader stages is undefined.
In a fragment shader, each invocation in a quad scope instance must be in the same primitive scope instance.
For shaders that have defined workgroups, each invocation in a quad scope instance must be in the same local workgroup.
In other shader stages, each invocation in a quad scope instance must be in the same device scope instance.
Fragment shaders have defined quad scope instances.
9.13.8. Invocation
The smallest scope is a single invocation; this is represented by the
Invocation
Scope
in SPIR-V.
Fragment shader invocations must be in a primitive scope instance.
Invocations in shaders that have defined workgroups must be in a local workgroup.
Invocations in shaders that have a defined quad scope must be in a quad scope instance.
All invocations in all stages must be in a command scope instance.
9.14. Derivative Operations
Derivative operations calculate the partial derivative for an expression P as a function of an invocation’s x and y coordinates.
Derivative operations operate on a set of invocations known as a derivative group as defined in the SPIR-V specification. A derivative group is equivalent to the primitive scope instance for a fragment shader invocation.
Derivatives are calculated assuming that P is piecewise linear and
continuous within the derivative group.
All dynamic instances of explicit derivative instructions (OpDPdx*
,
OpDPdy*
, and OpFwidth*
) must be executed in control flow that is
uniform within a derivative group.
For other derivative operations, results are undefined if a dynamic
instance is executed in control flow that is not uniform within the
derivative group.
Fragment shaders that statically execute derivative operations must launch sufficient invocations to ensure their correct operation; additional helper invocations are launched for framebuffer locations not covered by rasterized fragments if necessary.
Derivative operations calculate their results as the difference between the
result of P across invocations in the quad.
For fine derivative operations (OpDPdxFine
and OpDPdyFine
), the
values of DPdx(Pi) are calculated as
- DPdx(P0) = DPdx(P1) = P1 - P0
- DPdx(P2) = DPdx(P3) = P3 - P2
and the values of DPdy(Pi) are calculated as
- DPdy(P0) = DPdy(P2) = P2 - P0
- DPdy(P1) = DPdy(P3) = P3 - P1
where i is the index of each invocation as described in Quad.
Coarse derivative operations (OpDPdxCoarse
and OpDPdyCoarse
),
calculate their results in roughly the same manner, but may only calculate
two values instead of four (one for each of DPdx and DPdy),
reusing the same result no matter the originating invocation.
If an implementation does this, it should use the fine derivative
calculations described for P0.
Note
Derivative values are calculated between fragments rather than pixels. If the fragment shader invocations involved in the calculation cover multiple pixels, these operations cover a wider area, resulting in larger derivative values. This in turn will result in a coarser level of detail being selected for image sampling operations using derivatives. Applications may want to account for this when using multi-pixel fragments; if pixel derivatives are desired, applications should use explicit derivative operations and divide the results by the size of the fragment in each dimension as follows:
DPdx(Pn)' = DPdx(Pn) / w
DPdy(Pn)' = DPdy(Pn) / h
where w and h are the size of the fragments in the quad, and DPdx(Pn)' and DPdy(Pn)' are the pixel derivatives.
The results for OpDPdx
and OpDPdy
may be calculated as either
fine or coarse derivatives, with implementations favouring the most
efficient approach.
Implementations must choose coarse or fine consistently between the two.
Executing OpFwidthFine
, OpFwidthCoarse
, or OpFwidth
is
equivalent to executing the corresponding OpDPdx*
and OpDPdy*
instructions, taking the absolute value of the results, and summing them.
Executing an OpImage*Sample*ImplicitLod
instruction is equivalent to
executing OpDPdx
(Coordinate
) and OpDPdy
(Coordinate
), and
passing the results as the Grad
operands dx
and dy
.
Note
It is expected that using the ImplicitLod variants of image sampling instructions will be at least as efficient as, and often more efficient than, computing derivatives explicitly and passing them as Grad operands to the corresponding ExplicitLod variants.
9.15. Helper Invocations
When performing derivative
operations in a fragment shader, additional invocations may be spawned in
order to ensure correct results.
These additional invocations are known as helper invocations and can be
identified by a non-zero value in the HelperInvocation
built-in.
Stores and atomics performed by helper invocations must not have any effect
on memory, and values returned by atomic instructions in helper invocations
are undefined.
For group operations other than derivative operations, helper invocations may be treated as inactive even if they would be considered otherwise active.
10. Pipelines
The following figure shows a block diagram of the Vulkan pipelines. Some Vulkan commands specify geometric objects to be drawn or computational work to be performed, while others specify state controlling how objects are handled by the various pipeline stages, or control data transfer between memory organized as images and buffers. Commands are effectively sent through a processing pipeline, either a graphics pipeline, or a compute pipeline.
The first stage of the graphics pipeline (Input Assembler) assembles vertices to form geometric primitives such as points, lines, and triangles, based on a requested primitive topology. In the next stage (Vertex Shader) vertices can be transformed, computing positions and attributes for each vertex. If tessellation and/or geometry shaders are supported, they can then generate multiple primitives from a single input primitive, possibly changing the primitive topology or generating additional attribute data in the process.
The final resulting primitives are clipped to a clip volume in preparation for the next stage, Rasterization. The rasterizer produces a series of fragments associated with a region of the framebuffer, from a two-dimensional description of a point, line segment, or triangle. These fragments are processed by fragment operations to determine whether generated values will be written to the framebuffer. Fragment shading determines the values to be written to the framebuffer attachments. Framebuffer operations then read and write the color and depth/stencil attachments of the framebuffer for a given subpass of a render pass instance. The attachments can be used as input attachments in the fragment shader in a later subpass of the same render pass.
The compute pipeline is a separate pipeline from the graphics pipeline, which operates on one-, two-, or three-dimensional workgroups which can read from and write to buffer and image memory.
This ordering is meant only as a tool for describing Vulkan, not as a strict rule of how Vulkan is implemented, and we present it only as a means to organize the various operations of the pipelines. Actual ordering guarantees between pipeline stages are explained in detail in the synchronization chapter.
Each pipeline is controlled by a monolithic object created from a description of all of the shader stages and any relevant fixed-function stages. Linking the whole pipeline together allows the optimization of shaders based on their input/outputs and eliminates expensive draw time state validation.
A pipeline object is bound to the current state using vkCmdBindPipeline. Any pipeline object state that is specified as dynamic is not applied to the current state when the pipeline object is bound, but is instead set by dynamic state setting commands.
No state, including dynamic state, is inherited from one command buffer to another.
Compute and graphics pipelines are each represented by VkPipeline handles:
// Provided by VK_VERSION_1_0
VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkPipeline)
10.1. Compute Pipelines
Compute pipelines consist of a single static compute shader stage and the pipeline layout.
The compute pipeline represents a compute shader and is created by calling
vkCreateComputePipelines
with module
and pName
selecting
an entry point from a shader module, where that entry point defines a valid
compute shader, in the VkPipelineShaderStageCreateInfo structure
contained within the VkComputePipelineCreateInfo structure.
To create compute pipelines, call:
// Provided by VK_VERSION_1_0
VkResult vkCreateComputePipelines(
VkDevice device,
VkPipelineCache pipelineCache,
uint32_t createInfoCount,
const VkComputePipelineCreateInfo* pCreateInfos,
const VkAllocationCallbacks* pAllocator,
VkPipeline* pPipelines);
- device is the logical device that creates the compute pipelines.
- pipelineCache is either VK_NULL_HANDLE, indicating that pipeline caching is disabled; or the handle of a valid pipeline cache object, in which case use of that cache is enabled for the duration of the command.
- createInfoCount is the length of the pCreateInfos and pPipelines arrays.
- pCreateInfos is a pointer to an array of VkComputePipelineCreateInfo structures.
- pAllocator controls host memory allocation as described in the Memory Allocation chapter.
- pPipelines is a pointer to an array of VkPipeline handles in which the resulting compute pipeline objects are returned.
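For illustration only (not part of the normative text), a single compute pipeline could be created as sketched below, using the VkComputePipelineCreateInfo structure described next; shaderModule is assumed to contain a compute shader entry point named "main", and pipelineLayout is assumed to have been created elsewhere.
// Illustrative only: creating one compute pipeline without a pipeline cache.
VkComputePipelineCreateInfo pipelineInfo = {
    .sType = VK_STRUCTURE_TYPE_COMPUTE_PIPELINE_CREATE_INFO,
    .pNext = NULL,
    .flags = 0,
    .stage = {
        .sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO,
        .pNext = NULL,
        .flags = 0,
        .stage = VK_SHADER_STAGE_COMPUTE_BIT,
        .module = shaderModule,
        .pName = "main",
        .pSpecializationInfo = NULL,
    },
    .layout = pipelineLayout,
    .basePipelineHandle = VK_NULL_HANDLE,
    .basePipelineIndex = -1,
};

VkPipeline computePipeline;
VkResult result = vkCreateComputePipelines(device, VK_NULL_HANDLE, 1, &pipelineInfo, NULL, &computePipeline);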