Copyright 2013-2018 The Khronos Group Inc.
This specification is protected by copyright laws and contains material proprietary to Khronos. Except as described by these terms, it or any components may not be reproduced, republished, distributed, transmitted, displayed, broadcast or otherwise exploited in any manner without the express prior written permission of Khronos.
This specification has been created under the Khronos Intellectual Property Rights Policy, which is Attachment A of the Khronos Group Membership Agreement available at www.khronos.org/files/member_agreement.pdf. Khronos Group grants a conditional copyright license to use and reproduce the unmodified specification for any purpose, without fee or royalty, EXCEPT no licenses to any patent, trademark or other intellectual property rights are granted under these terms. Parties desiring to implement the specification and make use of Khronos trademarks in relation to that implementation, and receive reciprocal patent license protection under the Khronos IP Policy must become Adopters and confirm the implementation as conformant under the process defined by Khronos for this specification; see https://www.khronos.org/adopters.
Khronos makes no, and expressly disclaims any, representations or warranties, express or implied, regarding this specification, including, without limitation: merchantability, fitness for a particular purpose, noninfringement of any intellectual property, correctness, accuracy, completeness, timeliness, and reliability. Under no circumstances will Khronos, or any of its Promoters, Contributors or Members, or their respective partners, officers, directors, employees, agents or representatives be liable for any damages, whether direct, indirect, special or consequential damages for lost revenues, lost profits, or otherwise, arising from or in connection with these materials.
Khronos is a registered trademark, and OpenVX is a trademark of The Khronos Group Inc. OpenCL is a trademark of Apple Inc., used under license by Khronos. All other product names, trademarks, and/or company names are used solely for identification and belong to their respective owners.
1. Introduction
1.1. Abstract
OpenVX is a low-level programming framework to enable software developers to efficiently access computer vision hardware acceleration with both functional and performance portability. OpenVX has been designed to support modern hardware architectures, such as mobile and embedded SoCs as well as desktop systems. Many of these systems are parallel and heterogeneous: they contain multiple processor types, including multi-core CPUs, DSP subsystems, GPUs, dedicated vision computing fabrics, and hardwired functionality. Additionally, vision system memory hierarchies can often be complex, distributed, and not fully coherent. OpenVX is designed to maximize functional and performance portability across these diverse hardware platforms, providing a computer vision framework that efficiently addresses current and future hardware architectures with minimal impact on applications.
OpenVX contains:

a library of predefined and customizable vision functions,

a graph-based execution model to combine vision functions, enabling both task- and data-independent execution, and

a set of memory objects that abstract the physical memory.
OpenVX defines a C Application Programming Interface (API) for building, verifying, and coordinating graph execution, as well as for accessing memory objects. The graph abstraction enables OpenVX implementers to optimize the execution of the graph for the underlying acceleration architecture.
OpenVX also defines the vxu utility library, which exposes each OpenVX predefined function as a directly callable C function, without the need for first creating a graph. Applications built using the vxu library do not benefit from the optimizations enabled by graphs; however, the vxu library can be useful as the simplest way to use OpenVX and as a first step in porting existing vision applications.
As the computer vision domain is still rapidly evolving, OpenVX provides an extensibility mechanism to enable developer-defined functions to be added to the application graph.
1.2. Purpose
The purpose of this document is to detail the Application Programming Interface (API) for OpenVX.
1.3. Scope of Specification
The document contains the definition of the OpenVX API. The conformance tests that are used to determine whether an implementation is consistent with this specification are defined separately.
1.4. Normative References
The section “Module Documentation” forms the normative part of the specification. Each API definition provided in that chapter has certain preconditions and postconditions specified that are normative. If these normative conditions are not met, the behavior of the function is undefined.
1.5. Version/Change History

OpenVX 1.0 Provisional - November 2013

OpenVX 1.0 Provisional V2 - June 2014

OpenVX 1.0 - September 2014

OpenVX 1.0.1 - April 2015

OpenVX 1.1 - May 2016

OpenVX 1.2 - May 2017

OpenVX 1.2.1 - May 2018
1.6. Deprecation
Certain items that have been deprecated through the evolution of this specification have been removed from it. However, to provide backward compatibility for such items for a limited time, they are made available via a compatibility header file released with this specification (VX/vx_compatibility.h). The items listed in this compatibility header file are temporary and will be removed permanently when backward compatibility is no longer supported for them.
1.7. Requirements Language
In this specification, the words “shall” or “must” express a requirement that is binding, “should” expresses design goals or recommended actions, and “may” expresses an allowed behavior.
1.8. Typographical Conventions
The following typographical conventions are used in this specification.

Bold words indicate warnings or strongly communicated concepts that are intended to draw attention to the text.

Monospace words signify an API element (i.e., class, function, structure) or a filename.

Italics denote an emphasis on a particular concept, an abstraction of a concept, or signify an argument, parameter, or member.

Throughout this specification, code examples given to highlight a particular issue use the format as shown below:
/* Example Code Section */
int main(int argc, char *argv[])
{
    return 0;
}

Some “mscgen” message diagrams are included in this specification. The graphical conventions for this tool can be found on its website.
1.8.1. Naming Conventions
The following naming conventions are used in this specification.

Opaque objects and atomics are named as vx_object, e.g., vx_image or vx_uint8, with an underscore separating the object name from the “vx” prefix.

Defined Structures are named as vx_struct_t, e.g., vx_imagepatch_addressing_t, with underscores separating the structure name from the “vx” prefix and a “t” to denote that it is a structure.

Defined Enumerations are named as vx_enum_e, e.g., vx_type_e, with underscores separating the enumeration name from the “vx” prefix and an “e” to denote that it is an enumerated value.

Application Programming Interfaces are named vxSomeFunction(), using camel case, starting with lowercase, and no underscores, e.g., vxCreateContext().

Vision functions also have a naming convention that follows a lowercase, inverse dotted hierarchy similar to Java packages, e.g., "org.khronos.openvx.color_convert". This minimizes the possibility of name collisions and promotes sorting and readability when querying the namespace of available vision functions. Each vision function should have a unique dotted name of the style tld.vendor.library.function. The hierarchy of such vision function namespaces is undefined outside the subdomain “org.khronos”, but should follow existing international standards. For OpenVX-specified vision functions, the “function” section of the unique name does not use camel case and uses underscores to separate words.
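The dotted kernel names are what applications pass when retrieving a kernel at run time. A minimal sketch, assuming an OpenVX implementation and its VX/vx.h header are available:

```c
#include <VX/vx.h>

/* Illustrative sketch (not normative): retrieving a kernel by its unique
 * dotted name. Requires a working OpenVX implementation to build and run. */
int main(void)
{
    vx_context context = vxCreateContext();
    vx_kernel kernel = vxGetKernelByName(context,
                                         "org.khronos.openvx.color_convert");
    if (vxGetStatus((vx_reference)kernel) == VX_SUCCESS)
    {
        /* The kernel may now be instantiated in a graph,
         * e.g., with vxCreateGenericNode(). */
        vxReleaseKernel(&kernel);
    }
    vxReleaseContext(&context);
    return 0;
}
```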
1.8.2. Vendor Naming Conventions
The following naming conventions are to be used for vendor specific extensions.

Opaque objects and atomics are named as vx_object_vendor, e.g., vx_ref_array_acme, with an underscore separating the vendor name from the object name.

Defined Structures are named as vx_struct_vendor_t, e.g., vx_mdview_acme_t, with an underscore separating the vendor from the structure name and a “t” to denote that it is a structure.

Defined Enumerations are named as vx_enum_vendor_e, e.g., vx_convolution_name_acme_e, with underscores separating the vendor from the enumeration name and an “e” to denote that it is an enumerated value.

Defined Enumeration values are named as VX_ENUMVALUE_VENDOR, e.g., VX_PARAM_STRUCT_ATTRIBUTE_SIZE_ACME, using only capital letters starting with the “VX” prefix, and underscores separating the words.

Application Programming Interfaces are named vxSomeFunctionVendor(), using camel case, starting with lowercase, and no underscores, e.g., vxCreateRefArrayAcme().
1.9. Glossary and Acronyms
Atomic

The specification mentions atomics, meaning a C primitive data type. Usages that have additional wording, such as “atomic operations”, do not carry this meaning.
 API

Application Programming Interface that specifies how a software component interacts with another.
 Framework

A generic software abstraction in which users can override behaviors to produce applicationspecific functionality.
 Engine

A purposespecific software abstraction that is tunable by users.
 Runtime

The execution phase of a program.
Kernel

OpenVX uses the term kernel to mean an abstract computer vision function, not an Operating System kernel. Kernel may also refer to a set of convolution coefficients in some computer vision literature (e.g., the Sobel “kernel”); OpenVX does not use this meaning. OpenCL uses kernel (specifically cl_kernel) to qualify a function written in “CL” that the OpenCL runtime may invoke directly. This is close to the meaning OpenVX uses; however, OpenVX does not define a language.
1.10. Acknowledgements
This specification would not be possible without the contributions of the following individuals (a partial list) from the Khronos Working Group and the companies they represented at the time:

Erik Rainey - Amazon

Radhakrishna Giduthuri - AMD

Mikael Bourges-Sevenier - Aptina Imaging Corporation

Dave Schreiner - ARM Limited

Renato Grottesi - ARM Limited

Hans-Peter Nilsson - Axis Communications

Amit Shoham - BDTi

Frank Brill - Cadence Design Systems

Thierry Lepley - Cadence Design Systems

Shorin Kyo - Huawei

Paul Buxton - Imagination Technologies

Steve Ramm - Imagination Technologies

Ben Ashbaugh - Intel

Mostafa Hagog - Intel

Andrey Kamaev - Intel

Yaniv Klein - Intel

Andy Kuzma - Intel

Tomer Schwartz - Intel

Alexander Alekhin - Itseez

Roman Donchenko - Itseez

Victor Erukhimov - Itseez

Vadim Pisarevsky - Itseez

Vlad Vinogradov - Itseez

Cormac Brick - Movidius Ltd

Anshu Arya - MulticoreWare

Shervin Emami - NVIDIA

Kari Pulli - NVIDIA

Neil Trevett - NVIDIA

Daniel Laroche - NXP Semiconductors

Susheel Gautam - QUALCOMM

Doug Knisely - QUALCOMM

Tao Zhang - QUALCOMM

Yuki Kobayashi - Renesas Electronics

Andrew Garrard - Samsung Electronics

Erez Natan - Samsung Electronics

Tomer Yanir - Samsung Electronics

Chang-Hyo Yu - Samsung Electronics

Olivier Pothier - STMicroelectronics International NV

Chris Tseng - Texas Instruments, Inc.

Jesse Villareal - Texas Instruments, Inc.

Jiechao Nie - VeriSilicon, Inc.

Shehrzad Qureshi - VeriSilicon, Inc.

Xin Wang - VeriSilicon, Inc.

Stephen Neuendorffer - Xilinx, Inc.
2. Design Overview
2.1. Software Landscape
OpenVX is intended to be used either directly by applications or as the acceleration layer for higher-level vision frameworks, engines or platform APIs.
2.2. Design Objectives
OpenVX is designed as a framework of standardized computer vision functions able to run on a wide variety of platforms and potentially to be accelerated by a vendor’s implementation on that platform. OpenVX can improve the performance and efficiency of vision applications by providing an abstraction for commonly used vision functions and an abstraction for aggregations of functions (a “graph”), thereby providing the implementer the opportunity to minimize the run-time overhead.
The functions in OpenVX are intended to cover common functionality required by many vision applications.
2.2.1. Hardware Optimizations
This specification makes no statements as to which acceleration methodology or techniques may be used in its implementation. Vendors may choose any number of implementation methods such as parallelism and/or specialized hardware offload techniques.
This specification also makes no statement or requirements on a “level of performance” as this may vary significantly across platforms and use cases.
2.2.2. Hardware Limitations
OpenVX focuses on vision functions that can be significantly accelerated by diverse hardware. Future versions of this specification may adopt additional vision functions into the core standard when hardware acceleration for those functions becomes practical.
2.3. Assumptions
2.3.1. Portability
OpenVX has been designed to maximize functional and performance portability wherever possible, while recognizing that the API is intended to be used on a wide diversity of devices with specific constraints and properties. Tradeoffs are made for portability where possible: for example, portable Graphs constructed using this API should work on any OpenVX implementation and return similar results within the precision bounds defined by the OpenVX conformance tests.
2.3.2. Opaqueness
OpenVX is intended to address a very broad range of devices and platforms, from deeply embedded systems to desktop machines and distributed computing architectures. The OpenVX API addresses this range of possible implementations without forcing hardwarespecific requirements onto any particular implementation via the use of opaque objects for most program data.
All data, except clientfacing structures, are opaque and hidden behind a reference that may be as thin or thick as an implementation needs. Each implementation provides the standardized interfaces for accessing data that takes care of specialized hardware, platform, or allocation requirements. Memory that is imported or shared from other APIs is not subsumed by OpenVX and is still maintained and accessible by the originator.
OpenVX does not dictate any requirements on memory allocation methods or the layout of opaque memory objects and it does not dictate byte packing or alignment for structures on architectures.
2.4. ObjectOriented Behaviors
OpenVX objects are both strongly typed at compile-time for safety-critical applications and strongly typed at run-time for dynamic applications. Each object has its typedef’d type and its associated enumerated value in the vx_type_e list. Any object may be safely down-cast to a vx_reference for use in functions that require this, specifically vxQueryReference, which can be used to get the vx_type_e value as a vx_enum.
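The down-cast and run-time type query described above can be sketched as follows; VX_REFERENCE_TYPE is the reference attribute name used in OpenVX 1.1 and later, and the sketch assumes an OpenVX implementation is available:

```c
#include <VX/vx.h>

/* Any OpenVX object may be cast to vx_reference and queried for its
 * run-time type (a vx_enum value from the vx_type_e list). */
int main(void)
{
    vx_context context = vxCreateContext();
    vx_image image = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);

    vx_enum type = VX_TYPE_INVALID;
    vxQueryReference((vx_reference)image, VX_REFERENCE_TYPE,
                     &type, sizeof(type));
    /* type now holds VX_TYPE_IMAGE */

    vxReleaseImage(&image);
    vxReleaseContext(&context);
    return 0;
}
```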
2.5. OpenVX Framework Objects
This specification defines the following OpenVX framework objects.

Object: Context - The OpenVX context is the object domain for all OpenVX objects. All data objects live in the context, as do all framework objects. The OpenVX context keeps reference counts on all objects and must perform garbage collection during its destruction to free lost references. While multiple clients may connect to the OpenVX context, all data are private in that the references that refer to data objects are given only to the creating party. The results of calling an OpenVX function on data objects created in different contexts are undefined.

Object: Kernel - A Kernel in OpenVX is the abstract representation of a computer vision function, such as a “Sobel Gradient” or “Lucas-Kanade Feature Tracking”. A vision function may implement many features similar or identical to those of other functions, but it is still considered a single, unique kernel as long as it is named by the same string and enumeration and conforms to the results specified by OpenVX. Kernels are similar to function signatures in this regard.

Object: Parameter - An abstract input, output, or bidirectional data object passed to a computer vision function. This object contains the signature of that parameter’s usage from the kernel description. This information includes:

Signature Index - The numbered index of the parameter in the signature.

Object Type - e.g., VX_TYPE_IMAGE, VX_TYPE_ARRAY, or some other object type from vx_type_e.

Usage Model - e.g., VX_INPUT, VX_OUTPUT, or VX_BIDIRECTIONAL.

Presence State - e.g., VX_PARAMETER_STATE_REQUIRED or VX_PARAMETER_STATE_OPTIONAL.


Object: Node - A node is an instance of a kernel that will be paired with a specific set of references (the parameters). Nodes are created from and associated with a single graph only. When a vx_parameter is extracted from a Node, an additional attribute can be accessed:

Reference - The vx_reference assigned to this parameter index from the Node creation function (e.g., vxSobel3x3Node).

Object: Graph - A set of nodes connected in a directed (only goes one way) acyclic (does not loop back) fashion. A Graph may have sets of Nodes that are unconnected to other sets of Nodes within the same Graph. See Graph Formalisms.
2.6. OpenVX Data Objects
Data objects are objects that are processed by the nodes of a graph.

Object: Array - An opaque array object that may be an array of a primitive data type or an array of structures.

Object: Convolution - An opaque object that contains an M × N matrix of vx_int16 values, as well as a scaling factor for normalization. Used specifically with vxuConvolve and vxConvolveNode.

Object: Delay - An opaque object that contains a manually controlled, temporally delayed list of objects.

Object: Distribution - An opaque object that contains a frequency distribution (e.g., a histogram).

Object: Image - An opaque image object that may be in some format in vx_df_image_e.

Object: LUT - An opaque lookup table object used with vxTableLookupNode and vxuTableLookup.

Object: Matrix - An opaque object that contains an M × N matrix of some scalar values.

Object: Pyramid - An opaque object that contains multiple levels of scaled vx_image objects.

Object: Remap - An opaque object that contains the map of source points to destination points used to transform images.

Object: Scalar - An opaque object that contains a single primitive data type.

Object: Threshold - An opaque object that contains the thresholding configuration.

Object: ObjectArray - An opaque array object that may be an array of any data object (not data type) of OpenVX except Delay and ObjectArray objects.

Object: Tensor - An opaque multidimensional data object. Used in functions like vxHOGFeaturesNode and vxHOGCellsNode, and in the Neural Networks extension.
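As an illustration of how these opaque objects are created and populated, the following sketch builds a 3×3 Convolution object and sets its scale. The Gaussian coefficient values are an illustrative choice; the sketch assumes an OpenVX 1.1+ implementation (vxCopyConvolutionCoefficients):

```c
#include <VX/vx.h>

int main(void)
{
    vx_context context = vxCreateContext();

    /* 3x3 Gaussian coefficients; the scale must be a power of two and
     * here equals the sum of the coefficients (16) for normalization. */
    vx_int16 gaussian[3][3] = {
        {1, 2, 1},
        {2, 4, 2},
        {1, 2, 1},
    };
    vx_uint32 scale = 16;

    vx_convolution conv = vxCreateConvolution(context, 3, 3);
    vxCopyConvolutionCoefficients(conv, (vx_int16 *)gaussian,
                                  VX_WRITE_ONLY, VX_MEMORY_TYPE_HOST);
    vxSetConvolutionAttribute(conv, VX_CONVOLUTION_SCALE,
                              &scale, sizeof(scale));

    /* conv may now be used with vxConvolveNode or vxuConvolve. */
    vxReleaseConvolution(&conv);
    vxReleaseContext(&context);
    return 0;
}
```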
2.7. Error Objects
Error objects are specialized objects that may be returned from other object creator functions when serious platform issues occur (i.e., out of memory or out of handles). These can be checked at the time of creation of these objects, but checking may also be put off until their use in other APIs or until verification time, in which case the implementation must return appropriate errors to indicate that an invalid object type was used.
vx_<object> obj = vxCreate<Object>(context, ...);
vx_status status = vxGetStatus((vx_reference)obj);
if (status == VX_SUCCESS) {
// object is good
}
2.8. Graph Concepts
The graph is the central computation concept of OpenVX. The purpose of using graphs to express the Computer Vision problem is to allow for the possibility of any implementation to maximize its optimization potential because all the operations of the graph and its dependencies are known ahead of time, before the graph is processed.
Graphs are composed of one or more nodes that are added to the graph through node creation functions. Graphs in OpenVX must be created ahead of processing time and verified by the implementation, after which they can be processed as many times as needed.
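The build-ahead, verify-once, process-many lifecycle can be sketched as follows. The image sizes and the choice of a Gaussian filter node are illustrative; the sketch assumes an OpenVX implementation:

```c
#include <VX/vx.h>

/* Minimal graph lifecycle: create, add nodes, verify, then process
 * as many times as needed. */
int main(void)
{
    vx_context context = vxCreateContext();
    vx_graph graph = vxCreateGraph(context);

    vx_image input  = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
    vx_image output = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);

    /* Nodes are added to the graph through node creation functions. */
    vx_node node = vxGaussian3x3Node(graph, input, output);

    if (vxVerifyGraph(graph) == VX_SUCCESS)
    {
        /* The verified graph may be processed repeatedly. */
        vxProcessGraph(graph);
    }

    vxReleaseNode(&node);
    vxReleaseImage(&input);
    vxReleaseImage(&output);
    vxReleaseGraph(&graph);
    vxReleaseContext(&context);
    return 0;
}
```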
2.8.1. Linking Nodes
Graph Nodes are linked together via data dependencies with no explicitlystated ordering. The same reference may be linked to other nodes. Linking has a limitation, however, in that only one node in a graph may output to any specific data object reference. That is, only a single writer of an object may exist in a given graph. This prevents indeterminate ordering from data dependencies. All writers in a graph shall produce output data before any reader of that data accesses it.
2.8.2. Virtual Data Objects
Graphs in OpenVX depend on data objects to link together nodes. When clients of OpenVX know that they do not need access to these intermediate data objects, the objects may be created as virtual. Virtual data objects can be used in the same manner as non-virtual data objects to link nodes of a graph together; however, virtual data objects differ in the following respects.

Inaccessible - No calls to the Map/Unmap or Copy APIs shall succeed given a reference to an object created through a virtual create function from a graph-external perspective. Calls to the Map/Unmap or Copy APIs from within a client-defined node that belongs to the same graph as the virtual object will succeed, as they are graph-internal.

Scoped - Virtual data objects are scoped within the Graph in which they are created; they cannot be shared outside their scope. The live range of the data content of a virtual data object is limited to a single graph execution. In other words, the data content of a virtual object is undefined before graph execution, and no data of a virtual object should be expected to be preserved across successive graph executions by the application.

Intermediates - Virtual data objects should be used only for intermediate operations within Graphs, because they are fundamentally inaccessible to clients of the API.

Dimensionless or Formatless - Virtual data objects may have dimensions and formats partially or fully undefined at creation time. For instance, a virtual image can be created with undefined or partially defined dimensions (0x0, Nx0, or 0xN where N is not null) and/or without a defined format (VX_DF_IMAGE_VIRT). A property that is undefined at virtual object creation time is mutable at graph verification time; it will be automatically adjusted at each graph verification, deduced from the node that outputs the virtual object. Dimension and format properties that are well defined at virtual object creation time are immutable and cannot be adjusted automatically at graph verification time.

Attributes - Even if a given Virtual data object does not have its dimensions or format completely defined, these attributes may still be queried. If queried before the object participates in a graph verification, the attribute value returned is what the user provided (e.g., “0” for a dimension). If queried after graph verification (or re-verification), the attribute value returned will be the value determined by the graph verification rules.

The Dimensionless or Formatless aspect of virtual data is a convenience that allows creating graphs that are generic with regard to dimensions or format, but there are restrictions:

Nodes may require the dimensions and/or the format to be defined for a virtual output object when they cannot be deduced from the node’s other parameters. For example, a Scale node requires well-defined dimensions for the output image, while ColorConvert and ChannelCombine nodes require a well-defined format for the output image.

An image created from an ROI must always be well defined (vx_rectangle_t parameter) and cannot be created from a dimensionless virtual image.

An ROI of a formatless virtual image should not be a node output.

A tensor created from a View must always be well defined and cannot be created from a dimensionless virtual tensor.

A view of a formatless virtual tensor should not be a node output.

Levels of a dimensionless or formatless virtual pyramid should not be a node output.


Inheritance - A sub-object inherits the virtual property of its parent. A sub-object also inherits the Dimensionless or Formatless property of its parent, with restrictions:

it is adjusted automatically at graph verification when the parent’s properties are adjusted (the parent is the output of a node);

it cannot be adjusted at graph verification when the sub-object is itself the output of a node.


Optimizations - Virtual data objects do not have to be created during Graph validation and execution and therefore may be of zero size.

These restrictions give vendors the ability to optimize some aspects of the data object or its usage. Some vendors may not allocate such objects, some may create intermediate sub-objects of the object, and some may allocate the object in remote, inaccessible memories. OpenVX does not prescribe which optimization the vendor applies, merely that it may happen.
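A formatless, dimensionless virtual image linking two nodes can be sketched as follows; its dimensions and format are deduced at graph verification from the node that writes it. The sketch assumes an OpenVX implementation:

```c
#include <VX/vx.h>

int main(void)
{
    vx_context context = vxCreateContext();
    vx_graph graph = vxCreateGraph(context);

    vx_image input  = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
    vx_image output = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);

    /* Dimensions 0x0 and format VX_DF_IMAGE_VIRT are left for the
     * implementation to deduce at vxVerifyGraph() time. */
    vx_image tmp = vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_VIRT);

    vx_node n1 = vxGaussian3x3Node(graph, input, tmp);
    vx_node n2 = vxGaussian3x3Node(graph, tmp, output);

    if (vxVerifyGraph(graph) == VX_SUCCESS)
        vxProcessGraph(graph);

    vxReleaseNode(&n1);
    vxReleaseNode(&n2);
    vxReleaseImage(&tmp);
    vxReleaseImage(&input);
    vxReleaseImage(&output);
    vxReleaseGraph(&graph);
    vxReleaseContext(&context);
    return 0;
}
```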
2.8.3. Node Parameters
Parameters to node creation functions are defined as either atomic types, such as vx_int32 or vx_enum, or as objects, such as vx_scalar or vx_image. The atomic variables of the Node creation functions shall be converted by the framework into vx_scalar references for use by the Nodes. A node parameter of type vx_scalar can be changed during graph execution, whereas a node parameter of an atomic type (vx_int32, etc.) requires at least a graph revalidation if changed.
All node parameter objects may be modified by retrieving the reference to the vx_parameter via vxGetParameterByIndex, and then passing that to vxQueryParameter to retrieve the reference to the object.
vx_parameter param = vxGetParameterByIndex(node, p);
vx_reference ref;
vxQueryParameter(param, VX_PARAMETER_REF, &ref, sizeof(ref));
If the type of the parameter is unknown, it may be retrieved with the same function.
vx_enum type;
vxQueryParameter(param, VX_PARAMETER_TYPE, &type, sizeof(type));
/* cast the ref to the correct vx_<type>. Atomics are now vx_scalar */
2.8.4. Graph Parameters
Parameters may exist on Graphs as well. These parameters are defined by the author of the Graph, and each Graph parameter is defined as a specific parameter of a Node within the Graph using vxAddParameterToGraph. Graph parameters communicate to the implementation that there are specific Node parameters that may be modified by the client between Graph executions. Additionally, they are parameters that the client may set without the reference to the Node, using only the reference to the Graph via vxSetGraphParameterByIndex. This allows Graph authors to construct Graph Factories. How these factories work falls outside the scope of this document.
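Exposing a node parameter as a graph parameter can be sketched as follows. Per this specification, parameter index 0 of vxGaussian3x3Node is its input image; the image sizes are illustrative, and the sketch assumes an OpenVX implementation:

```c
#include <VX/vx.h>

int main(void)
{
    vx_context context = vxCreateContext();
    vx_graph graph = vxCreateGraph(context);

    vx_image in  = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
    vx_image out = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
    vx_node node = vxGaussian3x3Node(graph, in, out);

    /* Promote the node's input parameter to graph parameter index 0. */
    vx_parameter param = vxGetParameterByIndex(node, 0);
    vxAddParameterToGraph(graph, param);
    vxReleaseParameter(&param);

    if (vxVerifyGraph(graph) == VX_SUCCESS)
    {
        /* Between executions, the client can retarget the input using
         * only the graph reference. */
        vx_image other = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
        vxSetGraphParameterByIndex(graph, 0, (vx_reference)other);
        vxProcessGraph(graph);
        vxReleaseImage(&other);
    }

    vxReleaseNode(&node);
    vxReleaseImage(&in);
    vxReleaseImage(&out);
    vxReleaseGraph(&graph);
    vxReleaseContext(&context);
    return 0;
}
```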
2.8.5. Execution Model
Graphs must execute in both:

Synchronous blocking mode (in which vxProcessGraph will block until the graph has completed), and in

Asynchronous single-issue-per-reference mode (via vxScheduleGraph and vxWaitGraph).
Asynchronous Mode
In asynchronous mode, Graphs must be single-issue-per-reference. This means that a given constructed graph reference G may be scheduled multiple times but only executes sequentially with respect to itself. Multiple graph references given to the asynchronous graph interface do not have a defined behavior and may execute in parallel or in series based on the behavior of the vendor’s implementation.
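The asynchronous interface can be sketched as follows; run_async is a hypothetical helper name, and the graph is assumed to have been verified already (see vxVerifyGraph). The sketch assumes an OpenVX implementation:

```c
#include <VX/vx.h>

/* Asynchronous single-issue-per-reference mode: schedule the graph,
 * do other work, then block until this graph reference completes. */
void run_async(vx_graph graph) /* hypothetical helper */
{
    if (vxScheduleGraph(graph) == VX_SUCCESS)
    {
        /* ... the client may perform unrelated work here ... */

        /* Blocks until this graph reference finishes executing. */
        vxWaitGraph(graph);
    }
}
```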
2.8.6. Graph Formalisms
To allow deterministic execution of Graphs, several rules must be put in place. The behavior of a vxProcessGraph(G) call is determined by the structure of the Processing Graph G. The Processing Graph is a bipartite graph consisting of a set of Nodes N_{1} … N_{n} and a set of data objects D_{1} … D_{i}. Each edge (N_{x}, D_{y}) in the graph represents a data object D_{y} that is written by Node N_{x}, and each edge (D_{x}, N_{y}) represents a data object D_{x} that is read by Node N_{y}. Each edge e has a name Name(e), which gives the parameter name of the node that references the corresponding data object. Each Node Parameter also has a type Type(node, name) in {INPUT, OUTPUT, INOUT}. Some data objects are Virtual, and some data objects are Delay. Delay data objects are just collections of data objects with indexing (like an image list) and known linking points in a graph. A node may be classified as a head node, which has no backward dependency. Alternatively, a node may be a dependent node, which has a backward dependency on a head node.
In addition, the Processing Graph has several restrictions:

Output typing - Every output edge (N_{x}, D_{y}) requires Type(N_{x}, Name(N_{x}, D_{y})) in {OUTPUT, INOUT}.

Input typing - Every input edge (D_{x}, N_{y}) requires Type(N_{y}, Name(D_{x}, N_{y})) in {INPUT, INOUT}.

Single Writer - Every data object is the target of at most one output edge.

Broken Cycles - Every cycle in G must contain at least one input edge (D_{x}, N_{y}) where D_{x} is Delay.

Virtual images must have a source - If D_{y} is Virtual, then there is at least one output edge (N_{x}, D_{y}) that writes D_{y}.

Bidirectional data objects shall not be virtual - Type(N_{x}, Name(N_{x}, D_{y})) is INOUT implies D_{y} is non-Virtual.

Delay data objects shall not be virtual - If D_{x} is Delay, then it shall not be Virtual.

A uniform image cannot be output or bidirectional.
The execution of each node in a graph consists of an atomic operation (sometimes referred to as firing) that consumes data representing each input data object, processes it, and produces data representing each output data object. A node may execute when all of its input edges are marked present. Before the graph executes, the following initial marking is used:

All input edges (D_{x}, N_{y}) from non-Virtual objects D_{x} are marked (parameters must be set).

All input edges (D_{x}, N_{y}) with an output edge (N_{z}, D_{x}) are unmarked.

All input edges (D_{x}, N_{y}) where D_{x} is a Delay data object are marked.

Processing a node results in unmarking all the corresponding input edges and marking all its output edges; marking an output edge (N_{x}, D_{y}) where D_{y} is not a Delay results in marking all of the input edges (D_{y}, N_{z}). Following these rules, it is possible to statically schedule the nodes in a graph as follows: construct a precedence graph P, including all the nodes N_{1} … N_{x}, and an edge (N_{x}, N_{z}) for every pair of edges (N_{x}, D_{y}) and (D_{y}, N_{z}) where D_{y} is not a Delay; then unconditionally fire each node according to any topological sort of P.
The following assertions should be verified:

P is a Directed Acyclic Graph (DAG), implied by 4 and the way it is constructed.

Every data object has a value when it is executed, implied by 5, 6, 7, and the marking.

Execution is deterministic if the nodes are deterministic, implied by 3, 4, and the marking.

Every node completes its execution exactly once.
The execution model described here acts only as a formalism. For example, independent processing is allowed across multiple dependent and depending nodes and edges, provided that the result is invariant with the execution model described here.
Contained & Overlapping Data Objects
There are cases in which two different data objects, referenced by an output parameter of node N_{1} and an input parameter of node N_{2} in a graph, induce a dependency between these two nodes: for example, a pyramid and its level images; an image and the sub-images created from it by vxCreateImageFromROI or vxCreateImageFromChannel; overlapping sub-images of the same image; or objects created from externally allocated buffers with overlap. If a graph uses objects created from externally allocated buffers with overlap, the behavior of graph verification and/or graph execution is implementation-dependent.
Following figure show examples of this dependency.
To simplify subsequent definitions and requirements a limitation is imposed
that if a subimage I^{'} has been created from image I and
subimage I^{''} has been created from I^{'}, then I^{''} is
still considered a subimage of I and not of I^{'}.
In these cases it is expected that although the two nodes reference two
different data objects, any change to one data object might be reflected in
the other one.
Therefore it implies that N_{1} comes before N_{2} in the graph’s
topological order.
To ensure this, the following definitions are introduced.

Containment Set - C(d), the set of recursively contained data objects of d, is defined as follows:

C_{0}(d) = {d}

C_{1}(d) is the set of all data objects that are directly contained by d:

If d is an image, all images created from an ROI or channel of d are directly contained by d.

If d is a pyramid, all pyramid levels of d are directly contained by d.

If d is an object array, all elements of d are directly contained by d.

If d is a delay object, all slots of d are directly contained by d.


For i > 1, C_{i}(d) is the set of all data objects that are contained by d at the i^{th} order

\(C_i(d)=\bigcup_{d'\in{C_{i-1}(d)}}C_1(d')\)


C(d) is the set that contains d itself, the data objects contained by d, the data objects that are contained by the data objects contained by d and so on. Formally:

\(C(d)=\bigcup_{i=0}^{\infty}C_i(d)\)



I(d) is a predicate that equals true if and only if d is an image.

Overlapping Relationship - The overlapping relation R_{ov} is a relation defined for images, such that if i_{1} and i_{2} are in C(i), i being an image, then i_{1} R_{ov} i_{2} is true if and only if i_{1} and i_{2} overlap, i.e., there exists a point (x,y) of i that is contained in both i_{1} and i_{2}. Note that this relation is reflexive and symmetric, but not transitive: i_{1} overlaps i_{2} and i_{2} overlaps i_{3} does not necessarily imply that i_{1} overlaps i_{3}, as illustrated in the following figure:
Figure 4. Overlap Example 
Dependency Relationship - The dependency relationship N_{1} → N_{2} is a relation defined for nodes. N_{1} → N_{2} means that N_{2} depends on N_{1}, which implies that N_{2} must be executed after the completion of N_{1}.

N_{1} → N_{2} if N_{1} writes to a data object d_{1} and N_{2} reads from a data object d_{2} and:

d_{1} ∈ C(d_{2}) or d_{2} ∈ C(d_{1}) or (I(d_{1}) and I(d_{2}) and d_{1} R_{ov} d_{2})

If data object D_{y} of an output edge (N_{x},D_{y}) overlaps with a data object D_{z}, then the result is implementation-defined.
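The overlap test underlying R_{ov} for rectangular sub-images can be sketched as follows. The rect_t type and the half-open coordinate convention [start, end) are assumptions made for illustration; this is not the OpenVX vx_rectangle_t API.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical rectangle in parent-image coordinates, half-open:
 * a pixel (x, y) belongs to r iff start <= coordinate < end. */
typedef struct { int start_x, start_y, end_x, end_y; } rect_t;

/* i1 R_ov i2: true iff some pixel (x, y) lies in both rectangles */
bool overlaps(rect_t a, rect_t b)
{
    return a.start_x < b.end_x && b.start_x < a.end_x &&
           a.start_y < b.end_y && b.start_y < a.end_y;
}
```

Three horizontally staggered rectangles show the non-transitivity noted above: with i1 covering columns [0,4), i2 covering [3,7), and i3 covering [6,10), i1 overlaps i2 and i2 overlaps i3, yet i1 does not overlap i3.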
2.8.7. Node Execution Independence
In the following example a client computes the gradient magnitude and
gradient phase from a blurred input image.
The vxMagnitudeNode
and vxPhaseNode
are independently
computed, in that each does not depend on the output of the other.
OpenVX does not mandate that they are run simultaneously or in parallel, but
it could be implemented this way by the OpenVX vendor.
The code to construct such a graph can be seen below.
vx_context context = vxCreateContext();
vx_image images[] = {
    vxCreateImage(context, 640, 480, VX_DF_IMAGE_UYVY),
    vxCreateImage(context, 640, 480, VX_DF_IMAGE_S16),
    vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8),
};
vx_graph graph = vxCreateGraph(context);
vx_image virts[] = {
    vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_VIRT),
    vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_VIRT),
    vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_VIRT),
    vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_VIRT),
};
vxChannelExtractNode(graph, images[0], VX_CHANNEL_Y, virts[0]);
vxGaussian3x3Node(graph, virts[0], virts[1]);
vxSobel3x3Node(graph, virts[1], virts[2], virts[3]);
vxMagnitudeNode(graph, virts[2], virts[3], images[1]);
vxPhaseNode(graph, virts[2], virts[3], images[2]);
vx_status status = vxVerifyGraph(graph);
if (status == VX_SUCCESS)
{
    status = vxProcessGraph(graph);
}
vxReleaseContext(&context); /* this will release everything */
2.8.8. Verification
Graphs within OpenVX must go through a rigorous validation process before execution to satisfy the design concept of eliminating runtime overhead (parameter checking) that guarantees safe execution of the graph. OpenVX must check for (but is not limited to) these conditions:
Parameters To Nodes:

Each required parameter is given to the node (vx_parameter_state_e). Optional parameters may not be present and therefore are not checked when absent. If present, they are checked.

Each parameter given to a node must be of the right direction (a value from
vx_direction_e
). 
Each parameter given to a node must be of the right object type (from the object range of
vx_type_e
). 
Each parameter attribute or value must be verified. In the case of a scalar value, it may need to be range checked (e.g., 0.5 ≤ k ≤ 1.0). The implementation is not required to do runtime range checking of scalar values. If the value of the scalar changes at run time to go outside the range, the results are undefined. The rationale is that the potential performance hit for runtime range checking is too large to be enforced. It will still be checked at graph verification time as a time-zero sanity check. If the scalar is an output parameter of another node, it must be initialized to a legal value. In the case of
vxScaleImageNode
, the relation of the input image dimensions to the output image dimensions determines the scaling factor. These values or attributes of data objects must be checked for compatibility on each platform. 
Graph Connectivity - the
vx_graph
must be a Directed Acyclic Graph (DAG). No cycles or feedback are allowed. The vx_delay
object has been designed to explicitly address feedback between Graph executions. 
Resolution of Virtual Data Objects - Any changes to virtual data objects from unspecified to specific format or dimensions, as well as the related creation of objects of specific type that are observable at processing time, take place at verification time.
The implementation must check that all node parameters are the correct type
at node creation time, unless the parameter value is set to NULL
.
Additional checks may also be made on non-NULL
parameters.
The user must be allowed to set parameters to NULL
at node creation time,
even if they are required parameters, in order to create “exemplar” nodes
that are not used in graph execution, or to create nodes incrementally.
Therefore the implementation must not generate an error at node creation
time for parameters that are explicitly set to NULL
.
However, the implementation must check that all required parameters are
non-NULL and of the correct type during vxVerifyGraph.
Other more complex checks may also be done during vxVerifyGraph
.
The implementation should provide specific error reporting of NULL
parameters during vxVerifyGraph, e.g., “Parameter <parameter> of
Node <node> is NULL.”
2.9. Callbacks
Callbacks are a method to control graph flow and to make decisions based on
completed work.
The vxAssignNodeCallback
call takes as a parameter a callback
function.
This function will be called after the execution of the particular node, but
prior to the completion of the graph.
If nodes are arranged into independent sets, the order of the callbacks is
unspecified.
Nodes that are arranged in a serial fashion due to data dependencies perform
callbacks in order.
The callback function may use the node reference first to extract parameters
from the node, and then extract the data references.
Data outputs of Nodes with callbacks shall be available (via Map/Unmap/Copy
methods) when the callback is called.
2.10. User Kernels
OpenVX supports the concept of client-defined functions that shall be executed as Nodes from inside the Graph or are Graph internal. The purpose of this paradigm is to:

Further exploit independent operation of nodes within the OpenVX platform.

Allow componentized functions to be reused elsewhere in OpenVX.

Formalize strict verification requirements (i.e., Contract Programming).
In this example, to execute client-supplied functions, the graph does not have to be halted and then resumed. These nodes shall be executed in an independent fashion with respect to independent base nodes within OpenVX. This allows implementations to further minimize execution time if hardware to exploit this property exists.
2.10.1. Parameter Validation
User Kernels must aid in the Graph Verification effort by providing an explicit validation function for each vision function they implement. Each parameter passed to the instanced Node of a User Kernel is validated using the client-supplied validation function. The client must check these attributes and/or values of each parameter:

Each attribute or value of the parameter must be checked. For example, the size of array, or the value of a scalar to be within a range, or a dimensionality constraint of an image such as width divisibility. (Some implementations may have restrictions, such as an image width be evenly divisible by some fixed number).

If the output parameters depend on attributes or values from input parameters, those relationships must be checked.
The Meta Format Object
The Meta Format Object is an opaque object used to collect requirements about the output parameter, which the OpenVX implementation will then check. The Client must manually set relevant object attributes to be checked against output parameters, such as dimensionality, format, scaling, etc.
2.10.2. User Kernels Naming Conventions
User Kernels must be exported with a unique name (see Naming Conventions for information on OpenVX conventions) and a unique enumeration. Clients of OpenVX may use either the name or enumeration to retrieve a kernel, so collisions due to non-unique names will cause problems. The kernel enumerations may be extended by following this example:
#define VX_KERNEL_NAME_KHR_XYZ "org.khronos.example.xyz"
/*! \brief The XYZ Example Library Set
* \ingroup group_xyz_ext
*/
#define VX_LIBRARY_XYZ (0x3) // assigned from Khronos, vendors control their own
/*! \brief The list of XYZ Kernels.
* \ingroup group_xyz_ext
*/
enum vx_kernel_xyz_ext_e {
    /*! \brief The Example User Defined Kernel */
    VX_KERNEL_KHR_XYZ = VX_KERNEL_BASE(VX_ID_DEFAULT, VX_LIBRARY_XYZ) + 0x0,
    // up to 0xFFF kernel enums can be created.
};
Each vendor of a vision function or an implementation must apply to Khronos
to get a unique identifier (up to a limit of 2^{12} - 1 vendors).
Until they obtain a unique ID vendors must use VX_ID_DEFAULT
.
To construct a kernel enumeration, a vendor must have both their ID and a
library ID.
The library IDs are completely vendor-defined (however, when using the
VX_ID_DEFAULT
ID, many libraries may collide in the namespace).
Once both are defined, a kernel enumeration may be constructed using the
VX_KERNEL_BASE
macro and an offset.
(The offset is optional, but very helpful for long enumerations.)
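As a sketch of how such an enumeration might be packed, the macro below combines a vendor ID, a library ID, and a kernel offset. The shift amounts and all MY_* names are illustrative assumptions; the normative layout is whatever VX_KERNEL_BASE defines in the implementation's headers. The low 12 bits are reserved for the offset here, matching the "up to 0xFFF kernel enums" note in the example above.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical packing: vendor ID in bits 20+, library ID in bits
 * 12..19, kernel offset in the low 12 bits. */
#define MY_KERNEL_BASE(vendor, lib) (((vendor) << 20) | ((lib) << 12))

#define MY_ID_EXAMPLE  (0x7)   /* hypothetical vendor ID */
#define MY_LIBRARY_XYZ (0x3)   /* vendor-defined library ID */

enum my_kernel_xyz_e {
    /* offset 0x0: first kernel in this vendor library */
    MY_KERNEL_XYZ = MY_KERNEL_BASE(MY_ID_EXAMPLE, MY_LIBRARY_XYZ) + 0x0,
};
```

Because vendor and library occupy disjoint bit fields, every (vendor, library, offset) triple yields a distinct enum value, which is what makes the kernel enumeration globally unique.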
2.11. Immediate Mode Functions
OpenVX also contains an interface defined within <VX/vxu.h>
that allows
for immediate execution of vision functions.
These interfaces are prefixed with vxu
to distinguish them from the Node
interfaces, which are of the form vx<Name>Node
.
Each of these interfaces replicates a Node interface with some exceptions.
Immediate mode functions are defined to behave as Single Node Graphs,
which have no leaking side-effects (e.g., no Log entries) within the Graph
Framework after the function returns.
The following tables refer to both the Immediate Mode and Graph Mode vision
functions.
The Module documentation for each vision function draws a distinction on
each API by noting that it is either an immediate mode function with the tag
[Immediate]
or it is a Graph mode function by the tag [Graph]
.
2.12. Targets
A 'Target' specifies a physical or logical device on which a node or an
immediate mode function is executed.
This allows the use of different implementations of vision functions on
different targets.
The existence of allowed Targets is exposed to the applications by the use
of defined APIs.
The choice of a Target allows for different levels of control on where the
nodes can be executed.
An OpenVX implementation must support at least one target.
Additional supported targets are specified using the appropriate
enumerations.
See vxSetNodeTarget
, vxSetImmediateModeTarget
, and
vx_target_e
.
An OpenVX implementation must support at least the
VX_TARGET_ANY
and VX_TARGET_STRING
enumerates.
An OpenVX implementation may also support more than these two to indicate
the use of specific devices.
For example, an implementation may add VX_TARGET_CPU
and
VX_TARGET_GPU
enumerates to indicate the support of two possible
targets to which a node can be assigned (or on which an immediate mode function can be executed).
Another way an implementation can indicate the existence of multiple
targets, for example CPU and GPU, is by specifying the target as
VX_TARGET_STRING
and using the strings 'CPU' and 'GPU',
thus defining targets using names rather than enumerates.
The specific naming of strings or enumerates is not enforced by the
specification; it is up to vendors to document and communicate the
target naming.
Once available in a given implementation, applications can assign a target
to a node, specifying the target that must execute that node, by using the
API vxSetNodeTarget
.
For immediate mode functions the target specifies the physical or logical
device where the future execution of that function will be attempted.
When an immediate mode function is not supported on the selected target, the
execution falls back to VX_TARGET_ANY
.
2.13. Base Vision Functions
OpenVX comes with a standard or base set of vision functions. The following table lists the supported set of vision functions, their input types (first table) and output types (second table), and the version of OpenVX in which they are supported.
2.13.1. Inputs
Vision Function  S8  U8  U16  S16  U32  F32  color  other 

AbsDiff 
1.0 
1.0.1 

Accumulate 
1.0 

AccumulateSquared 
1.0 

AccumulateWeighted 
1.0 

Add 
1.0 
1.0 

And 
1.0 

BilateralFilter 
1.2 
1.2 

Box3x3 
1.0 

CannyEdgeDetector 
1.0 

ChannelCombine 
1.0 

ChannelExtract 
1.0 

ColorConvert 
1.0 

ConvertDepth 
1.0 
1.0 

Convolve 
1.0 

Data Object Copy 
1.2 

Dilate3x3 
1.0 

EqualizeHistogram 
1.0 

Erode3x3 
1.0 

FastCorners 
1.0 

Gaussian3x3 
1.0 

GaussianPyramid 
1.1 

HarrisCorners 
1.0 

HalfScaleGaussian 
1.0 

Histogram 
1.0 

HOGCells 
1.2 

HOGFeatures 
1.2 

HoughLinesP 
1.2 

IntegralImage 
1.0 

LaplacianPyramid 
1.1 

LaplacianReconstruct 
1.1 

LBP 
1.2 

Magnitude 
1.0 

MatchTemplate 
1.2 

MeanStdDev 
1.0 

Median3x3 
1.0 

Max 
1.2 
1.2 

Min 
1.2 
1.2 

MinMaxLoc 
1.0 
1.0 

Multiply 
1.0 
1.0 

NonLinearFilter 
1.1 

NonMaximaSuppression 
1.2 
1.2 

Not 
1.0 

OpticalFlowPyrLK 
1.0 

Or 
1.0 

Phase 
1.0 

GaussianPyramid 
1.0 

Remap 
1.0 

ScaleImage 
1.0 

Sobel3x3 
1.0 

Subtract 
1.0 
1.0 

TableLookup 
1.0 
1.1 

TensorMultiply 
1.2 
1.2 
1.2 

TensorAdd 
1.2 
1.2 
1.2 

TensorSubtract 
1.2 
1.2 
1.2 

TensorMatrixMultiply 
1.2 
1.2 
1.2 

TensorTableLookup 
1.2 
1.2 
1.2 

TensorTranspose 
1.2 
1.2 
1.2 

Threshold 
1.0 
1.1 

WarpAffine 
1.0 

WarpPerspective 
1.0 

Xor 
1.0 
2.13.2. Outputs
Vision Function  S8  U8  U16  S16  U32  F32  color  other 

AbsDiff 
1.0 
1.0.1 

Accumulate 
1.0 

AccumulateSquared 
1.0 

AccumulateWeighted 
1.0 

Add 
1.0 
1.0 

And 
1.0 

BilateralFilter 
1.2 
1.2 

Box3x3 
1.0 

CannyEdgeDetector 
1.0 

ChannelCombine 
1.0 

ChannelExtract 
1.0 

ColorConvert 
1.0 

ConvertDepth 
1.0 
1.0 

Convolve 
1.0 
1.0 

Data Object Copy 
1.2 

Dilate3x3 
1.0 

EqualizeHistogram 
1.0 

Erode3x3 
1.0 

FastCorners 
1.0 

Gaussian3x3 
1.0 

GaussianPyramid 
1.1 

HarrisCorners 
1.0 

HalfScaleGaussian 
1.0 

Histogram 
1.0 

HOGCells 
1.2 
1.2 

HOGFeatures 
1.2 
1.2 

HoughLinesP 
1.2 

IntegralImage 
1.0 

LaplacianPyramid 
1.1 

LaplacianReconstruct 
1.1 

LBP 
1.2 

Magnitude 
1.0 

MatchTemplate 
1.2 

MeanStdDev 
1.0 

Median3x3 
1.0 

Max 
1.2 
1.2 

Min 
1.2 
1.2 

MinMaxLoc 
1.0 
1.0 
1.0 

Multiply 
1.0 
1.0 

NonLinearFilter 
1.1 

NonMaximaSuppression 
1.2 
1.2 

Not 
1.0 

OpticalFlowPyrLK 

Or 
1.0 

Phase 
1.0 

GaussianPyramid 
1.0 

Remap 
1.0 

ScaleImage 
1.0 

Sobel3x3 
1.0 

Subtract 
1.0 
1.0 

TableLookup 
1.0 
1.1 

TensorMultiply 
1.2 
1.2 
1.2 

TensorAdd 
1.2 
1.2 
1.2 

TensorSubtract 
1.2 
1.2 
1.2 

TensorMatrixMultiply 
1.2 
1.2 
1.2 

TensorTableLookup 
1.2 
1.2 
1.2 

TensorTranspose 
1.2 
1.2 
1.2 

Threshold 
1.0 

WarpAffine 
1.0 

WarpPerspective 
1.0 

Xor 
1.0 
2.13.3. Parameter ordering convention
For vision functions, the input and output parameter ordering convention is:

Mandatory inputs

Optional inputs

Mandatory in/outs

Optional in/outs

Mandatory outputs

Optional outputs
The known exceptions are:
2.14. Lifecycles
2.14.1. OpenVX Context Lifecycle
The lifecycle of the context is very simple.
2.14.2. Graph Lifecycle
OpenVX has four main phases of graph lifecycle:

Construction - Graphs are created via vxCreateGraph, and Nodes are connected together by data objects.

Verification - The graphs are checked for consistency, correctness, and other conditions. Memory allocation may occur.

Execution - The graphs are executed via vxProcessGraph or vxScheduleGraph. Between executions data may be updated by the client or some other external mechanism. The client of OpenVX may change references of input data to a graph, but this may require the graph to be validated again by checking vxIsGraphVerified.

Deconstruction - Graphs are released via vxReleaseGraph. All Nodes in the Graph are released.
2.14.3. Data Object Lifecycle
All objects in OpenVX follow a similar lifecycle model. All objects are

Created via vxCreate<Object><Method> or retrieved via vxGet<Object><Method> from the parent object if they are internally created.

Used within Graphs or immediate functions as needed.

Then objects must be released via vxRelease<Object> or via vxReleaseContext when all objects are released.
OpenVX Image Lifecycle
This is an example of the Image Lifecycle using the OpenVX Framework API. This would also apply to other data types with changes to the types and function names.
2.15. Host Memory Data Object Access Patterns
For objects retrieved from OpenVX that are 2D in nature, such as
vx_image
, vx_matrix
, and vx_convolution
, the manner in
which the host side has access to these memory regions is well-defined.
OpenVX uses row-major storage (that is, each unit is memory-adjacent
to the next unit in its row).
Two-dimensional objects are always created (using vxCreateImage
or
vxCreateMatrix
) in width (columns) by height (rows) notation, with the
arguments in that order.
When accessing these structures in “C” with two-dimensional arrays of
declared size, the user must therefore provide the array dimensions in the
reverse of the order of the arguments to the Create function.
This layout ensures row-wise storage in C on the host.
A pointer could also be allocated for the matrix data and would have to be
indexed in this row-major method.
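The reversed-dimension rule can be illustrated with plain C index arithmetic. The row_major_index helper is hypothetical, introduced only for illustration; it is not an OpenVX API.

```c
#include <assert.h>
#include <stddef.h>

/* An object created as width (columns) x height (rows) is declared in C
 * with the dimensions reversed, mat[rows][columns]. A flat allocation of
 * the same data is indexed the same row-major way: consecutive columns
 * of one row are adjacent in memory. */
size_t row_major_index(size_t row, size_t column, size_t columns)
{
    return row * columns + column;
}
```

For a 3-column object, element (row 1, column 0) lands at flat index 3, immediately after the three elements of row 0, which is exactly the mat[j][i] versus mat[j*columns + i] equivalence used in the matrix example below.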
2.15.1. Matrix Access Example
const vx_size columns = 3;
const vx_size rows = 4;
vx_matrix matrix = vxCreateMatrix(context, VX_TYPE_FLOAT32, columns, rows);
vx_status status = vxGetStatus((vx_reference)matrix);
if (status == VX_SUCCESS)
{
    vx_int32 j, i;
#if defined(OPENVX_USE_C99)
    vx_float32 mat[rows][columns]; /* note: row major */
#else
    vx_float32 *mat = (vx_float32 *)malloc(rows*columns*sizeof(vx_float32));
#endif
    if (vxCopyMatrix(matrix, mat, VX_READ_ONLY, VX_MEMORY_TYPE_HOST) == VX_SUCCESS) {
        for (j = 0; j < (vx_int32)rows; j++)
            for (i = 0; i < (vx_int32)columns; i++)
#if defined(OPENVX_USE_C99)
                mat[j][i] = (vx_float32)rand()/(vx_float32)RAND_MAX;
#else
                mat[j*columns + i] = (vx_float32)rand()/(vx_float32)RAND_MAX;
#endif
        vxCopyMatrix(matrix, mat, VX_WRITE_ONLY, VX_MEMORY_TYPE_HOST);
    }
#if !defined(OPENVX_USE_C99)
    free(mat);
#endif
}
2.15.2. Image Access Example
Images and Arrays differ slightly in how they are accessed, as images have more complex memory layout requirements.
vx_status status = VX_SUCCESS;
void *base_ptr = NULL;
vx_uint32 width = 640, height = 480, plane = 0;
vx_image image = vxCreateImage(context, width, height, VX_DF_IMAGE_U8);
vx_rectangle_t rect;
vx_imagepatch_addressing_t addr;
vx_map_id map_id;
rect.start_x = rect.start_y = 0;
rect.end_x = rect.end_y = PATCH_DIM;
status = vxMapImagePatch(image, &rect, plane, &map_id,
                         &addr, &base_ptr,
                         VX_READ_AND_WRITE, VX_MEMORY_TYPE_HOST, 0);
if (status == VX_SUCCESS)
{
    vx_uint32 x, y, i, j;
    vx_uint8 pixel = 0;
    /* a couple addressing options */
    /* use linear addressing function/macro */
    for (i = 0; i < addr.dim_x*addr.dim_y; i++) {
        vx_uint8 *ptr2 = vxFormatImagePatchAddress1d(base_ptr, i, &addr);
        *ptr2 = pixel;
    }
    /* 2d addressing option */
    for (y = 0; y < addr.dim_y; y += addr.step_y) {
        for (x = 0; x < addr.dim_x; x += addr.step_x) {
            vx_uint8 *ptr2 = vxFormatImagePatchAddress2d(base_ptr, x, y, &addr);
            *ptr2 = pixel;
        }
    }
    /* direct addressing by client
     * for subsampled planes, scale will change
     */
    for (y = 0; y < addr.dim_y; y += addr.step_y) {
        for (x = 0; x < addr.dim_x; x += addr.step_x) {
            vx_uint8 *tmp = (vx_uint8 *)base_ptr;
            i = ((addr.stride_y*y*addr.scale_y) / VX_SCALE_UNITY) +
                ((addr.stride_x*x*addr.scale_x) / VX_SCALE_UNITY);
            tmp[i] = pixel;
        }
    }
    /* more efficient direct addressing by client.
     * for subsampled planes, scale will change.
     */
    for (y = 0; y < addr.dim_y; y += addr.step_y) {
        j = (addr.stride_y*y*addr.scale_y)/VX_SCALE_UNITY;
        for (x = 0; x < addr.dim_x; x += addr.step_x) {
            vx_uint8 *tmp = (vx_uint8 *)base_ptr;
            i = j + (addr.stride_x*x*addr.scale_x) / VX_SCALE_UNITY;
            tmp[i] = pixel;
        }
    }
    /* this commits the data back to the image. */
    status = vxUnmapImagePatch(image, map_id);
}
vxReleaseImage(&image);
2.15.3. Array Access Example
Arrays only require a single value, the stride, instead of the entire addressing structure that images need.
vx_size i, stride = sizeof(vx_size);
void *base = NULL;
vx_map_id map_id;
/* access entire array at once */
vxMapArrayRange(array, 0, num_items, &map_id, &stride, &base, VX_READ_AND_WRITE, VX_MEMORY_TYPE_HOST, 0);
for (i = 0; i < num_items; i++)
{
    vxArrayItem(mystruct, base, i, stride).some_uint += i;
    vxArrayItem(mystruct, base, i, stride).some_double = 3.14f;
}
vxUnmapArrayRange(array, map_id);
Map/Unmap pairs can also be called on individual elements of an array using a method similar to this:
/* access each array item individually */
for (i = 0; i < num_items; i++)
{
    mystruct *myptr = NULL;
    vxMapArrayRange(array, i, i+1, &map_id, &stride, (void **)&myptr, VX_READ_AND_WRITE, VX_MEMORY_TYPE_HOST, 0);
    myptr->some_uint += 1;
    myptr->some_double = 3.14f;
    vxUnmapArrayRange(array, map_id);
}
2.16. Concurrent Data Object Access
Accessing OpenVX data objects using the Map, Copy, and Read functions
concurrently with an execution of a graph that is accessing the same data
objects is permitted only if all accesses are read-only.
That is, Map and Copy must have a read-only access mode and nodes in the
graph must have that data object as an input parameter only.
In all other cases, including write or read-write modes and the Write access
function, as well as graph nodes having the data object as an output or
bidirectional parameter, the application must guarantee that the access is
not performed concurrently with the graph execution.
That can be achieved by calling unmap following a map before calling
vxScheduleGraph
or vxProcessGraph
.
In addition, the application must call vxWaitGraph
after
vxScheduleGraph
before calling Map, Read, Write or Copy to avoid
restricted concurrent access.
An application that fails to follow the above may encounter undefined
behavior and/or data loss without being notified by the OpenVX framework.
Accessing images created from ROI (vxCreateImageFromROI
) or created
from a channel (vxCreateImageFromChannel
) must be treated as if the
entire image is being accessed.

Setting an attribute is considered as writing to a data object in this respect.

For concurrent execution of several graphs, please see Execution Model.

Also see the graph formalism section for guidance on accessing ROIs of the same image within a graph.
2.17. Valid Image Region
The valid region mechanism informs the application as to which pixels of the output images of a graph’s execution have valid values (see valid pixel definition below). The mechanism also applies to immediate mode (VXU) calls, and supports the communication of the valid region between different graph executions. Some vision functions, mainly those providing statistics and summarization of image information, use the valid region to ignore pixels that are not valid on their inputs (potentially bad or unstable pixel values). A good example of such a function is Min/Max Location. Formalization of the valid region mechanism is given below.

Valid Pixels - All output pixels of an OpenVX function are considered valid by default, unless their calculation depends on input pixels that are not valid. An input pixel is not valid in one of two situations:

The pixel is outside of the image border and the border mode in use is
VX_BORDER_UNDEFINED

The pixel is outside the valid region of the input image.


Valid Region - The region in the image that contains all the valid pixels. Theoretically this can be of any shape. OpenVX currently supports only rectangular valid regions. In subsequent text the term 'valid rectangle' denotes a valid region that is rectangular in shape.

Valid Rectangle Reset - In some cases it is not possible to calculate a valid rectangle for the output image of a vision function (for example, warps and remap). In such cases, the vision function is said to reset the valid region to the entire image. The attribute VX_NODE_VALID_RECT_RESET is a read-only attribute and is used to communicate valid rectangle reset behavior to the application. When it is set to vx_true_e for a given node, the valid rectangle of the output images will reset to the full image upon execution of the node; when it is set to vx_false_e, the valid rectangle will be calculated. All standard OpenVX functions will have this attribute set to vx_false_e by default, except for Warp and Remap, where it will be set to vx_true_e. 
Valid Rectangle Initialization - Upon the creation of an image, its valid rectangle is the entire image. One exception to this is when creating an image via vxCreateImageFromROI; in that case, the valid region of the ROI image is the subset of the valid region of the parent image that is within the ROI. In other words, the valid region of an image created using an ROI is the largest rectangle that contains valid pixels in the parent image. 
Valid Rectangle Calculation - The valid rectangle of an image changes as part of the graph execution; the correct value is guaranteed only when the execution finishes. The valid rectangle of an image remains unchanged between graph executions and persists between graph executions as long as the application does not explicitly change the valid region via vxSetImageValidRectangle. Notice that using vxMapImagePatch, vxUnmapImagePatch, or vxSwapImageHandle does not change the valid region of an image. If a non-UNDEFINED border mode is used on an image where the valid region is not the full image, the results at the border and the resulting size of the valid region are implementation-dependent. This case can occur when mixing the UNDEFINED and other border modes, which is not recommended. 
Valid Rectangle for Immediate Mode (VXU) - VXU is considered a single node graph execution; thus the valid rectangle of an output of a VXU call will be propagated as an input to a subsequent VXU call (when using the same output image from one call as input to the consecutive call).

Valid Region Usage - For all standard OpenVX functions, the framework must guarantee that all pixel values inside the valid rectangle of the output images are valid. The framework does not guarantee that input pixels outside of the valid rectangle are processed. For the following vision functions, the framework guarantees that pixels outside of the valid rectangle do not participate in calculating the vision function result: Equalize Histogram, Integral Image, Fast Corners, Histogram, Mean and Standard Deviation, Min Max Location, Optical Flow Pyramid (LK), and Canny Edge Detector. An application can get the valid rectangle of an image by using vxGetValidRegionImage. 
User kernels - User kernels may change the valid rectangles of their output images. To change the valid rectangle, the programmer of the user kernel must provide a callback function that sets the valid rectangle. The output validator of the user kernel must provide this callback by setting the value of the vx_meta_format attribute VX_VALID_RECT_CALLBACK during the output validator. The callback function must be callable by the OpenVX framework during graph validation and execution. Assumptions must not be made regarding the order and the frequency by which the valid rectangle callback is called. The framework will recalculate the valid region when a change in the input valid regions is detected. For user nodes, the default value of VX_NODE_VALID_RECT_RESET is vx_true_e. Setting VX_VALID_RECT_CALLBACK during parameter validation to a value other than NULL will result in setting VX_NODE_VALID_RECT_RESET to vx_false_e. Note: the above means that when VX_VALID_RECT_CALLBACK is not set or is set to NULL, the user node will reset the valid rectangle to the entire image. 
In addition, valid rectangle reset occurs in the following scenarios:

The valid rectangle of a parent image is reset when a node writes to one of its ROIs. The only case where the reset does not occur is when the child ROI image is identical to the parent image.

For nodes that have the VX_NODE_VALID_RECT_RESET attribute set to vx_true_e.
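The valid-rectangle initialization rule for ROI images described above (the ROI's valid region is the intersection of the parent's valid region with the ROI) can be sketched as follows. The rect_t type and the roi_valid_rect helper are illustrative assumptions, not OpenVX APIs; coordinates are half-open [start, end).

```c
#include <assert.h>

/* Hypothetical rectangle type in parent-image coordinates */
typedef struct { int start_x, start_y, end_x, end_y; } rect_t;

/* Returns the valid rectangle of an ROI image, expressed in the ROI's
 * own coordinate system. */
rect_t roi_valid_rect(rect_t parent_valid, rect_t roi)
{
    rect_t r;
    /* clip the parent's valid region to the ROI ... */
    r.start_x = parent_valid.start_x > roi.start_x ? parent_valid.start_x : roi.start_x;
    r.start_y = parent_valid.start_y > roi.start_y ? parent_valid.start_y : roi.start_y;
    r.end_x   = parent_valid.end_x   < roi.end_x   ? parent_valid.end_x   : roi.end_x;
    r.end_y   = parent_valid.end_y   < roi.end_y   ? parent_valid.end_y   : roi.end_y;
    /* ... then shift into ROI-relative coordinates */
    r.start_x -= roi.start_x; r.end_x -= roi.start_x;
    r.start_y -= roi.start_y; r.end_y -= roi.start_y;
    /* an empty intersection collapses to a zero-area rectangle */
    if (r.end_x < r.start_x) r.end_x = r.start_x;
    if (r.end_y < r.start_y) r.end_y = r.start_y;
    return r;
}
```

For example, if the parent's valid region is the top-left 100x100 pixels and the ROI covers [50,150)x[50,150), only the ROI's top-left 50x50 quadrant is valid.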

2.18. Extending OpenVX
Beyond User Kernels there are other mechanisms for vendors to extend features in OpenVX. These mechanisms are not available to User Kernels. Each OpenVX official extension has a unique identifier, comprised of capital letters, numbers and the underscore character, prefixed with “KHR_”, for example “KHR_NEW_FEATURE”.
2.18.1. Extending Attributes
When extending attributes, vendors must use their assigned ID from
vx_vendor_id_e
in conjunction with the appropriate macros for creating
new attributes with VX_ATTRIBUTE_BASE
.
The typical mechanism to extend a new attribute for some object type (for
example a vx_node
attribute from VX_ID_TI
) would look like this:
enum {
    VX_NODE_TI_NEWTHING = VX_ATTRIBUTE_BASE(VX_ID_TI, VX_TYPE_NODE) + 0x0,
};
2.18.2. Vendor Custom Kernels
Vendors wanting to add more kernels to the base set supplied to OpenVX should provide a header of the form
#include <VX/vx_ext_<vendor>.h>
that contains definitions of each of the following.

New Node Creation Function Prototype per function.
/*! \brief [Graph] This is an example ISV or OEM provided node which executes
 * in the Graph to call the XYZ kernel.
 * \param [in] graph The handle to the graph in which to instantiate the node.
 * \param [in] input The input image.
 * \param [in] value The input scalar value.
 * \param [out] output The output image.
 * \param [in,out] temp A temp array for some data which is needed for
 * every iteration.
 * \ingroup group_example_kernel
 */
vx_node vxXYZNode(vx_graph graph, vx_image input, vx_uint32 value, vx_image output, vx_array temp);

A new Kernel Enumeration(s) and Kernel String per function.
#define VX_KERNEL_NAME_KHR_XYZ "org.khronos.example.xyz"
/*! \brief The XYZ Example Library Set
 * \ingroup group_xyz_ext
 */
#define VX_LIBRARY_XYZ (0x3) // assigned from Khronos, vendors control their own
/*! \brief The list of XYZ Kernels.
 * \ingroup group_xyz_ext
 */
enum vx_kernel_xyz_ext_e {
    /*! \brief The Example User Defined Kernel */
    VX_KERNEL_KHR_XYZ = VX_KERNEL_BASE(VX_ID_DEFAULT, VX_LIBRARY_XYZ) + 0x0,
    // up to 0xFFF kernel enums can be created.
};

[Optional] A new VXU Function per function.
/*! \brief [Immediate] This is an example of an immediate mode version of the XYZ node.
 * \param [in] context The overall context of the implementation.
 * \param [in] input The input image.
 * \param [in] value The input scalar value.
 * \param [out] output The output image.
 * \param [in,out] temp A temp array for some data which is needed for
 * every iteration.
 * \ingroup group_example_kernel
 */
vx_status vxuXYZ(vx_context context, vx_image input, vx_uint32 value, vx_image output, vx_array temp);
This should come with good documentation for each new part of the extension. Ideally, these sorts of extensions should not require linking to new objects to facilitate usage.
2.18.3. Vendor Custom Extensions
Some extensions affect base vision functions and thus may be invisible to
most users.
In these circumstances, the vendor must report the supported extensions to
the base nodes through the VX_CONTEXT_EXTENSIONS
attribute on the
context.
vx_char *tmp, *extensions = NULL;
vx_size size = 0;
vxQueryContext(context, VX_CONTEXT_EXTENSIONS_SIZE, &size, sizeof(size));
extensions = malloc(size);
vxQueryContext(context, VX_CONTEXT_EXTENSIONS, extensions, size);
What an extension in this list provides depends on the extension itself: it may or may not have a header, new kernels, framework features, or data objects. The common feature is that they are implemented and supported by the implementation vendor.
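The extension string returned by the query above is typically a space-separated list of names. A minimal sketch of scanning it for a given extension follows; the helper name `has_extension` is hypothetical and not part of the OpenVX API:

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical helper (not part of the OpenVX API): returns true when
 * `name` appears as a whole, space-delimited token in the extension
 * string obtained from vxQueryContext(..., VX_CONTEXT_EXTENSIONS, ...). */
static bool has_extension(const char *extensions, const char *name)
{
    size_t len = strlen(name);
    const char *p = extensions;
    while ((p = strstr(p, name)) != NULL) {
        bool starts = (p == extensions) || (p[-1] == ' ');
        bool ends   = (p[len] == '\0') || (p[len] == ' ');
        if (starts && ends)
            return true;
        p += len;   /* partial match; keep scanning */
    }
    return false;
}
```

Matching whole tokens avoids false positives when one extension name is a prefix of another.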
2.18.4. Hinting
The specification defines a Hinting API that allows Clients to feed information to the implementation for optional behavior changes. See Framework: Hints. It is assumed that most of the hints will be vendor- or implementation-specific. Check with the OpenVX implementation vendor for information on vendor-specific extensions.
2.18.5. Directives
The specification defines a Directive API to control implementation behavior. See Framework: Directives. This may allow things like disabling parallelism for debugging, enabling cache write-through for some buffers, or any implementation-specific optimization.
3. Vision Functions
These are the base vision functions supported.
These functions were chosen as a subset of a larger pool of possible functions that fall under the following criteria:

Applicable to Acceleration Hardware

Very Common Usage

Encumbrance Free
3.1. Absolute Difference
Computes the absolute difference between two images. The output image dimensions should be the same as the dimensions of the input images.
Absolute Difference is computed by:

out(x,y) = |in_{1}(x,y) − in_{2}(x,y)|
If one of the input images is of type VX_DF_IMAGE_S16
, all values are
converted to vx_int32
and the overflow policy
VX_CONVERT_POLICY_SATURATE
is used.

out(x,y) = saturate_{int16} ( |(int32)in_{1}(x,y) − (int32)in_{2}(x,y)| )
The output image can be VX_DF_IMAGE_U8
only if both source images are
VX_DF_IMAGE_U8
and the output image is explicitly set to
VX_DF_IMAGE_U8
.
It is otherwise VX_DF_IMAGE_S16
.
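The per-pixel conversion and saturation rules above can be sketched in plain C; these are illustrative helpers, not the normative implementation:

```c
#include <stdint.h>

/* For U8 inputs the absolute difference always fits in U8. */
static uint8_t absdiff_u8(uint8_t a, uint8_t b)
{
    return (uint8_t)(a > b ? a - b : b - a);
}

/* For S16 inputs the difference is formed in 32-bit arithmetic and
 * saturated back to the S16 range (VX_CONVERT_POLICY_SATURATE). */
static int16_t absdiff_s16(int16_t a, int16_t b)
{
    int32_t d = (int32_t)a - (int32_t)b;
    if (d < 0) d = -d;                                 /* |in1 - in2| */
    return (int16_t)(d > INT16_MAX ? INT16_MAX : d);   /* saturate to int16 */
}
```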
3.1.1. Functions
vxAbsDiffNode
[Graph] Creates an AbsDiff node.
vx_node vxAbsDiffNode(
vx_graph graph,
vx_image in1,
vx_image in2,
vx_image out);
Parameters

[in]
graph  The reference to the graph. 
[in]
in1  An input image inVX_DF_IMAGE_U8
orVX_DF_IMAGE_S16
format. 
[in]
in2  An input image inVX_DF_IMAGE_U8
orVX_DF_IMAGE_S16
format. 
[out]
out  The output image inVX_DF_IMAGE_U8
orVX_DF_IMAGE_S16
format, which must have the same dimensions as the input image.
Return Values

vx_node
A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuAbsDiff
[Immediate] Computes the absolute difference between two images.
vx_status vxuAbsDiff(
vx_context context,
vx_image in1,
vx_image in2,
vx_image out);
Parameters

[in]
context  The reference to the overall context. 
[in]
in1  An input image inVX_DF_IMAGE_U8
orVX_DF_IMAGE_S16
format. 
[in]
in2  An input image inVX_DF_IMAGE_U8
orVX_DF_IMAGE_S16
format. 
[out]
out  The output image inVX_DF_IMAGE_U8
orVX_DF_IMAGE_S16
format.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.2. Accumulate
Accumulates an input image into an output image. The accumulation image dimensions should be the same as the dimensions of the input image.
Accumulation is computed by:

accum(x,y) = accum(x,y) + input(x,y)
The overflow policy used is VX_CONVERT_POLICY_SATURATE
.
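A one-pixel sketch of the saturating accumulation above (hypothetical helper; the accumulator is S16 and the input U8, so only positive overflow needs clamping):

```c
#include <stdint.h>

/* One pixel of Accumulate: S16 accumulator plus U8 input, with the sum
 * formed in 32 bits and saturated to int16 (VX_CONVERT_POLICY_SATURATE). */
static int16_t accumulate_px(int16_t accum, uint8_t input)
{
    int32_t sum = (int32_t)accum + (int32_t)input;
    if (sum > INT16_MAX) sum = INT16_MAX;
    return (int16_t)sum;
}
```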
3.2.1. Functions
vxAccumulateImageNode
[Graph] Creates an accumulate node.
vx_node vxAccumulateImageNode(
vx_graph graph,
vx_image input,
vx_image accum);
Parameters

[in]
graph  The reference to the graph. 
[in]
input  The inputVX_DF_IMAGE_U8
image. 
[inout]
accum  The accumulation image inVX_DF_IMAGE_S16
, which must have the same dimensions as the input image.
Returns: vx_node
.
Return Values

vx_node
A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuAccumulateImage
[Immediate] Computes an accumulation.
vx_status vxuAccumulateImage(
vx_context context,
vx_image input,
vx_image accum);
Parameters

[in]
context  The reference to the overall context. 
[in]
input  The inputVX_DF_IMAGE_U8
image. 
[inout]
accum  The accumulation image inVX_DF_IMAGE_S16
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.3. Accumulate Squared
Accumulates a squared value from an input image to an output image. The accumulation image dimensions should be the same as the dimensions of the input image.
Accumulate squares is computed by:

accum(x,y) = saturate_{int16} ( (uint16) accum(x,y) + ( ( (uint16)(input(x,y)^{2})) >> (shift)))
Where 0 ≤ shift ≤ 15
The overflow policy used is VX_CONVERT_POLICY_SATURATE
.
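The formula above can be sketched per pixel in plain C (hypothetical helper, following the casts in the formula literally):

```c
#include <stdint.h>

/* One pixel of Accumulate Squared: the squared U8 input (at most
 * 255*255 = 65025, which fits in uint16) is shifted right by `shift`
 * and added to the accumulator, with int16 saturation. */
static int16_t accumulate_square_px(int16_t accum, uint8_t input,
                                    uint32_t shift /* 0..15 */)
{
    uint16_t sq  = (uint16_t)((uint16_t)input * (uint16_t)input);
    int32_t  sum = (int32_t)(uint16_t)accum + (int32_t)(sq >> shift);
    if (sum > INT16_MAX) sum = INT16_MAX;
    return (int16_t)sum;
}
```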
3.3.1. Functions
vxAccumulateSquareImageNode
[Graph] Creates an accumulate square node.
vx_node vxAccumulateSquareImageNode(
vx_graph graph,
vx_image input,
vx_scalar shift,
vx_image accum);
Parameters

[in]
graph  The reference to the graph. 
[in]
input  The inputVX_DF_IMAGE_U8
image. 
[in]
shift  The inputVX_TYPE_UINT32
with a value in the range of 0 ≤ shift ≤ 15. 
[inout]
accum  The accumulation image inVX_DF_IMAGE_S16
, which must have the same dimensions as the input image.
Returns: vx_node
.
Return Values

vx_node
A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuAccumulateSquareImage
[Immediate] Computes a squared accumulation.
vx_status vxuAccumulateSquareImage(
vx_context context,
vx_image input,
vx_scalar shift,
vx_image accum);
Parameters

[in]
context  The reference to the overall context. 
[in]
input  The inputVX_DF_IMAGE_U8
image. 
[in]
shift  AVX_TYPE_UINT32
type, the input value with the range 0 ≤ shift ≤ 15. 
[inout]
accum  The accumulation image inVX_DF_IMAGE_S16
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.4. Accumulate Weighted
Accumulates a weighted value from an input image to an output image. The accumulation image dimensions should be the same as the dimensions of the input image.
Weighted accumulation is computed by:

accum(x,y) = (1 − α) accum(x,y) + α input(x,y)
Where 0 ≤ α ≤ 1. Conceptually, the rounding for this is defined as:

output(x,y) = uint8( (1 − α) float32( int32( output(x,y) ) ) + α float32( int32( input(x,y) ) ) )
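The conceptual rounding above can be sketched per pixel (hypothetical helper; note the final uint8 conversion truncates toward zero, per the formula):

```c
#include <stdint.h>

/* One pixel of Accumulate Weighted: both operands are widened to float
 * through int32, blended with alpha, and truncated back to uint8. */
static uint8_t accumulate_weighted_px(uint8_t accum, uint8_t input,
                                      float alpha /* 0.0 .. 1.0 */)
{
    float blended = (1.0f - alpha) * (float)(int32_t)accum
                  + alpha * (float)(int32_t)input;
    return (uint8_t)blended;   /* truncation, not rounding */
}
```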
3.4.1. Functions
vxAccumulateWeightedImageNode
[Graph] Creates a weighted accumulate node.
vx_node vxAccumulateWeightedImageNode(
vx_graph graph,
vx_image input,
vx_scalar alpha,
vx_image accum);
Parameters

[in]
graph  The reference to the graph. 
[in]
input  The inputVX_DF_IMAGE_U8
image. 
[in]
alpha  The inputVX_TYPE_FLOAT32
scalar value with a value in the range of 0.0 ≤ α ≤ 1.0. 
[inout]
accum  TheVX_DF_IMAGE_U8
accumulation image, which must have the same dimensions as the input image.
Returns: vx_node
.
Return Values

vx_node
A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuAccumulateWeightedImage
[Immediate] Computes a weighted accumulation.
vx_status vxuAccumulateWeightedImage(
vx_context context,
vx_image input,
vx_scalar alpha,
vx_image accum);
Parameters

[in]
context  The reference to the overall context. 
[in]
input  The inputVX_DF_IMAGE_U8
image. 
[in]
alpha  AVX_TYPE_FLOAT32
type, the input value with the range 0.0 ≤ α ≤ 1.0. 
[inout]
accum  TheVX_DF_IMAGE_U8
accumulation image.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.5. Arithmetic Addition
Performs addition between two images. The output image dimensions should be the same as the dimensions of the input images.
Arithmetic addition is performed between the pixel values in two
VX_DF_IMAGE_U8
or VX_DF_IMAGE_S16
images.
The output image can be VX_DF_IMAGE_U8
only if both source images are
VX_DF_IMAGE_U8
and the output image is explicitly set to
VX_DF_IMAGE_U8
.
It is otherwise VX_DF_IMAGE_S16
.
If one of the input images is of type VX_DF_IMAGE_S16
, all values are
converted to VX_DF_IMAGE_S16
.
The overflow handling is controlled by an overflow-policy parameter.
For each pixel value in the two input images:

out(x,y) = in_{1}(x,y) + in_{2}(x,y)
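The effect of the overflow-policy parameter on an S16 result can be sketched per pixel; the enum below is an illustrative stand-in for vx_convert_policy_e, not the OpenVX type:

```c
#include <stdint.h>

typedef enum { POLICY_WRAP, POLICY_SATURATE } convert_policy; /* stand-in */

/* One pixel of arithmetic addition for an S16 output: the sum is formed
 * in 32 bits, then either saturated to the int16 range or wrapped by
 * keeping the low 16 bits. */
static int16_t add_px_s16(int16_t a, int16_t b, convert_policy policy)
{
    int32_t sum = (int32_t)a + (int32_t)b;
    if (policy == POLICY_SATURATE) {
        if (sum > INT16_MAX) sum = INT16_MAX;
        if (sum < INT16_MIN) sum = INT16_MIN;
        return (int16_t)sum;
    }
    return (int16_t)(uint16_t)(uint32_t)sum;   /* wrap: low 16 bits */
}
```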
3.5.1. Functions
vxAddNode
[Graph] Creates an arithmetic addition node.
vx_node vxAddNode(
vx_graph graph,
vx_image in1,
vx_image in2,
vx_enum policy,
vx_image out);
Parameters

[in]
graph  The reference to the graph. 
[in]
in1  An input image,VX_DF_IMAGE_U8
orVX_DF_IMAGE_S16
. 
[in]
in2  An input image,VX_DF_IMAGE_U8
orVX_DF_IMAGE_S16
. 
[in]
policy  AVX_TYPE_ENUM
of thevx_convert_policy_e
enumeration. 
[out]
out  The output image, aVX_DF_IMAGE_U8
orVX_DF_IMAGE_S16
image, which must have the same dimensions as the input images.
Returns: vx_node
.
Return Values

vx_node
A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuAdd
[Immediate] Performs arithmetic addition on pixel values in the input images.
vx_status vxuAdd(
vx_context context,
vx_image in1,
vx_image in2,
vx_enum policy,
vx_image out);
Parameters

[in]
context  The reference to the overall context. 
[in]
in1  AVX_DF_IMAGE_U8
orVX_DF_IMAGE_S16
input image. 
[in]
in2  AVX_DF_IMAGE_U8
orVX_DF_IMAGE_S16
input image. 
[in]
policy  Avx_convert_policy_e
enumeration. 
[out]
out  The output image inVX_DF_IMAGE_U8
orVX_DF_IMAGE_S16
format.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.6. Arithmetic Subtraction
Performs subtraction between two images. The output image dimensions should be the same as the dimensions of the input images.
Arithmetic subtraction is performed between the pixel values in two
VX_DF_IMAGE_U8
or two VX_DF_IMAGE_S16
images.
The output image can be VX_DF_IMAGE_U8
only if both source images are
VX_DF_IMAGE_U8
and the output image is explicitly set to
VX_DF_IMAGE_U8
.
It is otherwise VX_DF_IMAGE_S16
.
If one of the input images is of type VX_DF_IMAGE_S16
, all values are
converted to VX_DF_IMAGE_S16
.
The overflow handling is controlled by an overflow-policy parameter.
For each pixel value in the two input images:

out(x,y) = in_{1}(x,y) − in_{2}(x,y)
3.6.1. Functions
vxSubtractNode
[Graph] Creates an arithmetic subtraction node.
vx_node vxSubtractNode(
vx_graph graph,
vx_image in1,
vx_image in2,
vx_enum policy,
vx_image out);
Parameters

[in]
graph  The reference to the graph. 
[in]
in1  An input image,VX_DF_IMAGE_U8
orVX_DF_IMAGE_S16
, the minuend. 
[in]
in2  An input image,VX_DF_IMAGE_U8
orVX_DF_IMAGE_S16
, the subtrahend. 
[in]
policy  AVX_TYPE_ENUM
of thevx_convert_policy_e
enumeration. 
[out]
out  The output image, aVX_DF_IMAGE_U8
orVX_DF_IMAGE_S16
image, which must have the same dimensions as the input images.
Returns: vx_node
.
Return Values

vx_node
A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuSubtract
[Immediate] Performs arithmetic subtraction on pixel values in the input images.
vx_status vxuSubtract(
vx_context context,
vx_image in1,
vx_image in2,
vx_enum policy,
vx_image out);
Parameters

[in]
context  The reference to the overall context. 
[in]
in1  AVX_DF_IMAGE_U8
orVX_DF_IMAGE_S16
input image, the minuend. 
[in]
in2  AVX_DF_IMAGE_U8
orVX_DF_IMAGE_S16
input image, the subtrahend. 
[in]
policy  Avx_convert_policy_e
enumeration. 
[out]
out  The output image inVX_DF_IMAGE_U8
orVX_DF_IMAGE_S16
format.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.7. Bilateral Filter
The function applies bilateral filtering to the input tensor.
A bilateral filter is a non-linear, edge-preserving, noise-reducing smoothing filter. The input and output are tensors with the same dimensions and data type. The tensor dimensions are divided into spatial and non-spatial dimensions. The spatial dimensions are the last two, and distance over them is isometric (Cartesian). The non-spatial dimension is the first, and we call it radiometric. The radiometric value at each spatial position is replaced by a weighted average of radiometric values from nearby pixels. This weight can be based on a Gaussian distribution. Crucially, the weights depend not only on the Euclidean distance in the spatial dimensions, but also on the radiometric differences (e.g. range differences, such as color intensity, depth distance, etc.). This preserves sharp edges by systematically looping through each pixel and adjusting the weights of the adjacent pixels accordingly. The equations are as follows:

\(h(x,\tau)=\frac{1}{W_{p}}\sum f(y,t)g_{1}(y-x)g_{2}(t-\tau)dydt\)

\(g_{1}(y)=\frac{1}{\sqrt{2\pi\sigma_{y}}}\exp\left(-\frac{1}{2}\left(\frac{y^{2}}{\sigma_{y}^{2}}\right)\right)\)

\(g_{2}(t)=\frac{1}{\sqrt{2\pi\sigma_{t}}}\exp\left(-\frac{1}{2}\left(\frac{t^{2}}{\sigma_{t}^{2}}\right)\right)\)

\(W_{p}=\sum g_{1}(y-x)g_{2}(t-\tau)dydt\)
where x and y are positions in the spatial Euclidean space, and t and τ are vectors in the radiometric space, which can represent color, depth, or movement.
Wp is the normalization factor.
In the 3-dimensional case, the radiometric vector is the 1st dimension of the vx_tensor
, which can be of size 1 or 2; in the 2-dimensional case it is simply the scalar value in the tensor.
3.7.1. Functions
vxBilateralFilterNode
[Graph] The function applies bilateral filtering to the input tensor.
vx_node vxBilateralFilterNode(
vx_graph graph,
vx_tensor src,
vx_int32 diameter,
vx_float32 sigmaSpace,
vx_float32 sigmaValues,
vx_tensor dst);
Parameters

[in]
graph  The reference to the graph. 
[in]
src  The input data, a vx_tensor
with a minimum of 2 and a maximum of 3 dimensions. The tensor is of type VX_TYPE_UINT8
or VX_TYPE_INT16
. Dimensions are [radiometric, width, height] or [width, height]. See vxCreateTensor
and vxCreateVirtualTensor
. 
[in]
diameter  The diameter of each pixel neighbourhood used during filtering. Values of diameter must be odd, greater than 3, and less than 10. 
[in]
sigmaValues  Filter sigma in the radiometric space. Supported values are greater than 0 and less than or equal to 20. 
[in]
sigmaSpace  Filter sigma in the spatial space. Supported values are greater than 0 and less than or equal to 20. 
[out]
dst  The output data, avx_tensor
of typeVX_TYPE_UINT8
orVX_TYPE_INT16
. Must be the same type and size of the input.
Note
The border modes 
Returns: vx_node
.
Return Values

vx_node
A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuBilateralFilter
[Immediate] The function applies bilateral filtering to the input tensor.
vx_status vxuBilateralFilter(
vx_context context,
vx_tensor src,
vx_int32 diameter,
vx_float32 sigmaSpace,
vx_float32 sigmaValues,
vx_tensor dst);
Parameters

[in]
context  The reference to the overall context. 
[in]
src  The input data, a vx_tensor
with a minimum of 2 and a maximum of 3 dimensions. The tensor is of type VX_TYPE_UINT8
or VX_TYPE_INT16
. Dimensions are [radiometric, width, height] or [width, height]. 
[in]
diameter  The diameter of each pixel neighbourhood used during filtering. Values of diameter must be odd, greater than 3, and less than 10. 
[in]
sigmaValues  Filter sigma in the radiometric space. Supported values are greater than 0 and less than or equal to 20. 
[in]
sigmaSpace  Filter sigma in the spatial space. Supported values are greater than 0 and less than or equal to 20. 
[out]
dst  The output data, avx_tensor
of typeVX_TYPE_UINT8
orVX_TYPE_INT16
. Must be the same type and size of the input.
Note
The border modes 
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.8. Bitwise AND
Performs a bitwise AND operation between two VX_DF_IMAGE_U8
images.
The output image dimensions should be the same as the dimensions of the
input images.
Bitwise AND is computed by the following, for each bit in each pixel in the input images:

out(x,y) = in_{1}(x,y) ∧ in_{2}(x,y)
Or expressed as C code:
out(x,y) = in_1(x,y) & in_2(x,y)
3.8.1. Functions
vxAndNode
[Graph] Creates a bitwise AND node.
vx_node vxAndNode(
vx_graph graph,
vx_image in1,
vx_image in2,
vx_image out);
Parameters

[in]
graph  The reference to the graph. 
[in]
in1  AVX_DF_IMAGE_U8
input image. 
[in]
in2  AVX_DF_IMAGE_U8
input image. 
[out]
out  TheVX_DF_IMAGE_U8
output image, which must have the same dimensions as the input images.
Returns: vx_node
.
Return Values

vx_node
A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuAnd
[Immediate] Computes the bitwise and between two images.
vx_status vxuAnd(
vx_context context,
vx_image in1,
vx_image in2,
vx_image out);
Parameters

[in]
context  The reference to the overall context. 
[in]
in1  AVX_DF_IMAGE_U8
input image 
[in]
in2  AVX_DF_IMAGE_U8
input image 
[out]
out  TheVX_DF_IMAGE_U8
output image.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.9. Bitwise EXCLUSIVE OR
Performs a bitwise EXCLUSIVE OR (XOR) operation between two
VX_DF_IMAGE_U8
images.
The output image dimensions should be the same as the dimensions of the
input images.
Bitwise XOR is computed by the following, for each bit in each pixel in the input images:

out(x,y) = in_{1}(x,y) ⊕ in_{2}(x,y)
Or expressed as C code:
out(x,y) = in_1(x,y) ^ in_2(x,y)
3.9.1. Functions
vxXorNode
[Graph] Creates a bitwise EXCLUSIVE OR node.
vx_node vxXorNode(
vx_graph graph,
vx_image in1,
vx_image in2,
vx_image out);
Parameters

[in]
graph  The reference to the graph. 
[in]
in1  AVX_DF_IMAGE_U8
input image. 
[in]
in2  AVX_DF_IMAGE_U8
input image. 
[out]
out  TheVX_DF_IMAGE_U8
output image, which must have the same dimensions as the input images.
Returns: vx_node
.
Return Values

vx_node
A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuXor
[Immediate] Computes the bitwise exclusive-or between two images.
vx_status vxuXor(
vx_context context,
vx_image in1,
vx_image in2,
vx_image out);
Parameters

[in]
context  The reference to the overall context. 
[in]
in1  AVX_DF_IMAGE_U8
input image 
[in]
in2  AVX_DF_IMAGE_U8
input image 
[out]
out  TheVX_DF_IMAGE_U8
output image.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.10. Bitwise INCLUSIVE OR
Performs a bitwise INCLUSIVE OR operation between two VX_DF_IMAGE_U8
images.
The output image dimensions should be the same as the dimensions of the
input images.
Bitwise INCLUSIVE OR is computed by the following, for each bit in each pixel in the input images:

out(x,y) = in_{1}(x,y) ∨ in_{2}(x,y)
Or expressed as C code:
out(x,y) = in_1(x,y) | in_2(x,y)
3.10.1. Functions
vxOrNode
[Graph] Creates a bitwise INCLUSIVE OR node.
vx_node vxOrNode(
vx_graph graph,
vx_image in1,
vx_image in2,
vx_image out);
Parameters

[in]
graph  The reference to the graph. 
[in]
in1  AVX_DF_IMAGE_U8
input image. 
[in]
in2  AVX_DF_IMAGE_U8
input image. 
[out]
out  TheVX_DF_IMAGE_U8
output image, which must have the same dimensions as the input images.
Returns: vx_node
.
Return Values

vx_node
A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuOr
[Immediate] Computes the bitwise inclusive-or between two images.
vx_status vxuOr(
vx_context context,
vx_image in1,
vx_image in2,
vx_image out);
Parameters

[in]
context  The reference to the overall context. 
[in]
in1  AVX_DF_IMAGE_U8
input image 
[in]
in2  AVX_DF_IMAGE_U8
input image 
[out]
out  TheVX_DF_IMAGE_U8
output image.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.11. Bitwise NOT
Performs a bitwise NOT operation on a VX_DF_IMAGE_U8
input image.
The output image dimensions should be the same as the dimensions of the
input image.
Bitwise NOT is computed by the following, for each bit in each pixel in the input image:

\(out(x,y) = \overline{in(x,y)}\)
Or expressed as C code:
out(x,y) = ~in_1(x,y)
3.11.1. Functions
vxNotNode
[Graph] Creates a bitwise NOT node.
vx_node vxNotNode(
vx_graph graph,
vx_image input,
vx_image output);
Parameters

[in]
graph  The reference to the graph. 
[in]
input  AVX_DF_IMAGE_U8
input image. 
[out]
output  TheVX_DF_IMAGE_U8
output image, which must have the same dimensions as the input image.
Returns: vx_node
.
Return Values

vx_node
A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuNot
[Immediate] Computes the bitwise not of an image.
vx_status vxuNot(
vx_context context,
vx_image input,
vx_image output);
Parameters

[in]
context  The reference to the overall context. 
[in]
input  TheVX_DF_IMAGE_U8
input image 
[out]
output  TheVX_DF_IMAGE_U8
output image.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.12. Box Filter
Computes a Box filter over a window of the input image. The output image dimensions should be the same as the dimensions of the input image.
This filter uses the following convolution matrix:

\[\mathbf{C}_{box} = \frac{1}{9} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}\]
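An interior-pixel sketch of the 3x3 box filter, i.e. the mean of the 3x3 neighbourhood (border handling omitted; not the normative implementation):

```c
#include <stdint.h>

/* Box filter at one interior pixel of an image with the given row
 * stride: sum the 3x3 neighbourhood and divide by 9. */
static uint8_t box3x3_px(const uint8_t *img, int stride, int x, int y)
{
    uint32_t sum = 0;
    for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++)
            sum += img[(y + dy) * stride + (x + dx)];
    return (uint8_t)(sum / 9);
}
```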
3.12.1. Functions
vxBox3x3Node
[Graph] Creates a Box Filter Node.
vx_node vxBox3x3Node(
vx_graph graph,
vx_image input,
vx_image output);
Parameters

[in]
graph  The reference to the graph. 
[in]
input  The input image inVX_DF_IMAGE_U8
format. 
[out]
output  The output image inVX_DF_IMAGE_U8
format, which must have the same dimensions as the input image.
Returns: vx_node
.
Return Values

vx_node
A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuBox3x3
[Immediate] Computes a box filter on the image by a 3x3 window.
vx_status vxuBox3x3(
vx_context context,
vx_image input,
vx_image output);
Parameters

[in]
context  The reference to the overall context. 
[in]
input  The input image inVX_DF_IMAGE_U8
format. 
[out]
output  The output image inVX_DF_IMAGE_U8
format.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.13. Canny Edge Detector
Provides a Canny edge detector kernel. The output image dimensions should be the same as the dimensions of the input image.
This function implements an edge detection algorithm similar to that described in [Canny1986]. The main components of the algorithm are:

Gradient magnitude and orientation computation using a noise resistant operator (Sobel).

Non-maximum suppression of the gradient magnitude, using the gradient orientation information.

Tracing edges in the modified gradient image using hysteresis thresholding to produce a binary result.
The details of each of these steps are described below.
Gradient Computation: Conceptually, the input image is convolved with
vertical and horizontal Sobel kernels of the size indicated by the
gradient_size parameter.
The Sobel kernels used for the gradient computation shall be as shown below.
The two resulting directional gradient images (dx and dy) are
then used to compute a gradient magnitude image and a gradient orientation
image.
The norm used to compute the gradient magnitude is indicated by the
norm_type parameter, so the magnitude may be |dx| +
|dy| for VX_NORM_L1
or \(\sqrt{dx^{2} + dy^{2}}\)
for VX_NORM_L2
.
The gradient orientation image is quantized into 4 values: 0, 45, 90, and
135 degrees.

For gradient size 3:
\[\mathbf{sobel}_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}\]\[\mathbf{sobel}_y = transpose(\mathbf{sobel}_x) = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}\] 
For gradient size 5:
\[\mathbf{sobel}_x = \begin{bmatrix} -1 & -2 & 0 & 2 & 1 \\ -4 & -8 & 0 & 8 & 4 \\ -6 & -12 & 0 & 12 & 6 \\ -4 & -8 & 0 & 8 & 4 \\ -1 & -2 & 0 & 2 & 1 \\ \end{bmatrix}\]
sobel_{y} = transpose(sobel_{x})


For gradient size 7:
\[\mathbf{sobel}_x = \begin{bmatrix} -1 & -4 & -5 & 0 & 5 & 4 & 1 \\ -6 & -24 & -30 & 0 & 30 & 24 & 6 \\ -15 & -60 & -75 & 0 & 75 & 60 & 15 \\ -20 & -80 & -100 & 0 & 100 & 80 & 20 \\ -15 & -60 & -75 & 0 & 75 & 60 & 15 \\ -6 & -24 & -30 & 0 & 30 & 24 & 6 \\ -1 & -4 & -5 & 0 & 5 & 4 & 1 \\ \end{bmatrix}\]
sobel_{y} = transpose(sobel_{x})

Non-Maximum Suppression: This is then applied such that a pixel is retained as a potential edge pixel if and only if its magnitude is greater than or equal to the pixels in the direction perpendicular to its edge orientation. For example, if the pixel’s orientation is 0 degrees, it is only retained if its gradient magnitude is larger than that of the pixels at 90 and 270 degrees to it. If a pixel is suppressed via this condition, it must not appear as an edge pixel in the final output, i.e., its value must be 0 in the final output.
Edge Tracing: The final edge pixels in the output are identified via a double thresholded hysteresis procedure. All retained pixels with magnitude above the high threshold are marked as known edge pixels (valued 255) in the final output image. All pixels with magnitudes less than or equal to the low threshold must not be marked as edge pixels in the final output. For the pixels in between the thresholds, edges are traced and marked as edges (255) in the output. This can be done by starting at the known edge pixels and moving in all eight directions recursively until the gradient magnitude is less than or equal to the low threshold.
Caveats: The intermediate results described above are conceptual only; so for example, the implementation may not actually construct the gradient images and non-maximum-suppressed images. Only the final binary (0 or 255 valued) output image must be computed so that it matches the result of a final image constructed as described above.
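The edge-tracing step described above can be sketched as a recursive flood fill over a gradient-magnitude image (conceptual only, as the caveats note; helper names are hypothetical, and a real implementation would avoid deep recursion):

```c
#include <stdint.h>

/* Recursively mark 8-connected neighbours as edges (255) while the
 * gradient magnitude stays above the low threshold. */
static void trace(const int32_t *mag, uint8_t *out, int w, int h,
                  int x, int y, int32_t t_low)
{
    if (x < 0 || x >= w || y < 0 || y >= h) return;
    if (out[y * w + x] == 255 || mag[y * w + x] <= t_low) return;
    out[y * w + x] = 255;
    for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++)
            trace(mag, out, w, h, x + dx, y + dy, t_low);
}

/* Seed the trace at every pixel above the high threshold. */
static void hysteresis(const int32_t *mag, uint8_t *out, int w, int h,
                       int32_t t_low, int32_t t_high)
{
    for (int i = 0; i < w * h; i++) out[i] = 0;
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            if (mag[y * w + x] > t_high)
                trace(mag, out, w, h, x, y, t_low);
}
```

Pixels between the thresholds are marked only if connected, through similar pixels, to a pixel above the high threshold.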
3.13.1. Enumerations
vx_norm_type_e
A normalization type.
enum vx_norm_type_e {
VX_NORM_L1 = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_NORM_TYPE) + 0x0,
VX_NORM_L2 = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_NORM_TYPE) + 0x1,
};
See also: Canny Edge Detector
3.13.2. Functions
vxCannyEdgeDetectorNode
[Graph] Creates a Canny Edge Detection Node.
vx_node vxCannyEdgeDetectorNode(
vx_graph graph,
vx_image input,
vx_threshold hyst,
vx_int32 gradient_size,
vx_enum norm_type,
vx_image output);
Parameters

[in]
graph  The reference to the graph. 
[in]
input  The inputVX_DF_IMAGE_U8
image. 
[in]
hyst  The double threshold for hysteresis. TheVX_THRESHOLD_INPUT_FORMAT
shall be eitherVX_DF_IMAGE_U8
orVX_DF_IMAGE_S16
. TheVX_THRESHOLD_OUTPUT_FORMAT
is ignored. 
[in]
gradient_size  The size of the Sobel filter window, must support at least 3, 5, and 7. 
[in]
norm_type  A flag indicating the norm used to compute the gradient,VX_NORM_L1
orVX_NORM_L2
. 
[out]
output  The output image inVX_DF_IMAGE_U8
format with values either 0 or 255.
Returns: vx_node
.
Return Values

vx_node
A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuCannyEdgeDetector
[Immediate] Computes Canny Edges on the input image into the output image.
vx_status vxuCannyEdgeDetector(
vx_context context,
vx_image input,
vx_threshold hyst,
vx_int32 gradient_size,
vx_enum norm_type,
vx_image output);
Parameters

[in]
context  The reference to the overall context. 
[in]
input  The inputVX_DF_IMAGE_U8
image. 
[in]
hyst  The double threshold for hysteresis. TheVX_THRESHOLD_INPUT_FORMAT
shall be eitherVX_DF_IMAGE_U8
orVX_DF_IMAGE_S16
. TheVX_THRESHOLD_OUTPUT_FORMAT
is ignored. 
[in]
gradient_size  The size of the Sobel filter window, must support at least 3, 5 and 7. 
[in]
norm_type  A flag indicating the norm used to compute the gradient,VX_NORM_L1
orVX_NORM_L2
. 
[out]
output  The output image inVX_DF_IMAGE_U8
format with values either 0 or 255.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.14. Channel Combine
Implements the Channel Combine Kernel.
This kernel takes multiple VX_DF_IMAGE_U8
planes to recombine them
into a multiplanar or interleaved format from vx_df_image_e
.
The user must specify only the number of channels that are appropriate for
the combining operation.
If a user specifies more channels than necessary, the operation results in
an error.
For the case where the destination image is a format with subsampling, the
input channels are expected to have been subsampled before combining (by
stretching and resizing).
3.14.1. Functions
vxChannelCombineNode
[Graph] Creates a channel combine node.
vx_node vxChannelCombineNode(
vx_graph graph,
vx_image plane0,
vx_image plane1,
vx_image plane2,
vx_image plane3,
vx_image output);
Parameters

[in]
graph  The graph reference. 
[in]
plane0  The plane that forms channel 0. Must beVX_DF_IMAGE_U8
. 
[in]
plane1  The plane that forms channel 1. Must beVX_DF_IMAGE_U8
. 
[in]
plane2  [optional] The plane that forms channel 2. Must beVX_DF_IMAGE_U8
. 
[in]
plane3  [optional] The plane that forms channel 3. Must beVX_DF_IMAGE_U8
. 
[out]
output  The output image. The format of the image must be defined, even if the image is virtual. Must have the same dimensions as the input images
See also: VX_KERNEL_CHANNEL_COMBINE
Returns: vx_node
.
Return Values

vx_node
A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuChannelCombine
[Immediate] Invokes an immediate Channel Combine.
vx_status vxuChannelCombine(
vx_context context,
vx_image plane0,
vx_image plane1,
vx_image plane2,
vx_image plane3,
vx_image output);
Parameters

[in]
context  The reference to the overall context. 
[in]
plane0  The plane that forms channel 0. Must beVX_DF_IMAGE_U8
. 
[in]
plane1  The plane that forms channel 1. Must beVX_DF_IMAGE_U8
. 
[in]
plane2  [optional] The plane that forms channel 2. Must beVX_DF_IMAGE_U8
. 
[in]
plane3  [optional] The plane that forms channel 3. Must beVX_DF_IMAGE_U8
. 
[out]
output  The output image.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.15. Channel Extract
Implements the Channel Extraction Kernel.
This kernel removes a single VX_DF_IMAGE_U8
channel (plane) from a
multiplanar or interleaved image format from vx_df_image_e
.
3.15.1. Functions
vxChannelExtractNode
[Graph] Creates a channel extract node.
vx_node vxChannelExtractNode(
vx_graph graph,
vx_image input,
vx_enum channel,
vx_image output);
Parameters

[in]
graph  The reference to the graph. 
[in]
input  The input image. Must be one of the definedvx_df_image_e
multichannel formats. 
[in]
channel  Thevx_channel_e
channel to extract. 
[out]
output  The output image. Must beVX_DF_IMAGE_U8
, and must have the same dimensions as the input image.
See also: VX_KERNEL_CHANNEL_EXTRACT
Returns: vx_node
.
Return Values

vx_node
A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuChannelExtract
[Immediate] Invokes an immediate Channel Extract.
vx_status vxuChannelExtract(
vx_context context,
vx_image input,
vx_enum channel,
vx_image output);
Parameters

[in]
context  The reference to the overall context. 
[in]
input  The input image. Must be one of the definedvx_df_image_e
multichannel formats. 
[in]
channel  Thevx_channel_e
enumeration to extract. 
[out]
output  The output image. Must beVX_DF_IMAGE_U8
.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.16. Color Convert
Implements the Color Conversion Kernel. The output image dimensions should be the same as the dimensions of the input image.
This kernel converts an image of a designated vx_df_image_e
format to
another vx_df_image_e
format for those combinations listed in the
below table, where the columns are output types and the rows are input
types.
The API version first supporting the conversion is also listed.
I/O  | RGB | RGBX | NV12 | NV21 | UYVY | YUYV | IYUV | YUV4
RGB  |     | 1.0  | 1.0  |      |      |      | 1.0  | 1.0
RGBX | 1.0 |      | 1.0  |      |      |      | 1.0  | 1.0
NV12 | 1.0 | 1.0  |      |      |      |      | 1.0  | 1.0
NV21 | 1.0 | 1.0  |      |      |      |      | 1.0  | 1.0
UYVY | 1.0 | 1.0  | 1.0  |      |      |      | 1.0  |
YUYV | 1.0 | 1.0  | 1.0  |      |      |      | 1.0  |
IYUV | 1.0 | 1.0  | 1.0  |      |      |      |      | 1.0
YUV4 |     |      |      |      |      |      |      |
The vx_df_image_e
encoding, held in the VX_IMAGE_FORMAT
attribute, describes the data layout.
The interpretation of the colors is determined by the VX_IMAGE_SPACE
(see vx_color_space_e
) and VX_IMAGE_RANGE
(see
vx_channel_range_e
) attributes of the image.
Implementations are required only to support images of
VX_COLOR_SPACE_BT709
and VX_CHANNEL_RANGE_FULL
.
If the channel range is defined as VX_CHANNEL_RANGE_FULL
, the
conversion between the real number and integer quantizations of color
channels is defined for red, green, blue, and Y as:

value_{real} = value_{integer} / 256.0

value_{integer} = max(0, min(255, floor(value_{real} × 256.0)))
For the U and V channels, the conversion between real number and integer quantizations is:

value_{real} = (value_{integer} - 128.0) / 256.0

value_{integer} = max(0, min(255, floor(value_{real} × 256.0 + 128)))
If the channel range is defined as VX_CHANNEL_RANGE_RESTRICTED
, the
conversion between the integer quantizations of color channels and the
continuous representations is defined for red, green, blue, and Y as:

value_{real} = (value_{integer} - 16.0) / 219.0

value_{integer} = max(0, min(255, floor(value_{real} × 219.0 + 16.5)))
For the U and V channels, the conversion between real number and integer quantizations is:

value_{real} = (value_{integer} - 128.0) / 224.0

value_{integer} = max(0, min(255, floor(value_{real} × 224.0 + 128.5)))
The conversions between nonlinear-intensity Y^{'}P_{b}P_{r} and R^{'}G^{'}B^{'} real numbers are:

R^{'} = Y^{'} + 2 (1 - K_{r}) P_{r}

B^{'} = Y^{'} + 2 (1 - K_{b}) P_{b}

G^{'} = Y^{'} - ( 2 (K_{r} (1 - K_{r}) P_{r} + K_{b} (1 - K_{b}) P_{b}) ) / (1 - K_{r} - K_{b})

Y^{'} = (K_{r} R^{'}) + (K_{b} B^{'}) + (1 - K_{r} - K_{b}) G^{'}

P_{b} = B^{'} / 2 - ( (R^{'} K_{r}) + G^{'} (1 - K_{r} - K_{b}) ) / (2 (1 - K_{b}))

P_{r} = R^{'} / 2 - ( (B^{'} K_{b}) + G^{'} (1 - K_{r} - K_{b}) ) / (2 (1 - K_{r}))
The means of reconstructing P_{b} and P_{r} values from chroma-downsampled formats is implementation-defined.

For BT.601:

K_{r} = 0.299

K_{b} = 0.114

For BT.709:

K_{r} = 0.2126

K_{b} = 0.0722
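As a sketch, the Y^{'}P_{b}P_{r} ↔ R^{'}G^{'}B^{'} relations can be checked for round-trip consistency in C with the BT.709 constants; the P_{b}/P_{r} expressions below use the algebraically equivalent (B' − Y') and (R' − Y') forms of the equations above, and the function names are illustrative, not OpenVX API:

```c
#include <assert.h>
#include <math.h>

static const double Kr = 0.2126, Kb = 0.0722;  /* BT.709 constants */

/* Y'PbPr -> R'G'B' (nonlinear-intensity real values). */
static void ypbpr_to_rgb(double y, double pb, double pr,
                         double *r, double *g, double *b) {
    *r = y + 2.0 * (1.0 - Kr) * pr;
    *b = y + 2.0 * (1.0 - Kb) * pb;
    *g = y - (2.0 * (Kr * (1.0 - Kr) * pr + Kb * (1.0 - Kb) * pb))
           / (1.0 - Kr - Kb);
}

/* R'G'B' -> Y'PbPr; Pb = (B' - Y') / (2(1 - Kb)), Pr = (R' - Y') / (2(1 - Kr)),
   which is the same as the B'/2 and R'/2 forms after substituting Y'. */
static void rgb_to_ypbpr(double r, double g, double b,
                         double *y, double *pb, double *pr) {
    *y  = Kr * r + Kb * b + (1.0 - Kr - Kb) * g;
    *pb = (b - *y) / (2.0 * (1.0 - Kb));
    *pr = (r - *y) / (2.0 * (1.0 - Kr));
}
```

A gray or white input (R' = G' = B') should produce P_{b} = P_{r} = 0, and the two functions should invert each other exactly up to floating-point error.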
In all cases, for the purposes of conversion, these colour representations are interpreted as nonlinear in intensity, as defined by the BT.601, BT.709, and sRGB specifications. That is, the encoded colour channels are nonlinear R^{'}, G^{'} and B^{'}, Y^{'}, P_{b}, and P_{r}.
Each channel of the R^{'}G^{'}B^{'} representation can be converted to and from a linear-intensity RGB channel using the transfer functions defined in those specifications.
As the different color spaces have different RGB primaries, a conversion between them must transform the color coordinates into the new RGB space. Working with linear RGB values, the conversion formulae are:

R_{BT601_525} = R_{BT601_625} × 1.112302 + G_{BT601_625} × -0.102441 + B_{BT601_625} × -0.009860

G_{BT601_525} = R_{BT601_625} × -0.020497 + G_{BT601_625} × 1.037030 + B_{BT601_625} × -0.016533

B_{BT601_525} = R_{BT601_625} × 0.001704 + G_{BT601_625} × 0.016063 + B_{BT601_625} × 0.982233

R_{BT601_525} = R_{BT709} × 1.065379 + G_{BT709} × -0.055401 + B_{BT709} × -0.009978

G_{BT601_525} = R_{BT709} × -0.019633 + G_{BT709} × 1.036363 + B_{BT709} × -0.016731

B_{BT601_525} = R_{BT709} × 0.001632 + G_{BT709} × 0.004412 + B_{BT709} × 0.993956

R_{BT601_625} = R_{BT601_525} × 0.900657 + G_{BT601_525} × 0.088807 + B_{BT601_525} × 0.010536

G_{BT601_625} = R_{BT601_525} × 0.017772 + G_{BT601_525} × 0.965793 + B_{BT601_525} × 0.016435

B_{BT601_625} = R_{BT601_525} × -0.001853 + G_{BT601_525} × -0.015948 + B_{BT601_525} × 1.017801

R_{BT601_625} = R_{BT709} × 0.957815 + G_{BT709} × 0.042185

G_{BT601_625} = G_{BT709}

B_{BT601_625} = G_{BT709} × -0.011934 + B_{BT709} × 1.011934

R_{BT709} = R_{BT601_525} × 0.939542 + G_{BT601_525} × 0.050181 + B_{BT601_525} × 0.010277

G_{BT709} = R_{BT601_525} × 0.017772 + G_{BT601_525} × 0.965793 + B_{BT601_525} × 0.016435

B_{BT709} = R_{BT601_525} × -0.001622 + G_{BT601_525} × -0.004370 + B_{BT601_525} × 1.005991

R_{BT709} = R_{BT601_625} × 1.044043 + G_{BT601_625} × -0.044043

G_{BT709} = G_{BT601_625}

B_{BT709} = G_{BT601_625} × 0.011793 + B_{BT601_625} × 0.988207
A conversion between one YUV color space and another may therefore consist of the following transformations:

Convert quantized Y^{'}C_{b}C_{r} (“YUV”) to continuous, nonlinear Y^{'}P_{b}P_{r}.

Convert continuous Y^{'}P_{b}P_{r} to continuous, nonlinear R^{'}G^{'}B^{'}.

Convert nonlinear R^{'}G^{'}B^{'} to linear-intensity RGB (gamma-correction).

Convert linear RGB from the first color space to linear RGB in the second color space.

Convert linear RGB to nonlinear R^{'}G^{'}B^{'} (gamma-conversion).

Convert nonlinear R^{'}G^{'}B^{'} to Y^{'}P_{b}P_{r}.

Convert continuous Y^{'}P_{b}P_{r} to quantized Y^{'}C_{b}C_{r} (“YUV”).
The above formulae and constants are defined in the ITU BT.601 and BT.709 specifications. The formulae for converting between RGB primaries can be derived from the specified primary chromaticity values and the specified white point by solving for the relative intensity of the primaries.
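The primary conversions are plain 3x3 matrix products on linear-intensity values. As an illustrative sketch (the function name is assumed, not OpenVX API), the BT.709 → BT.601 625-line case looks like this; note that each row's coefficients sum to 1, so the white point (1, 1, 1) maps to itself:

```c
#include <assert.h>
#include <math.h>

/* Linear-intensity RGB primary conversion, BT.709 -> BT.601 (625-line).
   The small negative G coefficient on the B output keeps the row sum at 1. */
static void bt709_to_bt601_625(double r709, double g709, double b709,
                               double *r, double *g, double *b) {
    *r = r709 * 0.957815 + g709 * 0.042185;
    *g = g709;
    *b = g709 * -0.011934 + b709 * 1.011934;
}
```

The same row-sum-to-one property holds for every matrix in the list above, which is a useful sanity check when transcribing the coefficients.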
Functions
3.16.1. Functions
vxColorConvertNode
[Graph] Creates a color conversion node.
vx_node vxColorConvertNode(
vx_graph graph,
vx_image input,
vx_image output);
Parameters

[in]
graph  The reference to the graph. 
[in]
input  The input image from which to convert. 
[out]
output  The output image to which to convert, which must have the same dimensions as the input image.
See also: VX_KERNEL_COLOR_CONVERT
Returns: vx_node
.
Return Values

vx_node
 A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus
vxuColorConvert
[Immediate] Invokes an immediate Color Conversion.
vx_status vxuColorConvert(
vx_context context,
vx_image input,
vx_image output);
Parameters

[in]
context  The reference to the overall context. 
[in]
input  The input image. 
[out]
output  The output image.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.17. Control Flow
Defines the predicated execution model of OpenVX.
These features allow for conditional graph flow in OpenVX, via support for a
variety of operations between two scalars.
The supported scalar data types are VX_TYPE_BOOL, VX_TYPE_INT8, VX_TYPE_UINT8, VX_TYPE_INT16, VX_TYPE_UINT16, VX_TYPE_INT32, VX_TYPE_UINT32, VX_TYPE_SIZE, and VX_TYPE_FLOAT32.
Scalar Operation  Equation  Data Types

VX_SCALAR_OP_AND   output = (a & b)   bool = bool op bool
VX_SCALAR_OP_OR    output = (a | b)   bool = bool op bool
VX_SCALAR_OP_XOR   output = (a ^ b)   bool = bool op bool
VX_SCALAR_OP_NAND  output = !(a & b)  bool = bool op bool

Scalar Operation  Equation  Data Types

VX_SCALAR_OP_EQUAL      output = (a == b)  bool = num op num
VX_SCALAR_OP_NOTEQUAL   output = (a != b)  bool = num op num
VX_SCALAR_OP_LESS       output = (a < b)   bool = num op num
VX_SCALAR_OP_LESSEQ     output = (a ≤ b)   bool = num op num
VX_SCALAR_OP_GREATER    output = (a > b)   bool = num op num
VX_SCALAR_OP_GREATEREQ  output = (a ≥ b)   bool = num op num

Scalar Operation  Equation  Data Types

VX_SCALAR_OP_ADD       output = (a + b)    num = num op num
VX_SCALAR_OP_SUBTRACT  output = (a - b)    num = num op num
VX_SCALAR_OP_MULTIPLY  output = (a * b)    num = num op num
VX_SCALAR_OP_DIVIDE    output = (a / b)    num = num op num
VX_SCALAR_OP_MODULUS   output = (a % b)    num = num op num
VX_SCALAR_OP_MIN       output = min(a, b)  num = num op num
VX_SCALAR_OP_MAX       output = max(a, b)  num = num op num
Please note that in the above tables:

bool denotes a scalar of data type VX_TYPE_BOOL.

num denotes a scalar of one of the data types VX_TYPE_INT8, VX_TYPE_UINT8, VX_TYPE_INT16, VX_TYPE_UINT16, VX_TYPE_INT32, VX_TYPE_UINT32, VX_TYPE_SIZE, and VX_TYPE_FLOAT32. 
The VX_SCALAR_OP_MODULUS operation supports integer operands. 
The results of VX_SCALAR_OP_DIVIDE and VX_SCALAR_OP_MODULUS operations when the second argument is zero are implementation-defined. 
For arithmetic and comparison operations with mixed input data types, the results will be mathematically accurate without the side effects of internal data representations.

If the operation result cannot be stored in the output data type without data and/or precision loss, the following rules shall be applied:

If the operation result is integer and output is floatingpoint, the operation result is promoted to floatingpoint.

If the operation result is floatingpoint and output is an integer, the operation result is converted to integer with rounding policy
VX_ROUND_POLICY_TO_ZERO
and conversion policyVX_CONVERT_POLICY_SATURATE
. 
If both operation result and output are integers, the result is converted to output data type with
VX_CONVERT_POLICY_WRAP
conversion policy.

Functions
3.17.1. Functions
vxScalarOperationNode
[Graph] Creates a scalar operation node.
vx_node vxScalarOperationNode(
vx_graph graph,
vx_enum scalar_operation,
vx_scalar a,
vx_scalar b,
vx_scalar output);
Parameters

[in]
graph  The reference to the graph. 
[in]
scalar_operation  A VX_TYPE_ENUM of the vx_scalar_operation_e enumeration. 
[in]
a  First scalar operand. 
[in]
b  Second scalar operand. 
[out]
output  Result of the scalar operation.
Returns: vx_node
.
Return Values

vx_node
 A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus
vxSelectNode
[Graph] Selects one of two data objects depending on the value of a condition (boolean scalar), and copies its data into another data object.
vx_node vxSelectNode(
vx_graph graph,
vx_scalar condition,
vx_reference true_value,
vx_reference false_value,
vx_reference output);
This node supports predicated execution flow within a graph. All the data objects passed to this kernel shall have the same object type and meta data. It is important to note that an implementation may optimize away the select and copy when virtual data objects are used.
If a kernel node contributes only to virtual data objects during graph execution, because its data path is eliminated by the untaken argument of a select node, then the OpenVX implementation guarantees that there will be no side effects on graph execution and node state.
If the path to a select node contains nonvirtual objects, user nodes, or nodes with completion callbacks, then that path may not be “optimized out” because the callback must be executed and the nonvirtual objects must be modified.
Parameters

[in]
graph  The reference to the graph. 
[in]
condition  A VX_TYPE_BOOL predicate variable. 
[in]
true_value  Data object for true. 
[in]
false_value  Data object for false. 
[out]
output  Output data object.
Returns: vx_node
.
Return Values

vx_node
 A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus
3.18. Convert Bit Depth
Converts image bit depth. The output image dimensions should be the same as the dimensions of the input image.
This kernel converts an image from some source bit-depth to another bit-depth as described by the table below. If the input value is unsigned, the shift must be a logical shift (filled with zeros); if the input value is signed, the shift must be an arithmetic shift. The columns in the table below are the output types and the rows are the input types. The API version on which conversion is supported is also listed. (An X denotes an invalid operation.)
I/O | U8  | U16 | S16 | U32 | S32
U8  | X   |     | 1.0 |     |
U16 |     | X   |     |     |
S16 | 1.0 |     | X   |     |
U32 |     |     |     | X   |
S32 |     |     |     |     | X
Conversion Type: The table below identifies the conversion types for the allowed bit-depth conversions.
From  To  Conversion Type

U8   S16  Up-conversion
S16  U8   Down-conversion
Convert Policy: Down-conversions with VX_CONVERT_POLICY_WRAP follow this equation:
output(x,y) = ((uint8)(input(x,y) >> shift));
Down-conversions with VX_CONVERT_POLICY_SATURATE follow this equation:
int16 value = input(x,y) >> shift;
value = value < 0 ? 0 : value;
value = value > 255 ? 255 : value;
output(x,y) = (uint8)value;
Up-conversions ignore the policy and perform this operation:
output(x,y) = ((int16)input(x,y)) << shift;
The valid values for 'shift' are as specified below; all other values produce undefined behavior.
0 <= shift < 8;
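The three shift equations above, collected into one runnable sketch (helper names are illustrative, not the OpenVX API):

```c
#include <assert.h>
#include <stdint.h>

/* S16 -> U8 down-conversion, VX_CONVERT_POLICY_WRAP:
   arithmetic shift, then keep the low byte. */
static uint8_t down_wrap(int16_t in, int shift) {
    return (uint8_t)(in >> shift);
}

/* S16 -> U8 down-conversion, VX_CONVERT_POLICY_SATURATE:
   arithmetic shift, then clamp to [0, 255]. */
static uint8_t down_saturate(int16_t in, int shift) {
    int16_t value = in >> shift;
    value = value < 0 ? 0 : value;
    value = value > 255 ? 255 : value;
    return (uint8_t)value;
}

/* U8 -> S16 up-conversion (the policy is ignored). */
static int16_t up(uint8_t in, int shift) {
    return (int16_t)((int16_t)in << shift);
}
```

For example, with shift = 4, the saturating down-conversion maps 4096 to 255, while the wrapping one maps 511 (shift 0) to 255 by discarding the high bits.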
Functions
3.18.1. Functions
vxConvertDepthNode
[Graph] Creates a bitdepth conversion node.
vx_node vxConvertDepthNode(
vx_graph graph,
vx_image input,
vx_image output,
vx_enum policy,
vx_scalar shift);
Parameters

[in]
graph  The reference to the graph. 
[in]
input  The input image. 
[out]
output  The output image with the same dimensions of the input image. 
[in]
policy  A VX_TYPE_ENUM of the vx_convert_policy_e enumeration. 
[in]
shift  A scalar containing a VX_TYPE_INT32 of the shift value.
Returns: vx_node
.
Return Values

vx_node
 A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus
vxuConvertDepth
[Immediate] Converts the bit depth of the input image and writes the result into the output image.
vx_status vxuConvertDepth(
vx_context context,
vx_image input,
vx_image output,
vx_enum policy,
vx_int32 shift);
Parameters

[in]
context  The reference to the overall context. 
[in]
input  The input image. 
[out]
output  The output image. 
[in]
policy  A VX_TYPE_ENUM of the vx_convert_policy_e enumeration. 
[in]
shift  A scalar containing a VX_TYPE_INT32 of the shift value.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.19. Custom Convolution
Convolves the input with the client supplied convolution matrix. The output image dimensions should be the same as the dimensions of the input image.
The client can supply a vx_int16
typed convolution matrix C_{m,n}.
Outputs will be in the VX_DF_IMAGE_S16
format unless a
VX_DF_IMAGE_U8
image is explicitly provided.
If values would have been out of range of U8 for VX_DF_IMAGE_U8
, the
values are clamped to 0 or 255.
Note
The above equation for this function is different from the equivalent operation suggested by the OpenCV Filter2D function. 
This translates into the C declaration:
// A horizontal Scharr gradient operator with different scale.
vx_int16 gx[3][3] = {
    { -3, 0,  3},
    {-10, 0, 10},
    { -3, 0,  3},
};
vx_uint32 scale = 8;
vx_convolution scharr_x = vxCreateConvolution(context, 3, 3);
vxCopyConvolutionCoefficients(scharr_x, (vx_int16*)gx, VX_WRITE_ONLY, VX_MEMORY_TYPE_HOST);
vxSetConvolutionAttribute(scharr_x, VX_CONVOLUTION_SCALE, &scale, sizeof(scale));
For VX_DF_IMAGE_U8 output, an additional step is taken: the scaled sum is saturated to the range [0, 255].
For VX_DF_IMAGE_S16 output, the summation is simply assigned to the output:

output(x,y) = sum / scale
The overflow policy used is VX_CONVERT_POLICY_SATURATE
.
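As a sketch of the per-pixel arithmetic for a 3x3 matrix, assuming the true-convolution definition (matrix flipped in both axes, which is how this kernel differs from OpenCV's Filter2D correlation), integer division by scale, and U8 saturation; the helper name is illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* One output pixel of a 3x3 custom convolution with U8 output.
   The m[1 - j][1 - i] indexing applies the matrix flipped in both axes. */
static uint8_t convolve_pixel_u8(const uint8_t *img, int stride, int x, int y,
                                 const int16_t m[3][3], uint32_t scale) {
    int32_t sum = 0;
    for (int j = -1; j <= 1; ++j)
        for (int i = -1; i <= 1; ++i)
            sum += (int32_t)m[1 - j][1 - i] * img[(y + j) * stride + (x + i)];
    int32_t out = sum / (int32_t)scale;
    if (out < 0)   out = 0;    /* VX_CONVERT_POLICY_SATURATE for U8 output */
    if (out > 255) out = 255;
    return (uint8_t)out;
}
```

With the Scharr matrix and scale from the snippet above, a left-to-right falling ramp produces a positive response, and a constant image produces zero.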
Functions
3.19.1. Functions
vxConvolveNode
[Graph] Creates a custom convolution node.
vx_node vxConvolveNode(
vx_graph graph,
vx_image input,
vx_convolution conv,
vx_image output);
Parameters

[in]
graph  The reference to the graph. 
[in]
input  The input image in VX_DF_IMAGE_U8 format. 
[in]
conv  The vx_int16 convolution matrix. 
[out]
output  The output image in VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16 format, which must have the same dimensions as the input image.
Returns: vx_node
.
Return Values

vx_node
 A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus
vxuConvolve
[Immediate] Computes a convolution on the input image with the supplied matrix.
vx_status vxuConvolve(
vx_context context,
vx_image input,
vx_convolution conv,
vx_image output);
Parameters

[in]
context  The reference to the overall context. 
[in]
input  The input image in VX_DF_IMAGE_U8 format. 
[in]
conv  The vx_int16 convolution matrix. 
[out]
output  The output image in VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16 format.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.20. Data Object Copy
Copy a data object to another.
Copy data from an input data object into another data object. The input and output object must have the same object type and meta data. If these objects are object arrays, or pyramids then a deep copy shall be performed.
Functions
3.20.1. Functions
vxCopyNode
Copy data from one object to another.
vx_node vxCopyNode(
vx_graph graph,
vx_reference input,
vx_reference output);
Note
An implementation may optimize away the copy when virtual data objects are used. 
Parameters

[in]
graph  The reference to the graph. 
[in]
input  The input data object. 
[out]
output  The output data object with metadata identical to the input data object.
Returns: vx_node
.
Return Values

vx_node
 A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus
vxuCopy
[Immediate] Copy data from one object to another.
vx_status vxuCopy(
vx_context context,
vx_reference input,
vx_reference output);
Parameters

[in]
context  The reference to the overall context. 
[in]
input  The input data object. 
[out]
output  The output data object.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.21. Dilate Image
Implements Dilation, which grows the white space in a VX_DF_IMAGE_U8
Boolean image.
The output image dimensions should be the same as the dimensions of the
input image.
This kernel uses a 3x3 box around the output pixel to determine its value.
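A sketch of the 3x3 box operation at an interior pixel; for erosion, take the minimum instead of the maximum. The helper name is illustrative, not the OpenVX API:

```c
#include <assert.h>
#include <stdint.h>

/* One output pixel of a 3x3 dilation on a U8 Boolean image:
   the maximum value over the 3x3 box centered on (x, y). */
static uint8_t dilate3x3_pixel(const uint8_t *img, int stride, int x, int y) {
    uint8_t out = 0;
    for (int j = -1; j <= 1; ++j)
        for (int i = -1; i <= 1; ++i) {
            uint8_t v = img[(y + j) * stride + (x + i)];
            if (v > out) out = v;
        }
    return out;
}
```

A single white pixel anywhere in the 3x3 box makes the output white, which is exactly how dilation "grows" white regions.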
Note
For kernels that use structuring patterns other than 3x3, see the Non-Linear Filter kernel. 
Functions
3.21.1. Functions
vxDilate3x3Node
[Graph] Creates a Dilation Image Node.
vx_node vxDilate3x3Node(
vx_graph graph,
vx_image input,
vx_image output);
Parameters

[in]
graph  The reference to the graph. 
[in]
input  The input image in VX_DF_IMAGE_U8 format. 
[out]
output  The output image in VX_DF_IMAGE_U8 format, which must have the same dimensions as the input image.
Returns: vx_node
.
Return Values

vx_node
 A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus
vxuDilate3x3
[Immediate] Dilates an image by a 3x3 window.
vx_status vxuDilate3x3(
vx_context context,
vx_image input,
vx_image output);
Parameters

[in]
context  The reference to the overall context. 
[in]
input  The input image in VX_DF_IMAGE_U8 format. 
[out]
output  The output image in VX_DF_IMAGE_U8 format.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.22. Equalize Histogram
Equalizes the histogram of a grayscale image. The output image dimensions should be the same as the dimensions of the input image.
This kernel uses Histogram Equalization to modify the values of a grayscale image so that it will automatically have a standardized brightness and contrast.
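Exact per-pixel results are implementation-dependent, but the standard CDF-based construction that histogram equalization describes can be sketched as follows (the helper name is illustrative, not the OpenVX API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Histogram equalization of a U8 image via the cumulative distribution
   function (CDF), remapped through a 256-entry lookup table. */
static void equalize_hist_u8(const uint8_t *in, uint8_t *out, size_t n) {
    size_t hist[256] = {0};
    for (size_t k = 0; k < n; ++k) hist[in[k]]++;

    size_t cdf[256], running = 0;
    for (int v = 0; v < 256; ++v) { running += hist[v]; cdf[v] = running; }

    /* cdf_min is the CDF value at the lowest occupied bin. */
    size_t cdf_min = 0;
    for (int v = 0; v < 256; ++v)
        if (hist[v]) { cdf_min = cdf[v]; break; }

    uint8_t lut[256];
    for (int v = 0; v < 256; ++v) {
        if (n > cdf_min && cdf[v] >= cdf_min)
            lut[v] = (uint8_t)(((cdf[v] - cdf_min) * 255) / (n - cdf_min));
        else
            lut[v] = (uint8_t)v;  /* unused bins, or a constant image */
    }
    for (size_t k = 0; k < n; ++k) out[k] = lut[in[k]];
}
```

The lowest occupied intensity is remapped to 0 and the highest to 255, stretching the intermediate values according to their cumulative frequency.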
Functions
3.22.1. Functions
vxEqualizeHistNode
[Graph] Creates a Histogram Equalization node.
vx_node vxEqualizeHistNode(
vx_graph graph,
vx_image input,
vx_image output);
Parameters

[in]
graph  The reference to the graph. 
[in]
input  The grayscale input image in VX_DF_IMAGE_U8. 
[out]
output  The grayscale output image of type VX_DF_IMAGE_U8 with equalized brightness and contrast and the same size as the input image.
Returns: vx_node
.
Return Values

vx_node
 A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus
vxuEqualizeHist
[Immediate] Equalizes the Histogram of a grayscale image.
vx_status vxuEqualizeHist(
vx_context context,
vx_image input,
vx_image output);
Parameters

[in]
context  The reference to the overall context. 
[in]
input  The grayscale input image in VX_DF_IMAGE_U8. 
[out]
output  The grayscale output image of type VX_DF_IMAGE_U8 with equalized brightness and contrast.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.23. Erode Image
Implements Erosion, which shrinks the white space in a
VX_DF_IMAGE_U8
Boolean image.
The output image dimensions should be the same as the dimensions of the
input image.
This kernel uses a 3x3 box around the output pixel to determine its value.
Note
For kernels that use structuring patterns other than 3x3, see the Non-Linear Filter kernel. 
Functions
3.23.1. Functions
vxErode3x3Node
[Graph] Creates an Erosion Image Node.
vx_node vxErode3x3Node(
vx_graph graph,
vx_image input,
vx_image output);
Parameters

[in]
graph  The reference to the graph. 
[in]
input  The input image in VX_DF_IMAGE_U8 format. 
[out]
output  The output image in VX_DF_IMAGE_U8 format, which must have the same dimensions as the input image.
Returns: vx_node
.
Return Values

vx_node
 A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus
vxuErode3x3
[Immediate] Erodes an image by a 3x3 window.
vx_status vxuErode3x3(
vx_context context,
vx_image input,
vx_image output);
Parameters

[in]
context  The reference to the overall context. 
[in]
input  The input image in VX_DF_IMAGE_U8 format. 
[out]
output  The output image in VX_DF_IMAGE_U8 format.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.24. Fast Corners
Computes the corners in an image using a method based upon the FAST9 algorithm suggested in [Rosten2006], with some updates from [Rosten2008], and with modifications described below.
It extracts corners by evaluating pixels on the Bresenham circle around a candidate point. If N contiguous pixels are brighter than the candidate point by at least a threshold value t or darker by at least t, then the candidate point is considered to be a corner. For each detected corner, its strength is computed. Optionally, a nonmaxima suppression step is applied on all detected corners to remove multiple or spurious responses.
3.24.1. Segment Test Detector
The FAST corner detector uses the pixels on a Bresenham circle of radius 3 (16 pixels) to classify whether a candidate point p is actually a corner, given the following variables.
The two conditions for FAST corner detection can be expressed as:

C1: A set of N contiguous pixels S, ∀ x in S, I_{x} > I_{p} + t

C2: A set of N contiguous pixels S, ∀ x in S, I_{x} < I_{p} - t
So when either of these two conditions is met, the candidate p is classified as a corner.
In this version of the FAST algorithm, the minimum number of contiguous pixels N is 9 (FAST9).
The value of the intensity difference threshold strength_thresh of type VX_TYPE_FLOAT32 must be within:

UINT8_MIN < t < UINT8_MAX
These limits are established due to the input data type
VX_DF_IMAGE_U8
.
Corner Strength Computation:
Once a corner has been detected, its strength (response, saliency, or score) shall be computed if nonmax_suppression is set to true, otherwise the value of strength is undefined. The corner response C_{p} function is defined as the largest threshold t for which the pixel p remains a corner.
Nonmaximum suppression:
If the nonmax_suppression flag is true, a non-maxima suppression step is applied on the detected corners. The corner with coordinates (x,y) is kept if and only if its strength is a local maximum, i.e., C_{p}(x,y) ≥ C_{p}(x',y') for every (x',y') in the 3x3 neighborhood centered on (x,y).
See http://www.edwardrosten.com/work/fast.html and http://en.wikipedia.org/wiki/Features_from_accelerated_segment_test
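The segment test itself is easy to prototype. This sketch (illustrative names, not the OpenVX API) checks conditions C1 and C2 over the 16-pixel Bresenham circle with N = 9, scanning the circle twice so that contiguous runs may wrap around:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Offsets of the 16 pixels on the Bresenham circle of radius 3. */
static const int CIRCLE[16][2] = {
    { 0,-3},{ 1,-3},{ 2,-2},{ 3,-1},{ 3, 0},{ 3, 1},{ 2, 2},{ 1, 3},
    { 0, 3},{-1, 3},{-2, 2},{-3, 1},{-3, 0},{-3,-1},{-2,-2},{-1,-3}
};

/* FAST9 segment test at candidate (x, y) with threshold t. */
static bool is_fast9_corner(const uint8_t *img, int stride, int x, int y, int t) {
    int p = img[y * stride + x];
    int bright = 0, dark = 0;
    for (int k = 0; k < 32; ++k) {            /* two passes: runs may wrap */
        const int *o = CIRCLE[k % 16];
        int v = img[(y + o[1]) * stride + (x + o[0])];
        bright = (v > p + t) ? bright + 1 : 0;  /* condition C1 */
        dark   = (v < p - t) ? dark + 1   : 0;  /* condition C2 */
        if (bright >= 9 || dark >= 9) return true;
    }
    return false;
}

/* Self-check: a 7x7 flat image (value 200) with an optional dark center. */
static bool demo_corner(bool dark_center) {
    uint8_t img[49];
    for (int k = 0; k < 49; ++k) img[k] = 200;
    if (dark_center) img[3 * 7 + 3] = 0;
    return is_fast9_corner(img, 7, 3, 3, 20);
}
```

A dark spot on a bright background satisfies C1 for all 16 circle pixels, so it is detected; a flat region satisfies neither condition.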
Functions
3.24.2. Functions
vxFastCornersNode
[Graph] Creates a FAST Corners Node.
vx_node vxFastCornersNode(
vx_graph graph,
vx_image input,
vx_scalar strength_thresh,
vx_bool nonmax_suppression,
vx_array corners,
vx_scalar num_corners);
Parameters

[in]
graph  The reference to the graph. 
[in]
input  The input VX_DF_IMAGE_U8 image. 
[in]
strength_thresh  Threshold on difference between intensity of the central pixel and pixels on Bresenham's circle of radius 3 (a VX_TYPE_FLOAT32 scalar), with a value in the range of 0.0 ≤ strength_thresh < 256.0. Any fractional value will be truncated to an integer. 
[in]
nonmax_suppression  If true, non-maximum suppression is applied to detected corners before being placed in the vx_array of VX_TYPE_KEYPOINT objects. 
[out]
corners  Output corner vx_array of VX_TYPE_KEYPOINT. The order of the keypoints in this array is implementation dependent. 
[out]
num_corners  [optional] The total number of detected corners in the image. Use a VX_TYPE_SIZE scalar.
Returns: vx_node
.
Return Values

vx_node
 A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus
vxuFastCorners
[Immediate] Computes corners on an image using FAST algorithm and produces the array of feature points.
vx_status vxuFastCorners(
vx_context context,
vx_image input,
vx_scalar strength_thresh,
vx_bool nonmax_suppression,
vx_array corners,
vx_scalar num_corners);
Parameters

[in]
context  The reference to the overall context. 
[in]
input  The input VX_DF_IMAGE_U8 image. 
[in]
strength_thresh  Threshold on difference between intensity of the central pixel and pixels on Bresenham's circle of radius 3 (a VX_TYPE_FLOAT32 scalar), with a value in the range of 0.0 ≤ strength_thresh < 256.0. Any fractional value will be truncated to an integer. 
[in]
nonmax_suppression  If true, non-maximum suppression is applied to detected corners before being placed in the vx_array of VX_TYPE_KEYPOINT structs. 
[out]
corners  Output corner vx_array of VX_TYPE_KEYPOINT. The order of the keypoints in this array is implementation dependent. 
[out]
num_corners  [optional] The total number of detected corners in the image. Use a VX_TYPE_SIZE scalar.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.25. Gaussian Filter
Computes a Gaussian filter over a window of the input image. The output image dimensions should be the same as the dimensions of the input image.
This filter uses the following convolution matrix:

        1  2  1
1/16 ×  2  4  2
        1  2  1
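Assuming the standard 3x3 Gaussian kernel (1/16)·[1 2 1; 2 4 2; 1 2 1], one interior output pixel can be sketched as (helper name illustrative, not the OpenVX API):

```c
#include <assert.h>
#include <stdint.h>

/* One output pixel of the 3x3 Gaussian filter: weighted sum over the
   3x3 box, normalized by the kernel sum of 16. */
static uint8_t gaussian3x3_pixel(const uint8_t *img, int stride, int x, int y) {
    static const int k[3][3] = {{1, 2, 1}, {2, 4, 2}, {1, 2, 1}};
    int sum = 0;
    for (int j = -1; j <= 1; ++j)
        for (int i = -1; i <= 1; ++i)
            sum += k[j + 1][i + 1] * img[(y + j) * stride + (x + i)];
    return (uint8_t)(sum / 16);
}
```

Because the weights sum to 16 and are divided back out, a constant region passes through unchanged while edges are smoothed.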
Functions
3.25.1. Functions
vxGaussian3x3Node
[Graph] Creates a Gaussian Filter Node.
vx_node vxGaussian3x3Node(
vx_graph graph,
vx_image input,
vx_image output);
Parameters

[in]
graph  The reference to the graph. 
[in]
input  The input image in VX_DF_IMAGE_U8 format. 
[out]
output  The output image in VX_DF_IMAGE_U8 format, which must have the same dimensions as the input image.
Returns: vx_node
.
Return Values

vx_node
 A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus
vxuGaussian3x3
[Immediate] Computes a gaussian filter on the image by a 3x3 window.
vx_status vxuGaussian3x3(
vx_context context,
vx_image input,
vx_image output);
Parameters

[in]
context  The reference to the overall context. 
[in]
input  The input image in VX_DF_IMAGE_U8 format. 
[out]
output  The output image in VX_DF_IMAGE_U8 format.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.26. Gaussian Image Pyramid
Computes a Gaussian Image Pyramid from an input image.
This vision function creates the Gaussian image pyramid from the input image using a particular 5x5 Gaussian kernel, scaling each
image to the next level using VX_INTERPOLATION_NEAREST_NEIGHBOR.
For the Gaussian pyramid, level 0 shall always have the same resolution and
contents as the input image.
Pyramids configured with one of the following level scaling values must be
supported: VX_SCALE_PYRAMID_HALF and VX_SCALE_PYRAMID_ORB.
Functions
3.26.1. Functions
vxGaussianPyramidNode
[Graph] Creates a node for a Gaussian Image Pyramid.
vx_node vxGaussianPyramidNode(
vx_graph graph,
vx_image input,
vx_pyramid gaussian);
Parameters

[in]
graph  The reference to the graph. 
[in]
input  The input image in VX_DF_IMAGE_U8 format. 
[out]
gaussian  The Gaussian pyramid with VX_DF_IMAGE_U8 to construct.
See also: Object: Pyramid
Returns: vx_node
.
Return Values

vx_node
 A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus
vxuGaussianPyramid
[Immediate] Computes a Gaussian pyramid from an input image.
vx_status vxuGaussianPyramid(
vx_context context,
vx_image input,
vx_pyramid gaussian);
Parameters

[in]
context  The reference to the overall context. 
[in]
input  The input image in VX_DF_IMAGE_U8 format. 
[out]
gaussian  The Gaussian pyramid with VX_DF_IMAGE_U8 to construct.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.27. HOG
Extracts Histogram of Oriented Gradients features from the input grayscale image.
The Histogram of Oriented Gradients (HOG) vision function is split into two
nodes vxHOGCellsNode
and vxHOGFeaturesNode
.
The specification of these nodes covers a subset of possible HOG
implementations.
The vxHOGCellsNode
calculates the gradient orientation histograms and
average gradient magnitudes for each of the cells.
The vxHOGFeaturesNode
uses the cell histograms and optionally the
average gradient magnitude of the cells to produce a HOG feature vector.
This involves grouping up the cell histograms into blocks which are then
normalized.
A moving window is applied to the input image and for each location the
block data associated with the window is concatenated to the HOG feature
vector.
Data Structures
Functions
3.27.1. Data Structures
vx_hog_t
The HOG descriptor structure.
typedef struct _vx_hog_t {
vx_int32 cell_width;
vx_int32 cell_height;
vx_int32 block_width;
vx_int32 block_height;
vx_int32 block_stride;
vx_int32 num_bins;
vx_int32 window_width;
vx_int32 window_height;
vx_int32 window_stride;
vx_float32 threshold;
} vx_hog_t;
3.27.2. Functions
vxHOGCellsNode
[Graph] Performs cell calculations for the average gradient magnitude and gradient orientation histograms.
vx_node vxHOGCellsNode(
vx_graph graph,
vx_image input,
vx_int32 cell_width,
vx_int32 cell_height,
vx_int32 num_bins,
vx_tensor magnitudes,
vx_tensor bins);
Firstly, the gradient magnitude and gradient orientation are computed for
each pixel in the input image.
Two 1-D centred, point discrete derivative masks are applied to the input
image in the horizontal and vertical directions:
M_{h} = [-1, 0, 1] and M_{v} = [-1, 0, 1]^{T}. G_{v} is the
result of applying mask M_{v} to the input image, and G_{h} is the
result of applying mask M_{h} to the input image.
The border mode used for the gradient calculation is implementation
dependent.
Its behavior should be similar to VX_BORDER_UNDEFINED.
The gradient magnitudes and gradient orientations for each pixel are then
calculated in the following manner.

G(x,y) = sqrt(G_{v}(x,y)^{2} + G_{h}(x,y)^{2})

θ(x,y) = arctan(G_{v}(x,y), G_{h}(x,y))
where arctan(y, x) denotes the two-argument arctangent.
Secondly, the gradient magnitudes and orientations are used to compute the bins output tensor and optional magnitudes output tensor. These tensors are computed on a cell level where the cells are rectangular in shape. The magnitudes tensor contains the average gradient magnitude for each cell.

\(magnitudes(c) = \frac{1}{(cell\_width \times cell\_height)} \sum_{w=0}^{cell\_width} \sum_{h=0}^{cell\_height} G_c(w,h)\)
where G_{c} is the gradient magnitudes related to cell c. The bins tensor contains histograms of gradient orientations for each cell. The gradient orientations at each pixel range from 0 to 360 degrees. These are quantised into a set of histogram bins based on the num_bins parameter. Each pixel votes for a specific cell histogram bin based on its gradient orientation. The vote itself is the pixel’s gradient magnitude.

\(bins(c, n) = \sum_{w=0}^{cell\_width} \sum_{h=0}^{cell\_height} G_c(w,h) \times 1[B_c(w,h,num\_bins) == n]\)
where B_{c} produces the histogram bin number based on the gradient orientation of the pixel at location (w,h) in cell c based on the num_bins and

1[B_{c}(w,h,num_bins) == n]
is a delta function with value 1 when B_{c}(w,h,num_bins) == n and 0 otherwise.
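The per-pixel vote can be sketched as follows (illustrative names, not the OpenVX API; bin edges assume orientations in [0, 360) divided uniformly into num_bins bins):

```c
#include <assert.h>

/* Quantize a gradient orientation (degrees, [0, 360)) into one of
   num_bins uniform histogram bins. */
static int hog_bin(double theta_deg, int num_bins) {
    int b = (int)(theta_deg * num_bins / 360.0);
    return b >= num_bins ? num_bins - 1 : b;  /* guard the upper edge */
}

/* Each pixel votes for one cell-histogram bin; the vote is the pixel's
   gradient magnitude, matching the bins(c, n) sum above. */
static void hog_vote(double *bins, int num_bins,
                     double theta_deg, double magnitude) {
    bins[hog_bin(theta_deg, num_bins)] += magnitude;
}
```

With num_bins = 9 each bin spans 40 degrees, so an orientation of 45 degrees lands in bin 1 and contributes its full magnitude there.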
Parameters

[in]
graph  The reference to the graph. 
[in]
input  The input image of type VX_DF_IMAGE_U8. 
[in]
cell_width  The histogram cell width of type VX_TYPE_INT32. 
[in]
cell_height  The histogram cell height of type VX_TYPE_INT32. 
[in]
num_bins  The histogram size of type VX_TYPE_INT32. 
[out]
magnitudes  (Optional) The output average gradient magnitudes per cell: a vx_tensor of type VX_TYPE_INT16 of size [floor(image_{width} / cell_{width}), floor(image_{height} / cell_{height})]. 
[out]
bins  The output gradient orientation histograms per cell: a vx_tensor of type VX_TYPE_INT16 of size [floor(image_{width} / cell_{width}), floor(image_{height} / cell_{height}), num_{bins}].
Returns: vx_node
.
Return Values

0  Node could not be created.

*  Node handle.
vxHOGFeaturesNode
[Graph] The node produces HOG features for the W1xW2 window in a sliding window fashion over the whole input image. Each position produces a HOG feature vector.
vx_node vxHOGFeaturesNode(
vx_graph graph,
vx_image input,
vx_tensor magnitudes,
vx_tensor bins,
const vx_hog_t* params,
vx_size hog_param_size,
vx_tensor features);
Firstly, if a magnitudes tensor is provided, the cell histograms in the bins tensor are normalised by the average cell gradient magnitudes.

\(bins(c,n) = \frac{bins(c,n)}{magnitudes(c)}\)
To account for changes in illumination and contrast the cell histograms must be locally normalized, which requires grouping the cell histograms together into larger, spatially connected blocks. Blocks are rectangular grids represented by three parameters: the number of cells per block, the number of pixels per cell, and the number of bins per cell histogram (num_{bins}). These blocks typically overlap, meaning that each cell histogram contributes more than once to the final descriptor. To normalize a block, its cell histograms h are grouped together to form a vector v = [h_{1}, h_{2}, h_{3}, …, h_{n}]. This vector is normalised using L2-Hys, which means performing an L2-norm on this vector, clipping the result (by limiting the maximum values of v to be threshold), and renormalizing again. If the threshold is equal to zero then L2-Hys normalization is not performed.

\(L2norm(v) = \frac{v}{\sqrt{\|v\|_2^2 + \epsilon^2}}\)
where \(\|v\|_k\) is its k-norm for k = 1, 2, and ε is a small constant. For a specific window its HOG descriptor is then the concatenated vector of the components of the normalized cell histograms from all of the block regions contained in the window. The W1xW2 window starting position is at coordinates 0x0. If the input image has dimensions that are not an integer multiple of W1xW2 blocks with the specified stride, then the last positions that contain only a partial W1xW2 window will be calculated with the remaining part of the W1xW2 window padded with zeroes. The window W1xW2 must also have a size such that it contains an integer number of cells; otherwise the node is not well-defined. The final output tensor will contain a number of HOG descriptors equal to the number of windows in the input image. The output features tensor has 3 dimensions, given by:

(⌊ (I_{w} - W_{w}) / W_{s} ⌋ + 1,

⌊ (I_{h} - W_{h}) / W_{s} ⌋ + 1,

⌊ (W_{w} - B_{w}) / B_{s} + 1 ⌋ × ⌊ (W_{h} - B_{h}) / B_{s} + 1 ⌋ × ( (B_{w} × B_{h}) / (C_{w} × C_{h}) ) × num_{bins})
where I, W, B, and C refer to the image, window, block, and cell respectively, and the subscripts w, h, and s select the width, height, and stride properties respectively.
See vxCreateTensor
and vxCreateVirtualTensor
.
We recommend that the output tensors always be virtual objects, with this node connected directly to the classifier. The output tensor will be very large, and using non-virtual tensors will result in a poorly optimized implementation. Merging this node with a classifier node, such as that described in the classifier extension, will result in better performance. Notice that this node creation function has more parameters than the corresponding kernel. The numbering of kernel parameters (required if you create this node using the generic interface) is explicitly specified here.
Parameters

[in] graph - The reference to the graph.
[in] input - The input image of type VX_DF_IMAGE_U8. (Kernel parameter #0)
[in] magnitudes - (Optional) The gradient magnitudes per cell of vx_tensor of type VX_TYPE_INT16. It is the output of vxHOGCellsNode. (Kernel parameter #1)
[in] bins - The gradient orientation histograms per cell of vx_tensor of type VX_TYPE_INT16. It is the output of vxHOGCellsNode. (Kernel parameter #2)
[in] params - The parameters of type vx_hog_t. (Kernel parameter #3)
[in] hog_param_size - Size of vx_hog_t in bytes. Note that this parameter is not counted as one of the kernel parameters.
[out] features - The output HOG features of vx_tensor of type VX_TYPE_INT16. (Kernel parameter #4)
Returns: vx_node.
Return Values

0 - Node could not be created.
* - Node handle.
vxuHOGCells
[Immediate] Performs cell calculations for the average gradient magnitude and gradient orientation histograms.
vx_status vxuHOGCells(
vx_context context,
vx_image input,
vx_int32 cell_size,
vx_int32 num_bins,
vx_tensor magnitudes,
vx_tensor bins);
Firstly, the gradient magnitude and gradient orientation are computed for
each pixel in the input image.
Two 1-D centred, point discrete derivative masks are applied to the input image in the horizontal and vertical directions:

M_{h} = [-1, 0, 1] and M_{v} = [-1, 0, 1]^{T}

G_{v} is the result of applying mask M_{v} to the input image, and G_{h} is the result of applying mask M_{h} to the input image.
The border mode used for the gradient calculation is implementation
dependent.
Its behavior should be similar to VX_BORDER_UNDEFINED
.
The gradient magnitudes and gradient orientations for each pixel are then
calculated in the following manner.

G(x,y) = sqrt(G_{v}(x,y)^{2} + G_{h}(x,y)^{2})

θ(x,y) = arctan(G_{v}(x,y), G_{h}(x,y))
Secondly, the gradient magnitudes and orientations are used to compute the bins output tensor and optional magnitudes output tensor. These tensors are computed on a cell level where the cells are rectangular in shape. The magnitudes tensor contains the average gradient magnitude for each cell.

\(magnitudes(c) = \frac{1}{(cell\_width \times cell\_height)} \sum_{w=0}^{cell\_width} \sum_{h=0}^{cell\_height} G_c(w,h)\)
where G_{c} is the gradient magnitudes related to cell c. The bins tensor contains histograms of gradient orientations for each cell. The gradient orientations at each pixel range from 0 to 360 degrees. These are quantised into a set of histogram bins based on the num_bins parameter. Each pixel votes for a specific cell histogram bin based on its gradient orientation. The vote itself is the pixel’s gradient magnitude.

\(bins(c, n) = \sum_{w=0}^{cell\_width} \sum_{h=0}^{cell\_height} G_c(w,h) \times 1[B_c(w,h,num\_bins) == n]\)
where B_{c} produces the histogram bin number for the gradient orientation of the pixel at location (w,h) in cell c, based on num_bins, and

1[B_{c}(w,h,num_bins) == n]
is a delta function whose value is 1 when B_{c}(w,h,num_bins) == n and 0 otherwise.
Parameters

[in] context - The reference to the overall context.
[in] input - The input image of type VX_DF_IMAGE_U8.
[in] cell_width - The histogram cell width of type VX_TYPE_INT32.
[in] cell_height - The histogram cell height of type VX_TYPE_INT32.
[in] num_bins - The histogram size of type VX_TYPE_INT32.
[out] magnitudes - The output average gradient magnitudes per cell of vx_tensor of type VX_TYPE_INT16 of size [floor(image_{width} / cell_{width}), floor(image_{height} / cell_{height})].
[out] bins - The output gradient orientation histograms per cell of vx_tensor of type VX_TYPE_INT16 of size [floor(image_{width} / cell_{width}), floor(image_{height} / cell_{height}), num_{bins}].
Returns: A vx_status_e enumeration.
Return Values

VX_SUCCESS - Success.
* - An error occurred. See vx_status_e.
vxuHOGFeatures
[Immediate] Computes Histogram of Oriented Gradients features for the W1xW2 window in a sliding window fashion over the whole input image.
vx_status vxuHOGFeatures(
vx_context context,
vx_image input,
vx_tensor magnitudes,
vx_tensor bins,
const vx_hog_t* params,
vx_size hog_param_size,
vx_tensor features);
Firstly, if a magnitudes tensor is provided, the cell histograms in the bins tensor are normalised by the average cell gradient magnitudes.

\(bins(c,n) = \frac{bins(c,n)}{magnitudes(c)}\)
To account for changes in illumination and contrast the cell histograms must be locally normalized, which requires grouping the cell histograms together into larger, spatially connected blocks. Blocks are rectangular grids represented by three parameters: the number of cells per block, the number of pixels per cell, and the number of bins per cell histogram (num_{bins}). These blocks typically overlap, meaning that each cell histogram contributes more than once to the final descriptor. To normalize a block, its cell histograms h are grouped together to form a vector v = [h_{1}, h_{2}, h_{3}, …, h_{n}]. This vector is normalised using L2-Hys, which means performing an L2-norm on this vector, clipping the result (by limiting the maximum values of v to be threshold), and renormalizing again. If the threshold is equal to zero then L2-Hys normalization is not performed.

\(L2norm(v) = \frac{v}{\sqrt{\|v\|_2^2 + \epsilon^2}}\)
where \(\|v\|_k\) is its k-norm for k = 1, 2, and ε is a small constant. For a specific window its HOG descriptor is then the concatenated vector of the components of the normalized cell histograms from all of the block regions contained in the window. The W1xW2 window starting position is at coordinates 0x0. If the input image has dimensions that are not an integer multiple of W1xW2 blocks with the specified stride, then the last positions that contain only a partial W1xW2 window will be calculated with the remaining part of the W1xW2 window padded with zeroes. The window W1xW2 must also have a size such that it contains an integer number of cells; otherwise the node is not well-defined. The final output tensor will contain a number of HOG descriptors equal to the number of windows in the input image. The output features tensor has 3 dimensions, given by:

(⌊ (I_{w} - W_{w}) / W_{s} ⌋ + 1,

⌊ (I_{h} - W_{h}) / W_{s} ⌋ + 1,

⌊ (W_{w} - B_{w}) / B_{s} + 1 ⌋ × ⌊ (W_{h} - B_{h}) / B_{s} + 1 ⌋ × ( (B_{w} × B_{h}) / (C_{w} × C_{h}) ) × num_{bins})
where I, W, B, and C refer to the image, window, block, and cell respectively, and the subscripts w, h, and s select the width, height, and stride properties respectively.
See vxCreateTensor
and vxCreateVirtualTensor
.
The output tensor from this function may be very large. For this reason, it is not recommended that this “immediate mode” version of the function be used. The preferred method to perform this function is as a graph node with a virtual tensor as the output.
Parameters

[in] context - The reference to the overall context.
[in] input - The input image of type VX_DF_IMAGE_U8.
[in] magnitudes - The average gradient magnitudes per cell of vx_tensor of type VX_TYPE_INT16. It is the output of vxuHOGCells.
[in] bins - The gradient orientation histograms per cell of vx_tensor of type VX_TYPE_INT16. It is the output of vxuHOGCells.
[in] params - The parameters of type vx_hog_t.
[in] hog_param_size - Size of vx_hog_t in bytes.
[out] features - The output HOG features of vx_tensor of type VX_TYPE_INT16.
Returns: A vx_status_e enumeration.
Return Values

VX_SUCCESS - Success.
* - An error occurred. See vx_status_e.
3.28. Harris Corners
Computes the Harris Corners of an image.
The Harris Corners are computed with several parameters.
The computation to find the corner values or scores can be summarized as:

where V_{c} is the thresholded corner value.
The normalized Sobel kernels used for the gradient computation shall be as shown below:

For gradient size 3:
\[\mathbf{Sobel}_x(Normalized)= \frac{1}{4 \times 255 \times b} \times \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}\]\[\mathbf{Sobel}_y(Normalized) = \frac{1}{4 \times 255 \times b} \times transpose({sobel}_x) = \frac{1}{4 \times 255 \times b} \times \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}\]
For gradient size 5:
\[\mathbf{Sobel}_x(Normalized) = \frac{1}{16 \times 255 \times b} \times \begin{bmatrix} -1 & -2 & 0 & 2 & 1 \\ -4 & -8 & 0 & 8 & 4 \\ -6 & -12 & 0 & 12 & 6 \\ -4 & -8 & 0 & 8 & 4 \\ -1 & -2 & 0 & 2 & 1 \\ \end{bmatrix}\]\[\mathbf{Sobel}_y(Normalized)= \frac{1}{16 \times 255 \times b} \times transpose({sobel}_x)\]
For gradient size 7:
\[\mathbf{Sobel}_x(Normalized)= \frac{1}{64 \times 255 \times b} \times \begin{bmatrix} -1 & -4 & -5 & 0 & 5 & 4 & 1 \\ -6 & -24 & -30 & 0 & 30 & 24 & 6 \\ -15 & -60 & -75 & 0 & 75 & 60 & 15 \\ -20 & -80 & -100 & 0 & 100 & 80 & 20 \\ -15 & -60 & -75 & 0 & 75 & 60 & 15 \\ -6 & -24 & -30 & 0 & 30 & 24 & 6 \\ -1 & -4 & -5 & 0 & 5 & 4 & 1 \\ \end{bmatrix}\]\[\mathbf{Sobel}_y(Normalized)= \frac{1}{64 \times 255 \times b} \times transpose({sobel}_x)\]
V_{c} is then non-maximally suppressed, returning the same results as using the following algorithm:

1. Filter the features using the non-maximum suppression algorithm defined for vxFastCornersNode.
2. Create an array of features sorted by V_{c} in descending order: V_{c}(j) > V_{c}(j+1).
3. Initialize an empty feature set F = {}.
4. For each feature j in the sorted array, while V_{c}(j) > T_{c}: if there is no feature i in F such that the Euclidean distance between pixels i and j is less than r, add the feature j to the feature set F.

An implementation shall support all values of Euclidean distance r that satisfy: 0 ≤ max_dist ≤ 30. The feature set F is returned as a vx_array of vx_keypoint_t structs.
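The Euclidean-distance suppression step above can be sketched in C; corner_t and suppress_corners are hypothetical names, not OpenVX types, and the input is assumed to be already sorted by corner value in descending order:

```c
/* Illustrative sketch of the distance-based suppression described above.
 * `in` is sorted by corner value V_c in descending order; corners at or
 * below the threshold t are ignored; a corner is kept only if no stronger
 * kept corner lies within Euclidean distance r. Hypothetical types/names. */
typedef struct { float x, y, strength; } corner_t;

static int suppress_corners(const corner_t *in, int n, float r, float t,
                            corner_t *out)
{
    int kept = 0;
    for (int j = 0; j < n && in[j].strength > t; ++j) {
        int too_close = 0;
        for (int i = 0; i < kept; ++i) {
            float dx = in[j].x - out[i].x;
            float dy = in[j].y - out[i].y;
            if (dx * dx + dy * dy < r * r) { too_close = 1; break; }
        }
        if (!too_close)
            out[kept++] = in[j];       /* add feature j to the set F */
    }
    return kept;
}
```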
3.28.1. Functions
vxHarrisCornersNode
[Graph] Creates a Harris Corners Node.
vx_node vxHarrisCornersNode(
vx_graph graph,
vx_image input,
vx_scalar strength_thresh,
vx_scalar min_distance,
vx_scalar sensitivity,
vx_int32 gradient_size,
vx_int32 block_size,
vx_array corners,
vx_scalar num_corners);
Parameters

[in] graph - The reference to the graph.
[in] input - The input VX_DF_IMAGE_U8 image.
[in] strength_thresh - The VX_TYPE_FLOAT32 minimum threshold with which to eliminate Harris Corner scores (computed using the normalized Sobel kernel).
[in] min_distance - The VX_TYPE_FLOAT32 radial Euclidean distance for non-maximum suppression.
[in] sensitivity - The VX_TYPE_FLOAT32 scalar sensitivity threshold k from the Harris-Stephens equation.
[in] gradient_size - The gradient window size to use on the input. The implementation must support at least 3, 5, and 7.
[in] block_size - The block window size used to compute the Harris Corner score. The implementation must support at least 3, 5, and 7.
[out] corners - The array of VX_TYPE_KEYPOINT objects. The order of the keypoints in this array is implementation dependent.
[out] num_corners - [optional] The total number of detected corners in image. Use a VX_TYPE_SIZE scalar.
Returns: vx_node.
Return Values

vx_node - A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuHarrisCorners
[Immediate] Computes the Harris Corners over an image and produces the array of scored points.
vx_status vxuHarrisCorners(
vx_context context,
vx_image input,
vx_scalar strength_thresh,
vx_scalar min_distance,
vx_scalar sensitivity,
vx_int32 gradient_size,
vx_int32 block_size,
vx_array corners,
vx_scalar num_corners);
Parameters

[in] context - The reference to the overall context.
[in] input - The input VX_DF_IMAGE_U8 image.
[in] strength_thresh - The VX_TYPE_FLOAT32 minimum threshold with which to eliminate Harris Corner scores (computed using the normalized Sobel kernel).
[in] min_distance - The VX_TYPE_FLOAT32 radial Euclidean distance for non-maximum suppression.
[in] sensitivity - The VX_TYPE_FLOAT32 scalar sensitivity threshold k from the Harris-Stephens equation.
[in] gradient_size - The gradient window size to use on the input. The implementation must support at least 3, 5, and 7.
[in] block_size - The block window size used to compute the Harris Corner score. The implementation must support at least 3, 5, and 7.
[out] corners - The array of VX_TYPE_KEYPOINT structs. The order of the keypoints in this array is implementation dependent.
[out] num_corners - [optional] The total number of detected corners in image. Use a VX_TYPE_SIZE scalar.
Returns: A vx_status_e enumeration.
Return Values

VX_SUCCESS - Success.
* - An error occurred. See vx_status_e.
3.29. Histogram
Generates a distribution from an image.
This kernel counts the number of occurrences of each pixel value within the window size of a precalculated number of bins. A pixel with intensity I will result in incrementing histogram bin i where

i = (I - offset) × (numBins / range), I ≥ offset, I < offset + range
Pixels with intensities that don’t meet these conditions will have no effect
on the histogram.
Here offset, range and numBins are values of histogram attributes (see
VX_DISTRIBUTION_OFFSET
, VX_DISTRIBUTION_RANGE
,
VX_DISTRIBUTION_BINS
).
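The bin computation above can be sketched in C; histogram_bin is a hypothetical helper, not an OpenVX function, and the -1 return is an illustrative way to mark pixels that do not affect the histogram:

```c
/* Illustrative bin-index computation per the equation above. Pixels whose
 * intensity falls outside [offset, offset + range) have no effect on the
 * histogram; this sketch signals that with a -1 return value. */
static int histogram_bin(int I, int offset, int range, int num_bins)
{
    if (I < offset || I >= offset + range)
        return -1;                         /* pixel does not vote */
    return (I - offset) * num_bins / range;
}
```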
3.29.1. Functions
vxHistogramNode
[Graph] Creates a Histogram node.
vx_node vxHistogramNode(
vx_graph graph,
vx_image input,
vx_distribution distribution);
Parameters

[in] graph - The reference to the graph.
[in] input - The input image in VX_DF_IMAGE_U8.
[out] distribution - The output distribution.
Returns: vx_node.
Return Values

vx_node - A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuHistogram
[Immediate] Generates a distribution from an image.
vx_status vxuHistogram(
vx_context context,
vx_image input,
vx_distribution distribution);
Parameters

[in] context - The reference to the overall context.
[in] input - The input image in VX_DF_IMAGE_U8.
[out] distribution - The output distribution.
Returns: A vx_status_e enumeration.
Return Values

VX_SUCCESS - Success.
* - An error occurred. See vx_status_e.
3.30. HoughLinesP
Finds the Probabilistic Hough Lines detected in the input binary image.
The node implements the Progressive Probabilistic Hough Transform described in Matas, J., Galambos, C., and Kittler, J.V., "Robust Detection of Lines Using the Progressive Probabilistic Hough Transform", CVIU 78(1), pp. 119-137 (2000). The linear Hough transform algorithm uses a two-dimensional array, called an accumulator, to detect the existence of a line described by r = x cos θ + y sin θ. The dimension of the accumulator equals the number of unknown parameters, i.e., two, considering quantized values of r and θ in the pair (r,θ). For each pixel at (x,y) and its neighbourhood, the Hough transform algorithm determines if there is enough evidence of a straight line at that pixel. If so, it will calculate the parameters (r,θ) of that line, look for the accumulator bin that the parameters fall into, and increment the value of that bin.
Algorithm Outline:

1. Check the input image; if it is empty then finish.
2. Update the accumulator with a single pixel randomly selected from the input image.
3. Remove the selected pixel from the input image.
4. Check if the highest peak in the accumulator that was modified by the new pixel is higher than threshold. If not then go to step 1.
5. Look along a corridor specified by the peak in the accumulator, and find the longest segment that either is continuous or exhibits a gap not exceeding a given threshold.
6. Remove the pixels in the segment from the input image.
7. "Unvote" from the accumulator all the pixels from the line that have previously voted.
8. If the line segment is longer than the minimum length, add it to the output list.
9. Go to step 1.

Each line is stored in a vx_line2d_t struct such that start_x ≤ end_x.
3.30.1. Data Structures
vx_hough_lines_p_t
Hough lines probability parameters.
typedef struct _vx_hough_lines_p_t {
vx_float32 rho;
vx_float32 theta;
vx_int32 threshold;
vx_int32 line_length;
vx_int32 line_gap;
vx_float32 theta_max;
vx_float32 theta_min;
} vx_hough_lines_p_t;
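As an illustration of how the fields above relate to the algorithm, a structure with the same layout can be filled like this. The struct is re-declared locally and every value is hypothetical, not a default mandated by the specification:

```c
/* Hypothetical parameter choice mirroring vx_hough_lines_p_t. rho and
 * theta are the resolution of the (r, theta) accumulator; threshold,
 * line_length and line_gap drive the peak check, segment search, and
 * minimum-length test in the outline above; theta_min/theta_max bound
 * the searched orientations. All values are illustrative only. */
typedef struct {
    float rho, theta;
    int   threshold, line_length, line_gap;
    float theta_max, theta_min;
} hough_lines_params_t;

static hough_lines_params_t example_hough_params(void)
{
    const float PI_F = 3.14159265358979f;
    hough_lines_params_t p;
    p.rho         = 1.0f;            /* 1-pixel r resolution */
    p.theta       = PI_F / 180.0f;   /* 1-degree angular resolution */
    p.threshold   = 50;              /* accumulator peak threshold */
    p.line_length = 30;              /* minimum accepted segment length */
    p.line_gap    = 10;              /* maximum tolerated gap in a segment */
    p.theta_min   = 0.0f;
    p.theta_max   = PI_F;
    return p;
}
```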
3.30.2. Functions
vxHoughLinesPNode
[Graph] Finds the Probabilistic Hough Lines detected in the input binary image, each line is stored in the output array as a set of points (x1, y1, x2, y2) .
vx_node vxHoughLinesPNode(
vx_graph graph,
vx_image input,
const vx_hough_lines_p_t* params,
vx_array lines_array,
vx_scalar num_lines);
Some implementations of the algorithm may have a random or nondeterministic element. If the target application is in a safetycritical environment this should be borne in mind and steps taken in the implementation, the application or both to achieve the level of determinism required by the system design.
Parameters

[in] graph - The graph handle.
[in] input - 8-bit, single channel binary source image.
[in] params - Parameters of the struct vx_hough_lines_p_t.
[out] lines_array - Contains the array of lines; see vx_line2d_t. The order of lines is implementation dependent.
[out] num_lines - [optional] The total number of detected lines in image. Use a VX_TYPE_SIZE scalar.
Returns: vx_node.
Return Values

vx_node - A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuHoughLinesP
[Immediate] Finds the Probabilistic Hough Lines detected in the input binary image, each line is stored in the output array as a set of points (x1, y1, x2, y2) .
vx_status vxuHoughLinesP(
vx_context context,
vx_image input,
const vx_hough_lines_p_t* params,
vx_array lines_array,
vx_scalar num_lines);
Some implementations of the algorithm may have a random or nondeterministic element. If the target application is in a safetycritical environment this should be borne in mind and steps taken in the implementation, the application or both to achieve the level of determinism required by the system design.
Parameters

[in] context - The reference to the overall context.
[in] input - 8-bit, single channel binary source image.
[in] params - Parameters of the struct vx_hough_lines_p_t.
[out] lines_array - Contains the array of lines; see vx_line2d_t. The order of lines is implementation dependent.
[out] num_lines - [optional] The total number of detected lines in image. Use a VX_TYPE_SIZE scalar.
Returns: A vx_status_e enumeration.
Return Values

VX_SUCCESS - Success.
* - An error occurred. See vx_status_e.
3.31. Integral Image
Computes the integral image of the input. The output image dimensions should be the same as the dimensions of the input image.
Each output pixel is the sum of the corresponding input pixel and all other pixels above and to the left of it.

dst(x,y) = sum(x,y)
where, for x ≥ 0 and y ≥ 0

sum(x,y) = src(x,y) + sum(x-1,y) + sum(x,y-1) - sum(x-1,y-1)
otherwise,

sum(x,y) = 0
The overflow policy used is VX_CONVERT_POLICY_WRAP
.
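The recurrence above can be sketched in C with wrapping unsigned arithmetic matching VX_CONVERT_POLICY_WRAP; buffer layout and the function name are assumptions:

```c
#include <stdint.h>

/* Illustrative sketch of the integral-image recurrence above. Unsigned
 * 32-bit arithmetic wraps on overflow, matching VX_CONVERT_POLICY_WRAP.
 * Buffers are row-major; the function name is hypothetical. */
static void integral_image(const uint8_t *src, uint32_t *sum, int w, int h)
{
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            /* out-of-range terms of the recurrence are 0 */
            uint32_t left = (x > 0) ? sum[y * w + (x - 1)] : 0u;
            uint32_t up   = (y > 0) ? sum[(y - 1) * w + x] : 0u;
            uint32_t diag = (x > 0 && y > 0) ? sum[(y - 1) * w + (x - 1)] : 0u;
            sum[y * w + x] = (uint32_t)src[y * w + x] + left + up - diag;
        }
}
```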
3.31.1. Functions
vxIntegralImageNode
[Graph] Creates an Integral Image Node.
vx_node vxIntegralImageNode(
vx_graph graph,
vx_image input,
vx_image output);
Parameters

[in] graph - The reference to the graph.
[in] input - The input image in VX_DF_IMAGE_U8 format.
[out] output - The output image in VX_DF_IMAGE_U32 format, which must have the same dimensions as the input image.
Returns: vx_node.
Return Values

vx_node - A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuIntegralImage
[Immediate] Computes the integral image of the input.
vx_status vxuIntegralImage(
vx_context context,
vx_image input,
vx_image output);
Parameters

[in] context - The reference to the overall context.
[in] input - The input image in VX_DF_IMAGE_U8 format.
[out] output - The output image in VX_DF_IMAGE_U32 format.
Returns: A vx_status_e enumeration.
Return Values

VX_SUCCESS - Success.
* - An error occurred. See vx_status_e.
3.32. LBP
Extracts LBP image from an input image. The output image dimensions should be the same as the dimensions of the input image.
The function calculates one of the following LBP descriptors: Local Binary Pattern, Modified Local Binary Pattern, or Uniform Local Binary Pattern.
Local binary pattern is defined as: each pixel (y,x) generates an 8-bit value describing the local binary pattern around the pixel, by comparing the pixel value with its 8 neighbours (selected neighbours of the 3x3 or 5x5 window).
We will define the pixels for the 3x3 neighbourhood as:
and the pixels in a 5x5 neighbourhood as:
We also define the sign difference function:
Using the above definitions, the LBP image is defined by the following equation:

\(DstImg[y,x] = \sum_{p=0}^{7} s(g_p - g_c)2^p\)
For the modified local binary pattern, each pixel (y,x) generates an 8-bit value describing the modified local binary pattern around the pixel, by comparing the average of the 8 neighbour pixels with its 8 neighbours (5x5 window).
The uniform LBP patterns refer to the patterns which have a limited number of transitions or discontinuities (2 or fewer) in the circular binary presentation.
For each pixel (y,x) a value is generated describing the transitions around the pixel (if there are up to 2 transitions between 0 and 1 or 1 and 0), and an additional value for all other local binary pattern values. We can define the function that measures transitions as:

\(U = |s(g_7 - g_c) - s(g_0 - g_c)| + \sum_{p=1}^{7} |s(g_p - g_c) - s(g_{p-1} - g_c)|\)
With the above definitions, the unified LBP equation is defined as.
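The basic 3x3 LBP of a single pixel can be sketched in C. The clockwise neighbour ordering g_0..g_7 used here is an assumption for illustration; the normative neighbour layout is given by the specification's neighbourhood figures:

```c
#include <stdint.h>

/* Sign difference function s(): 1 when x >= 0, else 0. */
static int s_sign(int x) { return x >= 0 ? 1 : 0; }

/* Illustrative 3x3 LBP for one pixel. nb holds the 3x3 neighbourhood with
 * the centre pixel g_c at nb[1][1]. The clockwise ordering of g_0..g_7
 * starting at the top-left neighbour is an assumption, not normative. */
static uint8_t lbp3x3(const uint8_t nb[3][3])
{
    const uint8_t g[8] = { nb[0][0], nb[0][1], nb[0][2], nb[1][2],
                           nb[2][2], nb[2][1], nb[2][0], nb[1][0] };
    uint8_t gc = nb[1][1], out = 0;
    for (int p = 0; p < 8; ++p)
        out |= (uint8_t)(s_sign((int)g[p] - (int)gc) << p);
    return out;
}
```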
3.32.1. Enumerations
vx_lbp_format_e
Local binary pattern supported.
enum vx_lbp_format_e {
VX_LBP = VX_ENUM_BASE( VX_ID_KHRONOS, VX_ENUM_LBP_FORMAT ) + 0x0,
VX_MLBP = VX_ENUM_BASE( VX_ID_KHRONOS, VX_ENUM_LBP_FORMAT ) + 0x1,
VX_ULBP = VX_ENUM_BASE( VX_ID_KHRONOS, VX_ENUM_LBP_FORMAT ) + 0x2,
};
Enumerator
3.32.2. Functions
vxLBPNode
[Graph] Creates a node that extracts LBP image from an input image
vx_node vxLBPNode(
vx_graph graph,
vx_image in,
vx_enum format,
vx_int8 kernel_size,
vx_image out);
Parameters

[in] graph - The reference to the graph.
[in] in - An input image in vx_image, or SrcImg in the equations. The image is of type VX_DF_IMAGE_U8.
[in] format - A variation of LBP like original LBP and mLBP. See vx_lbp_format_e.
[in] kernel_size - Kernel size. Only sizes of 3 and 5 are supported.
[out] out - An output image in vx_image, or DstImg in the equations. The image is of type VX_DF_IMAGE_U8 with the same dimensions as the input image.
Returns: vx_node.
Return Values

vx_node - A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuLBP
[Immediate] The function extracts LBP image from an input image
vx_status vxuLBP(
vx_context context,
vx_image in,
vx_enum format,
vx_int8 kernel_size,
vx_image out);
Parameters

[in] context - The reference to the overall context.
[in] in - An input image in vx_image, or SrcImg in the equations. The image is of type VX_DF_IMAGE_U8.
[in] format - A variation of LBP like original LBP and mLBP. See vx_lbp_format_e.
[in] kernel_size - Kernel size. Only sizes of 3 and 5 are supported.
[out] out - An output image in vx_image, or DstImg in the equations. The image is of type VX_DF_IMAGE_U8.
Returns: A vx_status_e enumeration.
Return Values

VX_SUCCESS - Success.
* - An error occurred. See vx_status_e.
3.33. Laplacian Image Pyramid
Computes a Laplacian Image Pyramid from an input image.
This vision function creates the Laplacian image pyramid from the input
image.
First, a Gaussian pyramid is created with the scale attribute
VX_SCALE_PYRAMID_HALF
and the number of levels equal to N+1,
where N is the number of levels in the laplacian pyramid.
The border mode for the Gaussian pyramid calculation should be
VX_BORDER_REPLICATE
.
Then, for each i = 0 … N-1, the Laplacian level L_{i} is computed as:

L_{i} = G_{i} - UpSample(G_{i+1}).
Here G_{i} is the i-th level of the Gaussian pyramid.
UpSample(I) is computed by injecting even zero rows and columns and then convolving the result with the 5x5 Gaussian filter multiplied by 4.

\(UpSample(I)_{x,y} = 4 \sum_{k=-2}^{2} \sum_{l=-2}^{2} I_{x-k,y-l}^{'} W_{k+2,l+2}\)
L_{0} shall always have the same resolution as the input image. The output image is equal to G_{N}.
The border mode for the UpSample calculation should be
VX_BORDER_REPLICATE
.
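The UpSample operation can be sketched in C. This is an illustrative sketch only: the separable 5x5 Gaussian weights (outer product of {1,4,6,4,1}/16), the row-major buffer layout, and the function name are assumptions:

```c
/* Illustrative UpSample: inject zero rows/columns into `in` (w x h), then
 * convolve with 4 times a 5x5 Gaussian, replicating the border of the
 * zero-injected image. `out` is 2w x 2h, row-major. Weights and layout
 * are assumptions for illustration. */
static void upsample2x(const float *in, int w, int h, float *out)
{
    static const float k1[5] = { 1.f/16, 4.f/16, 6.f/16, 4.f/16, 1.f/16 };
    const int W = 2 * w, H = 2 * h;
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            float acc = 0.0f;
            for (int l = -2; l <= 2; ++l)
                for (int k = -2; k <= 2; ++k) {
                    int xs = x - k, ys = y - l;
                    /* replicate border of the zero-injected image */
                    if (xs < 0) xs = 0; else if (xs >= W) xs = W - 1;
                    if (ys < 0) ys = 0; else if (ys >= H) ys = H - 1;
                    /* I': zero-injected image, nonzero at even indices */
                    float v = (xs % 2 == 0 && ys % 2 == 0)
                                  ? in[(ys / 2) * w + (xs / 2)] : 0.0f;
                    acc += v * k1[k + 2] * k1[l + 2];
                }
            out[y * W + x] = 4.0f * acc;   /* the factor of 4 above */
        }
}
```

Note that for interior pixels a constant image is preserved exactly: per dimension the even-indexed kernel taps sum to 1/2, and the factor of 4 cancels the two halvings.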
3.33.1. Functions
vxLaplacianPyramidNode
[Graph] Creates a node for a Laplacian Image Pyramid.
vx_node vxLaplacianPyramidNode(
vx_graph graph,
vx_image input,
vx_pyramid laplacian,
vx_image output);
Parameters

[in] graph - The reference to the graph.
[in] input - The input image in VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16 format.
[out] laplacian - The Laplacian pyramid with VX_DF_IMAGE_S16 to construct.
[out] output - The lowest resolution image in VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16 format necessary to reconstruct the input image from the pyramid. The output image format should be same as input image format.
See also: Object: Pyramid
Returns: vx_node.
Return Values

vx_node - A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuLaplacianPyramid
[Immediate] Computes a Laplacian pyramid from an input image.
vx_status vxuLaplacianPyramid(
vx_context context,
vx_image input,
vx_pyramid laplacian,
vx_image output);
Parameters

[in] context - The reference to the overall context.
[in] input - The input image in VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16 format.
[out] laplacian - The Laplacian pyramid with VX_DF_IMAGE_S16 to construct.
[out] output - The lowest resolution image in VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16 format necessary to reconstruct the input image from the pyramid. The output image format should be same as input image format.
See also: Object: Pyramid
Returns: A vx_status_e enumeration.
Return Values

VX_SUCCESS - Success.
* - An error occurred. See vx_status_e.
3.34. Magnitude
Implements the Gradient Magnitude Computation Kernel. The output image dimensions should be the same as the dimensions of the input images.
This kernel takes two gradients in VX_DF_IMAGE_S16
format and computes
the VX_DF_IMAGE_S16
normalized magnitude.
Magnitude is computed as:

\(mag(x,y) = \sqrt{grad_x(x,y)^2 + grad_y(x,y)^2}\)
The conceptual definition describing the overflow is given as:
uint16 z = uint16( sqrt( double( uint32( int32(x) * int32(x) ) + uint32( int32(y) * int32(y) ) ) ) + 0.5);
int16 mag = z > 32767 ? 32767 : z;
3.34.1. Functions
vxMagnitudeNode
[Graph] Create a Magnitude node.
vx_node vxMagnitudeNode(
vx_graph graph,
vx_image grad_x,
vx_image grad_y,
vx_image mag);
Parameters

[in] graph - The reference to the graph.
[in] grad_x - The input x image. This must be in VX_DF_IMAGE_S16 format.
[in] grad_y - The input y image. This must be in VX_DF_IMAGE_S16 format.
[out] mag - The magnitude image. This is in VX_DF_IMAGE_S16 format. Must have the same dimensions as the input image.
See also: VX_KERNEL_MAGNITUDE
Returns: vx_node.
Return Values

vx_node - A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuMagnitude
[Immediate] Invokes an immediate Magnitude.
vx_status vxuMagnitude(
vx_context context,
vx_image grad_x,
vx_image grad_y,
vx_image mag);
Parameters

[in] context - The reference to the overall context.
[in] grad_x - The input x image. This must be in VX_DF_IMAGE_S16 format.
[in] grad_y - The input y image. This must be in VX_DF_IMAGE_S16 format.
[out] mag - The magnitude image. This will be in VX_DF_IMAGE_S16 format.
Returns: A vx_status_e enumeration.
Return Values

VX_SUCCESS - Success.
* - An error occurred. See vx_status_e.
3.35. MatchTemplate
Compares an image template against overlapped image regions.
The detailed equations for the matching can be found in vx_comp_metric_e.
The output of the template matching node is a comparison map.
The output comparison map should be the same size as the input image.
The template image size (width*height) shall not be larger than 65535.
If the valid region of the template image is smaller than the entire template image, the result in the destination image is implementation-dependent.
3.35.1. Enumerations
vx_comp_metric_e
comparing metrics.
enum vx_comp_metric_e {
VX_COMPARE_HAMMING = VX_ENUM_BASE( VX_ID_KHRONOS, VX_ENUM_COMP_METRIC ) + 0x0,
VX_COMPARE_L1 = VX_ENUM_BASE( VX_ID_KHRONOS, VX_ENUM_COMP_METRIC ) + 0x1,
VX_COMPARE_L2 = VX_ENUM_BASE( VX_ID_KHRONOS, VX_ENUM_COMP_METRIC ) + 0x2,
VX_COMPARE_CCORR = VX_ENUM_BASE( VX_ID_KHRONOS, VX_ENUM_COMP_METRIC ) + 0x3,
VX_COMPARE_L2_NORM = VX_ENUM_BASE( VX_ID_KHRONOS, VX_ENUM_COMP_METRIC ) + 0x4,
VX_COMPARE_CCORR_NORM = VX_ENUM_BASE( VX_ID_KHRONOS, VX_ENUM_COMP_METRIC ) + 0x5,
};
In all the equations below w and h are width and height of the template image respectively. R is the compare map. T is the template image. I is the image on which the template is searched.
Enumerator

VX_COMPARE_HAMMING
 hamming distance
\(R(x,y) = \frac{1}{w*h}\sum_{\grave{x},\grave{y}}^{w,h} XOR(T(\grave{x},\grave{y}),I(x+\grave{x},y+\grave{y}))\) 
VX_COMPARE_L1
 L1 distance
\(R(x,y) = \frac{1}{w*h}\sum_{\grave{x},\grave{y}}^{w,h} ABS(T(\grave{x},\grave{y}) - I(x+\grave{x},y+\grave{y}))\). 
VX_COMPARE_L2
 L2 distance, normalized by image size
\(R(x,y) = \frac{1}{w*h}\sum_{\grave{x},\grave{y}}^{w,h} (T(\grave{x},\grave{y}) - I(x+\grave{x},y+\grave{y}))^2\). 
VX_COMPARE_CCORR
 cross correlation distance
\(R(x,y) = \frac{1}{w*h}\sum_{\grave{x},\grave{y}}^{w,h} (T(\grave{x},\grave{y})*I(x+\grave{x},y+\grave{y}))\) 
VX_COMPARE_L2_NORM
 L2 normalized distance
\(R(x,y) = \frac{\sum_{\grave{x},\grave{y}}^{w,h} (T(\grave{x},\grave{y}) - I(x+\grave{x},y+\grave{y}))^2} {\sqrt{\sum_{\grave{x},\grave{y}}^{w,h} T(\grave{x},\grave{y})^2 * I(x+\grave{x},y+\grave{y})^2}}\). 
VX_COMPARE_CCORR_NORM
 cross correlation normalized distance
\(R(x,y) = \frac{\sum_{\grave{x},\grave{y}}^{w,h} T(\grave{x},\grave{y}) * I(x+\grave{x},y+\grave{y})*2^{15}} {\sqrt{\sum_{\grave{x},\grave{y}}^{w,h} T(\grave{x},\grave{y})^2 * I(x+\grave{x},y+\grave{y})^2}}\)
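For concreteness, the VX_COMPARE_L1 metric above can be sketched in plain C. The helper below is illustrative only (its name and flat-buffer layout are assumptions, not part of the OpenVX API); it computes one entry R(x,y) of the comparison map as the mean absolute difference between the template T and the image window of I anchored at (x,y):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical helper (not part of the OpenVX API): computes the
 * VX_COMPARE_L1 map entry R(x,y) for one search position, i.e. the
 * mean absolute difference between the template T (w x h) and the
 * window of image I anchored at (x,y). */
static int l1_metric(const unsigned char *I, int stride,
                     const unsigned char *T, int w, int h,
                     int x, int y)
{
    long sum = 0;
    for (int ty = 0; ty < h; ++ty)
        for (int tx = 0; tx < w; ++tx)
            sum += abs((int)T[ty * w + tx] -
                       (int)I[(y + ty) * stride + (x + tx)]);
    return (int)(sum / (w * h)); /* 1/(w*h) normalization from the formula */
}
```

A metric of 0 at (x,y) means the template matches the window there exactly; the smaller the value, the better the match.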
3.35.2. Functions
vxMatchTemplateNode
[Graph] Creates a node that compares an image template against overlapped image regions.
vx_node vxMatchTemplateNode(
vx_graph graph,
vx_image src,
vx_image templateImage,
vx_enum matchingMethod,
vx_image output);
The detailed equations for the matching can be found in vx_comp_metric_e.
The output of the template matching node is a comparison map as described in vx_comp_metric_e.
The node has a limitation on the template image size (width*height): it shall not be larger than 65535.
If the valid region of the template image is smaller than the entire template image, the result in the destination image is implementation-dependent.
Parameters

[in] graph - The reference to the graph.
[in] src - The input image of type VX_DF_IMAGE_U8.
[in] templateImage - Searched template of type VX_DF_IMAGE_U8.
[in] matchingMethod - Attribute specifying the comparison method, vx_comp_metric_e. This function supports only VX_COMPARE_CCORR_NORM and VX_COMPARE_L2.
[out] output - Map of comparison results. The output is an image of type VX_DF_IMAGE_S16.
Returns: vx_node.
Return Values

vx_node - A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuMatchTemplate
[Immediate] The function compares an image template against overlapped image regions.
vx_status vxuMatchTemplate(
vx_context context,
vx_image src,
vx_image templateImage,
vx_enum matchingMethod,
vx_image output);
The detailed equations for the matching can be found in vx_comp_metric_e.
The output of the template matching function is a comparison map as described in vx_comp_metric_e.
The function has a limitation on the template image size (width*height): it shall not be larger than 65535.
If the valid region of the template image is smaller than the entire template image, the result in the destination image is implementation-dependent.
Parameters

[in] context - The reference to the overall context.
[in] src - The input image of type VX_DF_IMAGE_U8.
[in] templateImage - Searched template of type VX_DF_IMAGE_U8.
[in] matchingMethod - Attribute specifying the comparison method, vx_comp_metric_e. This function supports only VX_COMPARE_CCORR_NORM and VX_COMPARE_L2.
[out] output - Map of comparison results. The output is an image of type VX_DF_IMAGE_S16.
Returns: A vx_status_e enumeration.
Return Values

VX_SUCCESS - Success.
* - An error occurred. See vx_status_e.
3.36. Max
Implements a pixel-wise maximum kernel. The output image dimensions should be the same as the dimensions of the input images.
The pixel-wise maximum is performed on VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16 images.
All data types of the input and output images must match.

out[i,j] = (in1[i,j] > in2[i,j] ? in1[i,j] : in2[i,j])
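The per-pixel rule above can be sketched as a plain-C loop over flat S16 buffers (illustrative only; not part of the OpenVX API):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Illustrative sketch (not the OpenVX API): applies
 * out[i] = (in1[i] > in2[i] ? in1[i] : in2[i]) over a flat
 * VX_DF_IMAGE_S16-style buffer of `count` pixels. */
static void pixelwise_max_s16(const int16_t *in1, const int16_t *in2,
                              int16_t *out, size_t count)
{
    for (size_t i = 0; i < count; ++i)
        out[i] = (in1[i] > in2[i]) ? in1[i] : in2[i];
}
```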
Functions
3.36.1. Functions
vxMaxNode
[Graph] Creates a pixel-wise maximum kernel.
vx_node vxMaxNode(
vx_graph graph,
vx_image in1,
vx_image in2,
vx_image out);
Parameters

[in] graph - The reference to the graph where to create the node.
[in] in1 - The first input image. Must be of type VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16.
[in] in2 - The second input image. Must be of type VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16.
[out] out - The output image, which will hold the result of max and will have the same type and dimensions as the input images.
Returns: vx_node.
Return Values

vx_node - A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuMax
[Immediate] Computes pixel-wise maximum values between two images.
vx_status vxuMax(
vx_context context,
vx_image in1,
vx_image in2,
vx_image out);
Parameters

[in] context - The reference to the overall context.
[in] in1 - The first input image. Must be of type VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16.
[in] in2 - The second input image. Must be of type VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16.
[out] out - The output image, which will hold the result of max.
Returns: A vx_status_e enumeration.
Return Values

VX_SUCCESS - Success.
* - An error occurred. See vx_status_e.
3.37. Mean and Standard Deviation
Computes the mean pixel value and the standard deviation of the pixels in the input image (which has a dimension width and height).
The mean value is computed as:

\(\mu = \frac{\left(\sum_{y=0}^h \sum_{x=0}^w src(x,y) \right)} {(width \times height)}\)
The standard deviation is computed as:

\(\sigma = \sqrt{\frac{\left(\sum_{y=0}^h \sum_{x=0}^w (\mu - src(x,y))^2 \right)} {(width \times height)}}\)
Functions
3.37.1. Functions
vxMeanStdDevNode
[Graph] Creates a mean value and optionally, a standard deviation node.
vx_node vxMeanStdDevNode(
vx_graph graph,
vx_image input,
vx_scalar mean,
vx_scalar stddev);
Parameters

[in] graph - The reference to the graph.
[in] input - The input image. VX_DF_IMAGE_U8 is supported.
[out] mean - The VX_TYPE_FLOAT32 average pixel value.
[out] stddev - [optional] The VX_TYPE_FLOAT32 standard deviation of the pixel values.
Returns: vx_node.
Return Values

vx_node - A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuMeanStdDev
[Immediate] Computes the mean value and optionally the standard deviation.
vx_status vxuMeanStdDev(
vx_context context,
vx_image input,
vx_float32* mean,
vx_float32* stddev);
Parameters

[in] context - The reference to the overall context.
[in] input - The input image. VX_DF_IMAGE_U8 is supported.
[out] mean - The average pixel value.
[out] stddev - [optional] The standard deviation of the pixel values.
Returns: A vx_status_e enumeration.
Return Values

VX_SUCCESS - Success.
* - An error occurred. See vx_status_e.
3.38. Median Filter
Computes a median pixel value over a window of the input image. The output image dimensions should be the same as the dimensions of the input image.
The median is the middle value over an odd-numbered, sorted range of values.
Note
For kernels that use other structuring patterns than 3x3, see Non Linear Filter.
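The middle-value definition above can be sketched for a 3x3 window in plain C (illustrative only; not part of the OpenVX API):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch (not the OpenVX API): the median of a 3x3
 * window, i.e. the 5th smallest of 9 values, found by insertion-
 * sorting the window into a scratch array. */
static uint8_t median9(const uint8_t w[9])
{
    uint8_t v[9];
    for (int i = 0; i < 9; ++i) {
        int j = i;
        while (j > 0 && v[j - 1] > w[i]) { v[j] = v[j - 1]; --j; }
        v[j] = w[i];
    }
    return v[4]; /* middle element of the sorted window */
}
```

Note how a single outlier (e.g. one saturated pixel in an otherwise uniform window) does not affect the median, which is why this filter suppresses impulse noise.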
Functions
3.38.1. Functions
vxMedian3x3Node
[Graph] Creates a Median Image Node.
vx_node vxMedian3x3Node(
vx_graph graph,
vx_image input,
vx_image output);
Parameters

[in] graph - The reference to the graph.
[in] input - The input image in VX_DF_IMAGE_U8 format.
[out] output - The output image in VX_DF_IMAGE_U8 format, which must have the same dimensions as the input image.
Returns: vx_node.
Return Values

vx_node - A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuMedian3x3
[Immediate] Computes a median filter on the image by a 3x3 window.
vx_status vxuMedian3x3(
vx_context context,
vx_image input,
vx_image output);
Parameters

[in] context - The reference to the overall context.
[in] input - The input image in VX_DF_IMAGE_U8 format.
[out] output - The output image in VX_DF_IMAGE_U8 format.
Returns: A vx_status_e enumeration.
Return Values

VX_SUCCESS - Success.
* - An error occurred. See vx_status_e.
3.39. Min
Implements a pixel-wise minimum kernel. The output image dimensions should be the same as the dimensions of the input images.
The pixel-wise minimum is performed on VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16 images.
All data types of the input and output images must match.

out[i,j] = (in1[i,j] < in2[i,j] ? in1[i,j] : in2[i,j])
Functions
3.39.1. Functions
vxMinNode
[Graph] Creates a pixel-wise minimum kernel.
vx_node vxMinNode(
vx_graph graph,
vx_image in1,
vx_image in2,
vx_image out);
Parameters

[in] graph - The reference to the graph where to create the node.
[in] in1 - The first input image. Must be of type VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16.
[in] in2 - The second input image. Must be of type VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16.
[out] out - The output image, which will hold the result of min and will have the same type and dimensions as the input images.
Returns: vx_node.
Return Values

vx_node - A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuMin
[Immediate] Computes pixel-wise minimum values between two images.
vx_status vxuMin(
vx_context context,
vx_image in1,
vx_image in2,
vx_image out);
Parameters

[in] context - The reference to the overall context.
[in] in1 - The first input image. Must be of type VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16.
[in] in2 - The second input image. Must be of type VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16.
[out] out - The output image, which will hold the result of min.
Returns: A vx_status_e enumeration.
Return Values

VX_SUCCESS - Success.
* - An error occurred. See vx_status_e.
3.40. Min, Max Location
Finds the minimum and maximum values in an image and a location for each.
If the input image has several minimums/maximums, the kernel returns all of them.
Functions
3.40.1. Functions
vxMinMaxLocNode
[Graph] Creates a min,max,loc node.
vx_node vxMinMaxLocNode(
vx_graph graph,
vx_image input,
vx_scalar minVal,
vx_scalar maxVal,
vx_array minLoc,
vx_array maxLoc,
vx_scalar minCount,
vx_scalar maxCount);
Parameters

[in] graph - The reference to the graph.
[in] input - The input image in VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16 format.
[out] minVal - The minimum value in the image, which corresponds to the type of the input.
[out] maxVal - The maximum value in the image, which corresponds to the type of the input.
[out] minLoc - [optional] The minimum VX_TYPE_COORDINATES2D locations. If the input image has several minimums, the kernel will return up to the capacity of the array.
[out] maxLoc - [optional] The maximum VX_TYPE_COORDINATES2D locations. If the input image has several maximums, the kernel will return up to the capacity of the array.
[out] minCount - [optional] The total number of detected minimums in the image. Use a VX_TYPE_SIZE scalar.
[out] maxCount - [optional] The total number of detected maximums in the image. Use a VX_TYPE_SIZE scalar.
Returns: vx_node.
Return Values

vx_node - A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuMinMaxLoc
[Immediate] Computes the minimum and maximum values of the image.
vx_status vxuMinMaxLoc(
vx_context context,
vx_image input,
vx_scalar minVal,
vx_scalar maxVal,
vx_array minLoc,
vx_array maxLoc,
vx_scalar minCount,
vx_scalar maxCount);
Parameters

[in] context - The reference to the overall context.
[in] input - The input image in VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16 format.
[out] minVal - The minimum value in the image, which corresponds to the type of the input.
[out] maxVal - The maximum value in the image, which corresponds to the type of the input.
[out] minLoc - [optional] The minimum VX_TYPE_COORDINATES2D locations. If the input image has several minimums, the kernel will return up to the capacity of the array.
[out] maxLoc - [optional] The maximum VX_TYPE_COORDINATES2D locations. If the input image has several maximums, the kernel will return up to the capacity of the array.
[out] minCount - [optional] The total number of detected minimums in the image. Use a VX_TYPE_SIZE scalar.
[out] maxCount - [optional] The total number of detected maximums in the image. Use a VX_TYPE_SIZE scalar.
Returns: A vx_status_e enumeration.
Return Values

VX_SUCCESS - Success.
* - An error occurred. See vx_status_e.
3.41. Non Linear Filter
Computes a non-linear filter over a window of the input image. The output image dimensions should be the same as the dimensions of the input image.
The attribute VX_CONTEXT_NONLINEAR_MAX_DIMENSION enables the user to query the largest non-linear filter supported by the implementation of vxNonLinearFilterNode.
The implementation must support all dimensions (height or width, not necessarily the same) up to the value of this attribute.
The lowest value that must be supported for this attribute is 9.
Functions
3.41.1. Functions
vxNonLinearFilterNode
[Graph] Creates a Non-linear Filter Node.
vx_node vxNonLinearFilterNode(
vx_graph graph,
vx_enum function,
vx_image input,
vx_matrix mask,
vx_image output);
Parameters

[in] graph - The reference to the graph.
[in] function - The non-linear filter function. See vx_non_linear_filter_e.
[in] input - The input image in VX_DF_IMAGE_U8 format.
[in] mask - The mask to be applied to the non-linear function. The VX_MATRIX_ORIGIN attribute is used to place the mask appropriately when computing the resulting image. See vxCreateMatrixFromPattern.
[out] output - The output image in VX_DF_IMAGE_U8 format, which must have the same dimensions as the input image.
Returns: vx_node.
Return Values

vx_node - A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuNonLinearFilter
[Immediate] Performs Non-linear Filtering.
vx_status vxuNonLinearFilter(
vx_context context,
vx_enum function,
vx_image input,
vx_matrix mask,
vx_image output);
Parameters

[in] context - The reference to the overall context.
[in] function - The non-linear filter function. See vx_non_linear_filter_e.
[in] input - The input image in VX_DF_IMAGE_U8 format.
[in] mask - The mask to be applied to the non-linear function. The VX_MATRIX_ORIGIN attribute is used to place the mask appropriately when computing the resulting image. See vxCreateMatrixFromPattern and vxCreateMatrixFromPatternAndOrigin.
[out] output - The output image in VX_DF_IMAGE_U8 format.
Returns: A vx_status_e enumeration.
Return Values

VX_SUCCESS - Success.
* - An error occurred. See vx_status_e.
3.42. Non-Maxima Suppression
Finds local maxima in an image, or otherwise suppresses pixels that are not local maxima.
The input to the Non-Maxima Suppression kernel is either a VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16 image.
In the case of a VX_DF_IMAGE_S16 image, suppressed pixels shall take the value of INT16_MIN.
An optional mask image may be used to restrict the suppression to a region-of-interest. If a mask pixel is non-zero, then the associated pixel in the input is completely ignored and not considered during suppression; that is, it is not suppressed and not considered as part of any suppression window.
A pixel with coordinates (x,y) is kept if and only if it is greater than or equal to its top-left neighbours and greater than its bottom-right neighbours. For example, for a window size of 3, P(x,y) is retained if the following condition holds:

\(P(x,y) \ge P(x-1,y-1) \wedge P(x,y) \ge P(x,y-1) \wedge P(x,y) \ge P(x+1,y-1) \wedge P(x,y) \ge P(x-1,y) \wedge P(x,y) > P(x+1,y) \wedge P(x,y) > P(x-1,y+1) \wedge P(x,y) > P(x,y+1) \wedge P(x,y) > P(x+1,y+1)\)
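The keep condition above can be sketched in plain C (illustrative only; the helper name and flat-buffer layout are assumptions, not the OpenVX API). Using >= for top-left neighbours and strict > for bottom-right neighbours guarantees that exactly one pixel of a flat plateau survives:

```c
#include <assert.h>
#include <stdint.h>
#include <stdbool.h>

/* Illustrative sketch (not the OpenVX API): the 3x3 keep condition
 * for non-maxima suppression. Top-left neighbours are compared with
 * >=, bottom-right neighbours with strict >. Border handling and the
 * optional mask are omitted. */
static bool nms_keep_3x3(const int16_t *img, int stride, int x, int y)
{
    int16_t p = img[y * stride + x];
    for (int j = -1; j <= 1; ++j)
        for (int i = -1; i <= 1; ++i) {
            if (i == 0 && j == 0) continue;
            int16_t q = img[(y + j) * stride + (x + i)];
            bool top_left = (j < 0) || (j == 0 && i < 0);
            if (top_left ? (p < q) : (p <= q))
                return false; /* not a local maximum: suppress */
        }
    return true;
}
```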
Functions
3.42.1. Functions
vxNonMaxSuppressionNode
[Graph] Creates a Non-Maxima Suppression node.
vx_node vxNonMaxSuppressionNode(
vx_graph graph,
vx_image input,
vx_image mask,
vx_int32 win_size,
vx_image output);
Parameters

[in] graph - The reference to the graph.
[in] input - The input image in VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16 format.
[in] mask - [optional] Constricts suppression to a ROI. The mask image is of type VX_DF_IMAGE_U8 and must be the same dimensions as the input image.
[in] win_size - The size of window over which to perform the localized non-maxima suppression. Must be odd, and less than or equal to the smallest dimension of the input image.
[out] output - The output image, of the same type and size as the input, that has been non-maxima suppressed.
Returns: vx_node.
Return Values

vx_node - A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuNonMaxSuppression
[Immediate] Performs Non-Maxima Suppression on an image, producing an image of the same type.
vx_status vxuNonMaxSuppression(
vx_context context,
vx_image input,
vx_image mask,
vx_int32 win_size,
vx_image output);
Parameters

[in] context - The reference to the overall context.
[in] input - The input image in VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16 format.
[in] mask - [optional] Constricts suppression to a ROI. The mask image is of type VX_DF_IMAGE_U8 and must be the same dimensions as the input image.
[in] win_size - The size of window over which to perform the localized non-maxima suppression. Must be odd, and less than or equal to the smallest dimension of the input image.
[out] output - The output image, of the same type as the input, that has been non-maxima suppressed.
Returns: A vx_status_e enumeration.
Return Values

VX_SUCCESS - Success.
* - An error occurred. See vx_status_e.
3.43. Optical Flow Pyramid (LK)
Computes the optical flow using the Lucas-Kanade method between two pyramid images.
The function is an implementation of the algorithm described in
[Bouguet2000].
The function inputs are two vx_pyramid objects, old and new, along with a vx_array of vx_keypoint_t structs to track from the old vx_pyramid.
Both the old and the new pyramids must have the same dimensionality.
VX_SCALE_PYRAMID_HALF pyramidal scaling must be supported.
The function outputs a vx_array of vx_keypoint_t structs that were tracked from the old vx_pyramid to the new vx_pyramid.
Each element in the vx_array of vx_keypoint_t structs in the new array may be valid or not.
The implementation shall return the same number of vx_keypoint_t structs in the new vx_array that were in the older vx_array.
In more detail: the Lucas-Kanade method finds the affine motion vector V for each point in the old image tracking points array, using the following equation:

\(\begin{bmatrix} V_x \\ V_y \end{bmatrix} = \begin{bmatrix} \sum_{i} I_x(q_i)^2 & \sum_{i} I_x(q_i) I_y(q_i) \\ \sum_{i} I_x(q_i) I_y(q_i) & \sum_{i} I_y(q_i)^2 \end{bmatrix}^{-1} \begin{bmatrix} -\sum_{i} I_x(q_i) I_t(q_i) \\ -\sum_{i} I_y(q_i) I_t(q_i) \end{bmatrix}\)

where the sums run over the points q_i in the window around p(x,y), and I_{x} and I_{y} are obtained using the Scharr gradients on the input image.
I_{t} is obtained by a simple difference between the same pixel in both images. I is defined as the adjacent pixels to the point p(x,y) under consideration. With a given window size of M, I is M^{2} points. The pixel p(x,y) is centered in the window. In practice, to get an accurate solution, it is necessary to iterate multiple times on this scheme (in a Newton-Raphson fashion) until:

the residual of the affine motion vector is smaller than a threshold,

and/or the maximum number of iterations is achieved.
Each iteration, the estimation of the previous iteration is used by changing I_{t} to be the difference between the old image and the pixel with the estimated coordinates in the new image.
Each iteration, the function checks whether the pixel to track was lost.
The criteria for lost tracking are that the matrix above is not invertible (the determinant of the matrix is less than a threshold: 10^{-7}), or that the minimum eigenvalue of the matrix is smaller than a threshold (10^{-4}).
Tracking is also lost when the tracked point's coordinates fall outside the image coordinates.
When vx_true_e is given as the input to use_initial_estimate, the algorithm starts by calculating I_{t} as the difference between the old image and the pixel with the initial estimated coordinates in the new image.
The input vx_array of vx_keypoint_t structs with tracking_status set to zero (lost) are copied to the new vx_array.
Clients are responsible for editing the output vx_array of vx_keypoint_t structs before applying it as the input vx_array of vx_keypoint_t structs for the next frame.
For example, vx_keypoint_t structs with tracking_status set to zero may be removed by a client for efficiency.
This function changes just the x, y, and tracking_status members of the vx_keypoint_t structure and behaves as if it copied the rest from the old tracking vx_keypoint_t to the new image vx_keypoint_t.
Functions
3.43.1. Functions
vxOpticalFlowPyrLKNode
[Graph] Creates a Lucas-Kanade Tracking Node.
vx_node vxOpticalFlowPyrLKNode(
vx_graph graph,
vx_pyramid old_images,
vx_pyramid new_images,
vx_array old_points,
vx_array new_points_estimates,
vx_array new_points,
vx_enum termination,
vx_scalar epsilon,
vx_scalar num_iterations,
vx_scalar use_initial_estimate,
vx_size window_dimension);
Parameters

[in] graph - The reference to the graph.
[in] old_images - Input of first (old) image pyramid in VX_DF_IMAGE_U8.
[in] new_images - Input of destination (new) image pyramid in VX_DF_IMAGE_U8.
[in] old_points - An array of key points in a vx_array of VX_TYPE_KEYPOINT; those key points are defined at the old_images high resolution pyramid.
[in] new_points_estimates - An array of estimation on what is the output key points in a vx_array of VX_TYPE_KEYPOINT; those keypoints are defined at the new_images high resolution pyramid.
[out] new_points - An output array of key points in a vx_array of VX_TYPE_KEYPOINT; those key points are defined at the new_images high resolution pyramid.
[in] termination - The termination can be VX_TERM_CRITERIA_ITERATIONS or VX_TERM_CRITERIA_EPSILON or VX_TERM_CRITERIA_BOTH.
[in] epsilon - The vx_float32 error for terminating the algorithm.
[in] num_iterations - The number of iterations. Use a VX_TYPE_UINT32 scalar.
[in] use_initial_estimate - Use a VX_TYPE_BOOL scalar.
[in] window_dimension - The size of the window on which to perform the algorithm. See VX_CONTEXT_OPTICAL_FLOW_MAX_WINDOW_DIMENSION.
Returns: vx_node.
Return Values

vx_node - A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuOpticalFlowPyrLK
[Immediate] Computes an optical flow on two images.
vx_status vxuOpticalFlowPyrLK(
vx_context context,
vx_pyramid old_images,
vx_pyramid new_images,
vx_array old_points,
vx_array new_points_estimates,
vx_array new_points,
vx_enum termination,
vx_scalar epsilon,
vx_scalar num_iterations,
vx_scalar use_initial_estimate,
vx_size window_dimension);
Parameters

[in] context - The reference to the overall context.
[in] old_images - Input of first (old) image pyramid in VX_DF_IMAGE_U8.
[in] new_images - Input of destination (new) image pyramid in VX_DF_IMAGE_U8.
[in] old_points - An array of key points in a vx_array of VX_TYPE_KEYPOINT; those key points are defined at the old_images high resolution pyramid.
[in] new_points_estimates - An array of estimation on what is the output key points in a vx_array of VX_TYPE_KEYPOINT; those keypoints are defined at the new_images high resolution pyramid.
[out] new_points - An output array of key points in a vx_array of VX_TYPE_KEYPOINT; those key points are defined at the new_images high resolution pyramid.
[in] termination - The termination can be VX_TERM_CRITERIA_ITERATIONS or VX_TERM_CRITERIA_EPSILON or VX_TERM_CRITERIA_BOTH.
[in] epsilon - The vx_float32 error for terminating the algorithm.
[in] num_iterations - The number of iterations. Use a VX_TYPE_UINT32 scalar.
[in] use_initial_estimate - Can be set to either vx_false_e or vx_true_e.
[in] window_dimension - The size of the window on which to perform the algorithm. See VX_CONTEXT_OPTICAL_FLOW_MAX_WINDOW_DIMENSION.
Returns: A vx_status_e enumeration.
Return Values

VX_SUCCESS - Success.
* - An error occurred. See vx_status_e.
3.44. Phase
Implements the Gradient Phase Computation Kernel. The output image dimensions should be the same as the dimensions of the input images.
This kernel takes two gradients in VX_DF_IMAGE_S16 format, computes the angle for each pixel, and stores it in a VX_DF_IMAGE_U8 image.
ϕ = tan^{-1} (grad_y(x,y) / grad_x(x,y))
Where ϕ is then translated to 0 ≤ ϕ < 2 π. Each ϕ value is then mapped to the range 0 to 255 inclusive.
Functions
3.44.1. Functions
vxPhaseNode
[Graph] Creates a Phase node.
vx_node vxPhaseNode(
vx_graph graph,
vx_image grad_x,
vx_image grad_y,
vx_image orientation);
Parameters

[in] graph - The reference to the graph.
[in] grad_x - The input x image. This must be in VX_DF_IMAGE_S16 format.
[in] grad_y - The input y image. This must be in VX_DF_IMAGE_S16 format.
[out] orientation - The phase image. This is in VX_DF_IMAGE_U8 format, and must have the same dimensions as the input images.
See also: VX_KERNEL_PHASE
Returns: vx_node.
Return Values

vx_node - A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuPhase
[Immediate] Invokes an immediate Phase.
vx_status vxuPhase(
vx_context context,
vx_image grad_x,
vx_image grad_y,
vx_image orientation);
Parameters

[in] context - The reference to the overall context.
[in] grad_x - The input x image. This must be in VX_DF_IMAGE_S16 format.
[in] grad_y - The input y image. This must be in VX_DF_IMAGE_S16 format.
[out] orientation - The phase image. This will be in VX_DF_IMAGE_U8 format.
Returns: A vx_status_e enumeration.
Return Values

VX_SUCCESS - Success.
* - An error occurred. See vx_status_e.
3.45. Pixel-wise Multiplication
Performs element-wise multiplication between two images and a scalar value. The output image dimensions should be the same as the dimensions of the input images.
Pixel-wise multiplication is performed between the pixel values in two VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16 images and a scalar floating-point number scale.
The output image can be VX_DF_IMAGE_U8 only if both source images are VX_DF_IMAGE_U8 and the output image is explicitly set to VX_DF_IMAGE_U8.
It is otherwise VX_DF_IMAGE_S16.
If one of the input images is of type VX_DF_IMAGE_S16, all values are converted to VX_DF_IMAGE_S16.
A scale with a value of 1/2^{n}, where n is an integer and 0 ≤ n ≤ 15, and a scale of 1/255 (0x1.010102p-8 C99 float hex) must be supported.
The support for other values of scale is not prohibited.
Furthermore, for a scale with a value of 1/255 the rounding policy of VX_ROUND_POLICY_TO_NEAREST_EVEN must be supported, whereas for a scale with a value of \(\frac{1}{2^n}\) the rounding policy of VX_ROUND_POLICY_TO_ZERO must be supported.
The support of other rounding modes for any values of scale is not prohibited.
The rounding policy VX_ROUND_POLICY_TO_ZERO for this function is defined as:

reference(x,y,scale) = truncate( ( (int32_t)in_{1}(x,y)) × ( (int32_t)in_{2}(x,y)) × (double)scale)
The rounding policy VX_ROUND_POLICY_TO_NEAREST_EVEN for this function is defined as:

reference(x,y,scale) = round_to_nearest_even( ( (int32_t)in_{1}(x,y)) × ( (int32_t)in_{2}(x,y)) × (double)scale)
The overflow handling is controlled by an overflow-policy parameter. For each pixel value in the two input images:

out(x,y) = in_{1}(x,y) × in_{2}(x,y) × scale
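The VX_ROUND_POLICY_TO_ZERO formula above can be sketched for a single pixel in plain C (illustrative only, not the OpenVX API; overflow handling per vx_convert_policy_e is omitted):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch (not the OpenVX API): one pixel of the
 * multiply kernel under VX_ROUND_POLICY_TO_ZERO, i.e.
 * truncate(in1 * in2 * scale). The cast from double to int32_t
 * in C truncates toward zero, matching the formula. Saturation
 * or wrap-around (the overflow policy) is left out. */
static int32_t multiply_to_zero(int16_t in1, int16_t in2, double scale)
{
    return (int32_t)(((int32_t)in1 * (int32_t)in2) * scale);
}
```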
Functions
3.45.1. Functions
vxMultiplyNode
[Graph] Creates a pixel-wise multiplication node.
vx_node vxMultiplyNode(
vx_graph graph,
vx_image in1,
vx_image in2,
vx_scalar scale,
vx_enum overflow_policy,
vx_enum rounding_policy,
vx_image out);
Parameters

[in] graph - The reference to the graph.
[in] in1 - An input image, VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16.
[in] in2 - An input image, VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16.
[in] scale - A non-negative VX_TYPE_FLOAT32 multiplied to each product before overflow handling.
[in] overflow_policy - A VX_TYPE_ENUM of the vx_convert_policy_e enumeration.
[in] rounding_policy - A VX_TYPE_ENUM of the vx_round_policy_e enumeration.
[out] out - The output image, a VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16 image. Must have the same type and dimensions as the input images.
Returns: vx_node.
Return Values

vx_node - A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuMultiply
[Immediate] Performs element-wise multiplication on pixel values in the input images and a scale.
vx_status vxuMultiply(
vx_context context,
vx_image in1,
vx_image in2,
vx_float32 scale,
vx_enum overflow_policy,
vx_enum rounding_policy,
vx_image out);
Parameters

[in] context - The reference to the overall context.
[in] in1 - A VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16 input image.
[in] in2 - A VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16 input image.
[in] scale - A non-negative VX_TYPE_FLOAT32 multiplied to each product before overflow handling.
[in] overflow_policy - A vx_convert_policy_e enumeration.
[in] rounding_policy - A vx_round_policy_e enumeration.
[out] out - The output image in VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16 format.
Returns: A vx_status_e enumeration.
Return Values

VX_SUCCESS - Success.
* - An error occurred. See vx_status_e.
3.46. Reconstruction from a Laplacian Image Pyramid
Reconstructs the original image from a Laplacian Image Pyramid.
This vision function reconstructs the image of the highest possible resolution from a Laplacian pyramid. The upscaled input image is added to the last level of the Laplacian pyramid L_{N-1}:

I_{N-1} = UpSample(input) + L_{N-1}
For the definition of the UpSample function please see vxLaplacianPyramidNode.
Correspondingly, for each pyramid level i = 0 … N-2:

I_{i} = UpSample(I_{i+1}) + L_{i}
Finally, the output image is:

output = I_{0}
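The recurrence above can be sketched on 1-D signals in plain C (illustrative only, not the OpenVX API; a nearest-neighbor UpSample stands in for the spec's Gaussian-pyramid UpSample, see vxLaplacianPyramidNode):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in UpSample (illustrative only): nearest-neighbor 2x
 * upsampling of a 1-D signal. The spec's UpSample applies the
 * Gaussian pyramid filter instead. */
static void upsample2_nearest(const int *in, size_t n, int *out)
{
    for (size_t i = 0; i < n; ++i)
        out[2 * i] = out[2 * i + 1] = in[i];
}

/* One step of the reconstruction recurrence
 * I_i = UpSample(I_{i+1}) + L_i. */
static void reconstruct_level(const int *coarse, const int *lap,
                              size_t coarse_n, int *fine)
{
    upsample2_nearest(coarse, coarse_n, fine);
    for (size_t i = 0; i < 2 * coarse_n; ++i)
        fine[i] += lap[i]; /* add the Laplacian detail back in */
}
```

Applying reconstruct_level from the coarsest level up to level 0 yields the output image I_{0}.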
Functions
3.46.1. Functions
vxLaplacianReconstructNode
[Graph] Reconstructs an image from a Laplacian Image pyramid.
vx_node vxLaplacianReconstructNode(
vx_graph graph,
vx_pyramid laplacian,
vx_image input,
vx_image output);
Parameters

[in] graph - The reference to the graph.
[in] laplacian - The Laplacian pyramid with VX_DF_IMAGE_S16 format.
[in] input - The lowest resolution image in VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16 format for the Laplacian pyramid.
[out] output - The output image in VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16 format with the highest possible resolution reconstructed from the Laplacian pyramid. The output image format should be the same as the input image format.
See also: Object: Pyramid
Returns: vx_node.
Return Values

0 - Node could not be created.
* - Node handle.
vxuLaplacianReconstruct
[Immediate] Reconstructs an image from a Laplacian Image pyramid.
vx_status vxuLaplacianReconstruct(
vx_context context,
vx_pyramid laplacian,
vx_image input,
vx_image output);
Parameters

[in] context - The reference to the overall context.
[in] laplacian - The Laplacian pyramid with VX_DF_IMAGE_S16 format.
[in] input - The lowest resolution image in VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16 format for the Laplacian pyramid.
[out] output - The output image in VX_DF_IMAGE_U8 or VX_DF_IMAGE_S16 format with the highest possible resolution reconstructed from the Laplacian pyramid. The output image format should be the same as the input image format.
See also: Object: Pyramid
Returns: A vx_status_e enumeration.
Return Values

VX_SUCCESS - Success.
* - An error occurred. See vx_status_e.
3.47. Remap
Maps output pixels in an image from input pixels in an image.
Remap takes a remap table object vx_remap to map a set of output pixels back to source input pixels.
A remap is typically defined as:

output(x,y) = input(mapx(x,y),mapy(x,y))
for every (x,y) in the destination image
However, the mapping functions are contained in the vx_remap object.
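The remap definition above can be sketched in plain C with nearest-neighbor sampling (illustrative only, not the OpenVX API; the flat map arrays here stand in for the vx_remap object, and every sample is assumed to land inside the input image):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch (not the OpenVX API): applies
 * output(x,y) = input(mapx(x,y), mapy(x,y)) with nearest-neighbor
 * sampling. Border handling is omitted: every (mapx, mapy) pair is
 * assumed to round to a valid input coordinate. */
static void remap_nearest(const uint8_t *input, int in_stride,
                          const float *mapx, const float *mapy,
                          uint8_t *output, int out_w, int out_h)
{
    for (int y = 0; y < out_h; ++y)
        for (int x = 0; x < out_w; ++x) {
            int sx = (int)(mapx[y * out_w + x] + 0.5f); /* round to nearest */
            int sy = (int)(mapy[y * out_w + x] + 0.5f);
            output[y * out_w + x] = input[sy * in_stride + sx];
        }
}
```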
Functions
3.47.1. Functions
vxRemapNode
[Graph] Creates a Remap Node.
vx_node vxRemapNode(
vx_graph graph,
vx_image input,
vx_remap table,
vx_enum policy,
vx_image output);
Parameters

[in] graph - The reference to the graph that will contain the node.
[in] input - The input VX_DF_IMAGE_U8 image.
[in] table - The remap table object.
[in] policy - An interpolation type from vx_interpolation_type_e. VX_INTERPOLATION_AREA is not supported.
[out] output - The output VX_DF_IMAGE_U8 image with the same dimensions as the input image.
Note
The border modes VX_BORDER_UNDEFINED and VX_BORDER_CONSTANT are supported.
Returns: A vx_node.
Return Values

vx_node - A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuRemap
[Immediate] Remaps an output image from an input image.
vx_status vxuRemap(
vx_context context,
vx_image input,
vx_remap table,
vx_enum policy,
vx_image output);
Parameters

[in] context - The reference to the overall context.
[in] input - The input VX_DF_IMAGE_U8 image.
[in] table - The remap table object.
[in] policy - The interpolation policy from vx_interpolation_type_e. VX_INTERPOLATION_AREA is not supported.
[out] output - The output VX_DF_IMAGE_U8 image.
Returns: A vx_status_e enumeration.
3.48. Scale Image
Implements the Image Resizing Kernel.
This kernel resizes an image from the source to the destination dimensions. The supported interpolation types are currently:

VX_INTERPOLATION_NEAREST_NEIGHBOR
VX_INTERPOLATION_BILINEAR
VX_INTERPOLATION_AREA
The sample positions used to determine output pixel values are generated by
scaling the outside edges of the source image pixels to the outside edges of
the destination image pixels.
As described in the documentation for vx_interpolation_type_e
, samples
are taken at pixel centers.
This means that, unless the scale is 1:1, the sample position for the top
left destination pixel typically does not fall exactly on the top left
source pixel but will be generated by interpolation.
That is, the sample positions corresponding in source and destination are defined by the following equations:

x_{input} = ((x_{output} + 0.5) × (width_{input} / width_{output})) − 0.5

y_{input} = ((y_{output} + 0.5) × (height_{input} / height_{output})) − 0.5

x_{output} = ((x_{input} + 0.5) × (width_{output} / width_{input})) − 0.5

y_{output} = ((y_{input} + 0.5) × (height_{output} / height_{input})) − 0.5

For
VX_INTERPOLATION_NEAREST_NEIGHBOR
, the output value is that of the pixel whose centre is closest to the sample point. 
For
VX_INTERPOLATION_BILINEAR
, the output value is formed by a weighted average of the nearest source pixels to the sample point. That is:
x_{lower} = floor(x_{input})

y_{lower} = floor(y_{input})

s = x_{input} − x_{lower}

t = y_{input} − y_{lower}

output(x_{input},y_{input}) = (1−s)(1−t) × input(x_{lower},y_{lower}) + s(1−t) × input(x_{lower}+1,y_{lower}) + (1−s)t × input(x_{lower},y_{lower}+1) + s × t × input(x_{lower}+1,y_{lower}+1)


For
VX_INTERPOLATION_AREA
, the implementation is expected to generate each output pixel by sampling all the source pixels that are at least partly covered by the area bounded by:

x_{output} × (width_{input} / width_{output}) ≤ x < (x_{output} + 1) × (width_{input} / width_{output})

and

y_{output} × (height_{input} / height_{output}) ≤ y < (y_{output} + 1) × (height_{input} / height_{output})

The details of this sampling method are implementation-defined. The implementation should perform enough sampling to avoid aliasing, but there is no requirement that the sample areas for adjacent output pixels be disjoint, nor that the pixels be weighted evenly.
The above diagram shows three sampling methods used to shrink a 7x3 image to 3x1.
The topmost image pair shows nearest-neighbor sampling, with crosses on the left image marking the sample positions in the source that are used to generate the output image on the right. As the pixel centre closest to the sample position is white in all cases, the resulting 3x1 image is white.
The middle image pair shows bilinear sampling, with black squares on the left image showing the region in the source being sampled to generate each pixel on the destination image on the right. This sample area is always the size of an input pixel. The outer destination pixels partly sample from the outermost green pixels, so their resulting value is a weighted average of white and green.
The bottom image pair shows area sampling. The black rectangles in the source image on the left show the bounds of the projection of the destination pixels onto the source. The destination pixels on the right are formed by averaging at least those source pixels whose areas are wholly or partly contained within those rectangles. The manner of this averaging is implementation-defined; the example shown here weights the contribution of each source pixel by the amount of that pixel’s area contained within the black rectangle.
Functions
3.48.1. Functions
vxHalfScaleGaussianNode
[Graph] Performs a Gaussian Blur on an image then half-scales it. The interpolation mode used is nearest-neighbor.
vx_node vxHalfScaleGaussianNode(
vx_graph graph,
vx_image input,
vx_image output,
vx_int32 kernel_size);
The output image size is determined by:

W_{output} = (W_{input} + 1) / 2

H_{output} = (H_{input} + 1) / 2
Parameters

[in]
graph  The reference to the graph. 
[in]
input  The input VX_DF_IMAGE_U8
image. 
[out]
output  The output VX_DF_IMAGE_U8
image. 
[in]
kernel_size  The input size of the Gaussian filter. Supported values are 1, 3 and 5.
Returns: vx_node
.
Return Values

vx_node
 A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus
.
vxScaleImageNode
[Graph] Creates a Scale Image Node.
vx_node vxScaleImageNode(
vx_graph graph,
vx_image src,
vx_image dst,
vx_enum type);
Parameters

[in]
graph  The reference to the graph. 
[in]
src  The source image of type VX_DF_IMAGE_U8
. 
[out]
dst  The destination image of type VX_DF_IMAGE_U8
. 
[in]
type  The interpolation type to use.
See also: vx_interpolation_type_e
.
Note
The destination image must have a defined size and format.
The border modes 
Returns: vx_node
.
Return Values

vx_node
 A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus
.
vxuHalfScaleGaussian
[Immediate] Performs a Gaussian Blur on an image then half-scales it. The interpolation mode used is nearest-neighbor.
vx_status vxuHalfScaleGaussian(
vx_context context,
vx_image input,
vx_image output,
vx_int32 kernel_size);
Parameters

[in]
context  The reference to the overall context. 
[in]
input  The input VX_DF_IMAGE_U8
image. 
[out]
output  The output VX_DF_IMAGE_U8
image. 
[in]
kernel_size  The input size of the Gaussian filter. Supported values are 1, 3 and 5.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
vxuScaleImage
[Immediate] Scales an input image to an output image.
vx_status vxuScaleImage(
vx_context context,
vx_image src,
vx_image dst,
vx_enum type);
Parameters

[in]
context  The reference to the overall context. 
[in]
src  The source image of type VX_DF_IMAGE_U8
. 
[out]
dst  The destination image of type VX_DF_IMAGE_U8
. 
[in]
type  The interpolation type.
See also: vx_interpolation_type_e
.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.49. Sobel 3x3
Implements the Sobel Image Filter Kernel. The output image dimensions should be the same as the dimensions of the input image.
This kernel produces two output planes (one can be omitted) in the x and y plane. The Sobel operators G_{x} and G_{y} are defined as:

G_{x} = [ −1 0 +1 ; −2 0 +2 ; −1 0 +1 ],  G_{y} = [ −1 −2 −1 ; 0 0 0 ; +1 +2 +1 ]
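As a point of reference, applying the two 3x3 Sobel operators at one interior pixel can be sketched in plain C. This is an illustration of the convolution only; the real kernel also handles borders and writes whole VX_DF_IMAGE_S16 planes.

```c
/* Apply the 3x3 Sobel operators at interior pixel (x, y) of a
 * row-major uint8 image of width w, producing signed 16-bit gradients. */
static void sobel3x3_at(const unsigned char *img, int w, int x, int y,
                        short *gx, short *gy)
{
    static const int kx[3][3] = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    static const int ky[3][3] = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};
    int sx = 0, sy = 0;
    for (int j = -1; j <= 1; ++j)
        for (int i = -1; i <= 1; ++i) {
            int p = img[(y + j) * w + (x + i)];
            sx += kx[j + 1][i + 1] * p;
            sy += ky[j + 1][i + 1] * p;
        }
    *gx = (short)sx;
    *gy = (short)sy;
}
```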
Functions
3.49.1. Functions
vxSobel3x3Node
[Graph] Creates a Sobel3x3 node.
vx_node vxSobel3x3Node(
vx_graph graph,
vx_image input,
vx_image output_x,
vx_image output_y);
Parameters

[in]
graph  The reference to the graph. 
[in]
input  The input image in VX_DF_IMAGE_U8
format. 
[out]
output_x  [optional] The output gradient in the x direction in VX_DF_IMAGE_S16
. Must have the same dimensions as the input image. 
[out]
output_y  [optional] The output gradient in the y direction in VX_DF_IMAGE_S16
. Must have the same dimensions as the input image.
See also: VX_KERNEL_SOBEL_3x3
Returns: vx_node
.
Return Values

vx_node
 A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus
.
vxuSobel3x3
[Immediate] Invokes an immediate Sobel 3x3.
vx_status vxuSobel3x3(
vx_context context,
vx_image input,
vx_image output_x,
vx_image output_y);
Parameters

[in]
context  The reference to the overall context. 
[in]
input  The input image in VX_DF_IMAGE_U8
format. 
[out]
output_x  [optional] The output gradient in the x direction in VX_DF_IMAGE_S16
. 
[out]
output_y  [optional] The output gradient in the y direction in VX_DF_IMAGE_S16
.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.50. TableLookup
Implements the Table Lookup Image Kernel. The output image dimensions should be the same as the dimensions of the input image.
This kernel uses each pixel in an image to index into a LUT and put the
indexed LUT value into the output image.
The formats supported are VX_DF_IMAGE_U8
and VX_DF_IMAGE_S16
.
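The lookup rule can be sketched in plain C for the signed case. This stands in for the vx_lut object: the `offset` parameter here mirrors the role of the VX_LUT_OFFSET attribute (an assumption about how signed pixels are mapped to table indices), and out-of-range indices, undefined in the spec, are simply skipped.

```c
#include <stddef.h>

/* output[i] = lut[input[i] + offset] for n VX_DF_IMAGE_S16 pixels. */
static void table_lookup_s16(const short *input, size_t n,
                             const short *lut, size_t lut_count,
                             size_t offset, short *output)
{
    for (size_t i = 0; i < n; ++i) {
        long idx = (long)input[i] + (long)offset;
        /* The spec leaves out-of-table inputs undefined; skip them here. */
        if (idx >= 0 && (size_t)idx < lut_count)
            output[i] = lut[idx];
    }
}
```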
Functions
3.50.1. Functions
vxTableLookupNode
[Graph] Creates a Table Lookup node. If a value from the input image is not present in the lookup table, the result is undefined.
vx_node vxTableLookupNode(
vx_graph graph,
vx_image input,
vx_lut lut,
vx_image output);
Parameters

[in]
graph  The reference to the graph. 
[in]
input  The input image in VX_DF_IMAGE_U8
or VX_DF_IMAGE_S16
. 
[in]
lut  The LUT which is of type VX_TYPE_UINT8
if the input image is VX_DF_IMAGE_U8
or VX_TYPE_INT16
if the input image is VX_DF_IMAGE_S16
. 
[out]
output  The output image of the same type and size as the input image.
Returns: vx_node
.
Return Values

vx_node
 A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus
.
vxuTableLookup
[Immediate] Processes the image through the LUT.
vx_status vxuTableLookup(
vx_context context,
vx_image input,
vx_lut lut,
vx_image output);
Parameters

[in]
context  The reference to the overall context. 
[in]
input  The input image in VX_DF_IMAGE_U8
or VX_DF_IMAGE_S16
. 
[in]
lut  The LUT which is of type VX_TYPE_UINT8
if the input image is VX_DF_IMAGE_U8
or VX_TYPE_INT16
if the input image is VX_DF_IMAGE_S16
. 
[out]
output  The output image of the same type as the input image.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.51. Tensor Add
Performs arithmetic addition on element values in the input tensor data.
Functions
3.51.1. Functions
vxTensorAddNode
[Graph] Performs arithmetic addition on element values in the input tensor data.
vx_node vxTensorAddNode(
vx_graph graph,
vx_tensor input1,
vx_tensor input2,
vx_enum policy,
vx_tensor output);
Parameters

[in]
graph  The handle to the graph. 
[in]
input1  Input tensor data. Implementations must support input tensor data type VX_TYPE_INT16
with fixed_point_position 8, and tensor data types VX_TYPE_UINT8
and VX_TYPE_INT8
, with fixed_point_position 0. 
[in]
input2  Input tensor data. The dimensions and sizes of input2 match those of input1, unless the size of one or more dimensions in input2 is 1. In this case, those dimensions are treated as if this tensor was expanded to match the size of the corresponding dimension of input1, and data was duplicated on all terms in that dimension. After this expansion, the dimensions will be equal. The data type must match the data type of input1. 
[in]
policy  A vx_convert_policy_e
enumeration. 
[out]
output  The output tensor data with the same dimensions as the input tensor data.
Returns: A node reference vx_node
.
Any possible errors preventing a successful creation should be checked using
vxGetStatus
.
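The dimension-size-1 broadcast rule and saturating policy described above can be sketched in plain C for a single dimension. This is a one-dimensional illustration only, assuming VX_TYPE_INT16 data and VX_CONVERT_POLICY_SATURATE; the real kernel applies the same rule per dimension of a vx_tensor.

```c
#include <stddef.h>

/* Element-wise saturating int16 add; in2 is broadcast when n2 == 1. */
static void tensor_add_i16_sat(const short *in1, size_t n1,
                               const short *in2, size_t n2, /* n1 or 1 */
                               short *out)
{
    for (size_t i = 0; i < n1; ++i) {
        int sum = (int)in1[i] + (int)in2[n2 == 1 ? 0 : i];
        if (sum > 32767) sum = 32767;   /* saturate high */
        if (sum < -32768) sum = -32768; /* saturate low */
        out[i] = (short)sum;
    }
}
```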
vxuTensorAdd
[Immediate] Performs arithmetic addition on element values in the input tensor data.
vx_status vxuTensorAdd(
vx_context context,
vx_tensor input1,
vx_tensor input2,
vx_enum policy,
vx_tensor output);
Parameters

[in]
context  The reference to the overall context. 
[in]
input1  Input tensor data. Implementations must support input tensor data type VX_TYPE_INT16
with fixed_point_position 8, and tensor data types VX_TYPE_UINT8
and VX_TYPE_INT8
, with fixed_point_position 0. 
[in]
input2  Input tensor data. The dimensions and sizes of input2 match those of input1, unless the size of one or more dimensions in input2 is 1. In this case, those dimensions are treated as if this tensor was expanded to match the size of the corresponding dimension of input1, and data was duplicated on all terms in that dimension. After this expansion, the dimensions will be equal. The data type must match the data type of input1. 
[in]
policy  A vx_convert_policy_e
enumeration. 
[out]
output  The output tensor data with the same dimensions as the input tensor data.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.52. Tensor Convert Bit-Depth
Creates a bit-depth conversion node.
Converts a tensor from a specific data type and fixed point position to another data type and fixed point position. The equation for the conversion is as follows:

\(output = \frac{\left(\frac{input}{2^{input\_fixed\_point\_position}} - offset\right)}{norm} \times 2^{output\_fixed\_point\_position}\)
Where offset and norm are the input parameters in vx_float32
.
input_fixed_point_position and output_fixed_point_position are the fixed
point positions of the input and output respectively.
In case the input or output tensor is of type VX_TYPE_FLOAT32
, a fixed point
position of 0 is used.
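For one concrete case, the conversion equation can be sketched in plain C for a VX_TYPE_INT16 input with fixed_point_position 8 converted to VX_TYPE_INT8 with fixed_point_position 0. Saturation is assumed for the convert policy; rounding is plain C truncation, which is one of several policies an implementation might apply.

```c
/* output = ((in / 2^8) - offset) / norm * 2^0, saturated to int8. */
static signed char convert_depth(short in, float offset, float norm)
{
    float v = ((in / 256.0f) - offset) / norm;
    if (v > 127.0f) v = 127.0f;    /* saturate to int8 range */
    if (v < -128.0f) v = -128.0f;
    return (signed char)v;         /* truncation toward zero */
}
```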
Functions
3.52.1. Functions
vxTensorConvertDepthNode
[Graph] Creates a bit-depth conversion node.
vx_node vxTensorConvertDepthNode(
vx_graph graph,
vx_tensor input,
vx_enum policy,
vx_scalar norm,
vx_scalar offset,
vx_tensor output);
Parameters

[in]
graph  The reference to the graph. 
[in]
input  The input tensor. Implementations must support input tensor data type VX_TYPE_INT16
with fixed_point_position 8, and tensor data types VX_TYPE_UINT8
and VX_TYPE_INT8
, with fixed_point_position 0. 
[in]
policy  A VX_TYPE_ENUM
of the vx_convert_policy_e
enumeration. 
[in]
norm  A scalar containing a VX_TYPE_FLOAT32
of the normalization value. 
[in]
offset  A scalar containing a VX_TYPE_FLOAT32
of the offset value subtracted before normalization. 
[out]
output  The output tensor. Implementations must support output tensor data type VX_TYPE_INT16
with fixed_point_position 8, and VX_TYPE_UINT8
with fixed_point_position 0.
Returns: vx_node
.
Return Values

vx_node
 A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus
.
vxuTensorConvertDepth
[Immediate] Performs a bit-depth conversion.
vx_status vxuTensorConvertDepth(
vx_context context,
vx_tensor input,
vx_enum policy,
vx_scalar norm,
vx_scalar offset,
vx_tensor output);
Parameters

[in]
context  The reference to the overall context. 
[in]
input  The input tensor. Implementations must support input tensor data type VX_TYPE_INT16
with fixed_point_position 8, and tensor data types VX_TYPE_UINT8
and VX_TYPE_INT8
, with fixed_point_position 0. 
[in]
policy  A VX_TYPE_ENUM
of the vx_convert_policy_e
enumeration. 
[in]
norm  A scalar containing a VX_TYPE_FLOAT32
of the normalization value. 
[in]
offset  A scalar containing a VX_TYPE_FLOAT32
of the offset value subtracted before normalization. 
[out]
output  The output tensor. Implementations must support output tensor data type VX_TYPE_INT16
with fixed_point_position 8, and VX_TYPE_UINT8
with fixed_point_position 0.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.53. Tensor Matrix Multiply
Creates a generalized matrix multiplication node.
Performs:

output = T1(input1) × T2(input2) + T3(input3)
Where matrix multiplication is defined as:

C(i,j) = Σ_{k=1}^{M} A(i,k) × B(k,j)

where i,j are indexes from 1 to N and 1 to L respectively. The C matrix is of size NxL, the A matrix is of size NxM, and the B matrix is of size MxL. For signed integers, a fixed point calculation is performed with round, truncate and saturate according to the number of accumulator bits. Round: rounding to nearest on the fractional part. Truncate: every 32-bit multiplication result is truncated after rounding. Saturate: a saturation is performed on the accumulation and after the truncation, meaning no saturation is performed on the multiplication result.
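The operation can be sketched in plain C for float matrices. This illustrates the shapes and the transpose flags of vx_tensor_matrix_multiply_params_t only; it omits the fixed-point round/truncate/saturate behaviour described above.

```c
#include <stddef.h>

/* out = T1(a) * T2(b) + bias, with a (after optional transpose) n x m,
 * b (after optional transpose) m x l, bias n x l or NULL, row-major. */
static void matmul(const float *a, const float *b, const float *bias,
                   size_t n, size_t m, size_t l,
                   int transpose_a, int transpose_b, float *out)
{
    for (size_t i = 0; i < n; ++i)
        for (size_t j = 0; j < l; ++j) {
            float acc = bias ? bias[i * l + j] : 0.0f;
            for (size_t k = 0; k < m; ++k) {
                float av = transpose_a ? a[k * n + i] : a[i * m + k];
                float bv = transpose_b ? b[j * m + k] : b[k * l + j];
                acc += av * bv;
            }
            out[i * l + j] = acc;
        }
}
```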
Data Structures
Functions
3.53.1. Data Structures
vx_tensor_matrix_multiply_params_t
Matrix Multiply Parameters.
typedef struct _vx_tensor_matrix_multiply_params_t {
vx_bool transpose_input1;
vx_bool transpose_input2;
vx_bool transpose_input3;
} vx_tensor_matrix_multiply_params_t;

transpose_input1, transpose_input2, transpose_input3  if True, the corresponding matrix is transposed before the operation, otherwise the matrix is used as is.
3.53.2. Functions
vxTensorMatrixMultiplyNode
[Graph] Creates a generalized matrix multiplication node.
vx_node vxTensorMatrixMultiplyNode(
vx_graph graph,
vx_tensor input1,
vx_tensor input2,
vx_tensor input3,
const vx_tensor_matrix_multiply_params_t* matrix_multiply_params,
vx_tensor output);
Parameters

[in]
graph  The reference to the graph. 
[in]
input1  The first input 2D tensor of type VX_TYPE_INT16
with fixed_point_pos 8, or tensor data types VX_TYPE_UINT8
or VX_TYPE_INT8
, with fixed_point_pos 0. 
[in]
input2  The second 2D tensor. Must be in the same data type as input1. 
[in]
input3  The third 2D tensor. Must be in the same data type as input1. [optional]. 
[in]
matrix_multiply_params  Matrix multiply parameters, see vx_tensor_matrix_multiply_params_t
. 
[out]
output  The output 2D tensor. Must be in the same data type as input1. The output dimensions must agree with the formula in the description.
Returns: A node reference vx_node
.
Any possible errors preventing a successful creation should be checked using
vxGetStatus
.
vxuTensorMatrixMultiply
[Immediate] Performs a generalized matrix multiplication.
vx_status vxuTensorMatrixMultiply(
vx_context context,
vx_tensor input1,
vx_tensor input2,
vx_tensor input3,
const vx_tensor_matrix_multiply_params_t* matrix_multiply_params,
vx_tensor output);
Parameters

[in]
context  The reference to the overall context. 
[in]
input1  The first input 2D tensor of type VX_TYPE_INT16
with fixed_point_pos 8, or tensor data types VX_TYPE_UINT8
or VX_TYPE_INT8
, with fixed_point_pos 0. 
[in]
input2  The second 2D tensor. Must be in the same data type as input1. 
[in]
input3  The third 2D tensor. Must be in the same data type as input1. [optional]. 
[in]
matrix_multiply_params  Matrix multiply parameters, see vx_tensor_matrix_multiply_params_t
. 
[out]
output  The output 2D tensor. Must be in the same data type as input1. The output dimensions must agree with the formula in the description.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.54. Tensor Multiply
Performs element-wise multiplication on element values in the input tensor data with a scale.
Pixelwise multiplication is performed between the pixel values in two
tensors and a scalar floatingpoint number scale.
A scale value of 1 / 2^{n}, where n is an integer and 0
≤ n ≤ 15, and a scale value of 1/255 (0x1.010102p-8 C99 float hex) must be
supported.
The support for other values of scale is not prohibited.
Furthermore, for a scale value of 1/255 the rounding policy
VX_ROUND_POLICY_TO_NEAREST_EVEN
must be supported, whereas for a
scale value of 1 / 2^{n} the rounding policy
VX_ROUND_POLICY_TO_ZERO
must be supported.
The support of other rounding modes for any values of scale is not
prohibited.
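The required power-of-two case can be sketched in plain C for one int16 element. This assumes scale = 1/2^n with VX_ROUND_POLICY_TO_ZERO and saturation for the overflow policy; C's integer division conveniently truncates toward zero.

```c
/* One int16 product scaled by 1/2^n, rounded toward zero, saturated. */
static short mul_q_scale(short a, short b, int n)
{
    long long p = (long long)a * (long long)b; /* product fits in 32 bits */
    p = p / (1LL << n);       /* '/' truncates toward zero in C */
    if (p > 32767) p = 32767;
    if (p < -32768) p = -32768;
    return (short)p;
}
```

Note that a right shift would round toward negative infinity for negative products, which is why division is used instead.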
Functions
3.54.1. Functions
vxTensorMultiplyNode
[Graph] Performs element-wise multiplication on element values in the input tensor data with a scale.
vx_node vxTensorMultiplyNode(
vx_graph graph,
vx_tensor input1,
vx_tensor input2,
vx_scalar scale,
vx_enum overflow_policy,
vx_enum rounding_policy,
vx_tensor output);
Parameters

[in]
graph  The handle to the graph. 
[in]
input1  Input tensor data. Implementations must support input tensor data type VX_TYPE_INT16
with fixed_point_position 8, and tensor data types VX_TYPE_UINT8
and VX_TYPE_INT8
, with fixed_point_position 0. 
[in]
input2  Input tensor data. The dimensions and sizes of input2 match those of input1, unless the size of one or more dimensions in input2 is 1. In this case, those dimensions are treated as if this tensor was expanded to match the size of the corresponding dimension of input1, and data was duplicated on all terms in that dimension. After this expansion, the dimensions will be equal. The data type must match the data type of input1. 
[in]
scale  A non-negative VX_TYPE_FLOAT32
multiplied to each product before overflow handling. 
[in]
overflow_policy  A vx_convert_policy_e
enumeration. 
[in]
rounding_policy  A vx_round_policy_e
enumeration. 
[out]
output  The output tensor data with the same dimensions as the input tensor data.
Returns: A node reference vx_node
.
Any possible errors preventing a successful creation should be checked using
vxGetStatus
.
vxuTensorMultiply
[Immediate] Performs element-wise multiplication on element values in the input tensor data with a scale.
vx_status vxuTensorMultiply(
vx_context context,
vx_tensor input1,
vx_tensor input2,
vx_scalar scale,
vx_enum overflow_policy,
vx_enum rounding_policy,
vx_tensor output);
Parameters

[in]
context  The reference to the overall context. 
[in]
input1  Input tensor data. Implementations must support input tensor data type VX_TYPE_INT16
with fixed_point_position 8, and tensor data types VX_TYPE_UINT8
and VX_TYPE_INT8
, with fixed_point_position 0. 
[in]
input2  Input tensor data. The dimensions and sizes of input2 match those of input1, unless the size of one or more dimensions in input2 is 1. In this case, those dimensions are treated as if this tensor was expanded to match the size of the corresponding dimension of input1, and data was duplicated on all terms in that dimension. After this expansion, the dimensions will be equal. The data type must match the data type of input1. 
[in]
scale  A non-negative VX_TYPE_FLOAT32
multiplied to each product before overflow handling. 
[in]
overflow_policy  A vx_convert_policy_e
enumeration. 
[in]
rounding_policy  A vx_round_policy_e
enumeration. 
[out]
output  The output tensor data with the same dimensions as the input tensor data.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.55. Tensor Subtract
Performs arithmetic subtraction on element values in the input tensor data.
Functions
3.55.1. Functions
vxTensorSubtractNode
[Graph] Performs arithmetic subtraction on element values in the input tensor data.
vx_node vxTensorSubtractNode(
vx_graph graph,
vx_tensor input1,
vx_tensor input2,
vx_enum policy,
vx_tensor output);
Parameters

[in]
graph  The handle to the graph. 
[in]
input1  Input tensor data. Implementations must support input tensor data type VX_TYPE_INT16
with fixed_point_position 8, and tensor data types VX_TYPE_UINT8
and VX_TYPE_INT8
, with fixed_point_position 0. 
[in]
input2  Input tensor data. The dimensions and sizes of input2 match those of input1, unless the size of one or more dimensions in input2 is 1. In this case, those dimensions are treated as if this tensor was expanded to match the size of the corresponding dimension of input1, and data was duplicated on all terms in that dimension. After this expansion, the dimensions will be equal. The data type must match the data type of input1. 
[in]
policy  A vx_convert_policy_e
enumeration. 
[out]
output  The output tensor data with the same dimensions as the input tensor data.
Returns: A node reference vx_node
.
Any possible errors preventing a successful creation should be checked using
vxGetStatus
.
vxuTensorSubtract
[Immediate] Performs arithmetic subtraction on element values in the input tensor data.
vx_status vxuTensorSubtract(
vx_context context,
vx_tensor input1,
vx_tensor input2,
vx_enum policy,
vx_tensor output);
Parameters

[in]
context  The reference to the overall context. 
[in]
input1  Input tensor data. Implementations must support input tensor data type VX_TYPE_INT16
with fixed_point_position 8, and tensor data types VX_TYPE_UINT8
and VX_TYPE_INT8
, with fixed_point_position 0. 
[in]
input2  Input tensor data. The dimensions and sizes of input2 match those of input1, unless the size of one or more dimensions in input2 is 1. In this case, those dimensions are treated as if this tensor was expanded to match the size of the corresponding dimension of input1, and data was duplicated on all terms in that dimension. After this expansion, the dimensions will be equal. The data type must match the data type of input1. 
[in]
policy  A vx_convert_policy_e
enumeration. 
[out]
output  The output tensor data with the same dimensions as the input tensor data.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.56. Tensor TableLookUp
Performs LUT on element values in the input tensor data.
This kernel uses each element in a tensor to index into a LUT and put the
indexed LUT value into the output tensor.
The tensor types supported are VX_TYPE_UINT8
and VX_TYPE_INT16
.
Signed inputs are cast to unsigned before used as input indexes to the LUT.
Functions
3.56.1. Functions
vxTensorTableLookupNode
[Graph] Performs LUT on element values in the input tensor data.
vx_node vxTensorTableLookupNode(
vx_graph graph,
vx_tensor input1,
vx_lut lut,
vx_tensor output);
Parameters

[in]
graph  The handle to the graph. 
[in]
input1  Input tensor data. Implementations must support input tensor data type VX_TYPE_INT16
with fixed_point_position 8, and tensor data type VX_TYPE_UINT8
with fixed_point_position 0. 
[in]
lut  The lookup table to use, of type vx_lut
. The elements of input1 are treated as unsigned integers to determine an index into the lookup table. The data type of the items in the lookup table must match that of the output tensor. 
[out]
output  The output tensor data with the same dimensions as the input tensor data.
Returns: A node reference vx_node
.
Any possible errors preventing a successful creation should be checked using
vxGetStatus
.
vxuTensorTableLookup
[Immediate] Performs LUT on element values in the input tensor data.
vx_status vxuTensorTableLookup(
vx_context context,
vx_tensor input1,
vx_lut lut,
vx_tensor output);
Parameters

[in]
context  The reference to the overall context. 
[in]
input1  Input tensor data. Implementations must support input tensor data type VX_TYPE_INT16
with fixed_point_position 8, and tensor data type VX_TYPE_UINT8
with fixed_point_position 0. 
[in]
lut  The lookup table to use, of type vx_lut
. The elements of input1 are treated as unsigned integers to determine an index into the lookup table. The data type of the items in the lookup table must match that of the output tensor. 
[out]
output  The output tensor data with the same dimensions as the input tensor data.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.57. Tensor Transpose
Performs transpose on the input tensor.
Functions
3.57.1. Functions
vxTensorTransposeNode
[Graph] Performs transpose on the input tensor. The node transposes the tensor according to two specified dimension indexes in the tensor (0-based indexing).
vx_node vxTensorTransposeNode(
vx_graph graph,
vx_tensor input,
vx_tensor output,
vx_size dimension1,
vx_size dimension2);
Parameters

[in]
graph  The handle to the graph. 
[in]
input  Input tensor data. Implementations must support input tensor data type VX_TYPE_INT16
with fixed_point_position 8, and tensor data types VX_TYPE_UINT8
and VX_TYPE_INT8
, with fixed_point_position 0. 
[out]
output  The output tensor data. 
[in]
dimension1  Dimension index that is transposed with dimension2. 
[in]
dimension2  Dimension index that is transposed with dimension1.
Returns: A node reference vx_node
.
Any possible errors preventing a successful creation should be checked using
vxGetStatus
.
vxuTensorTranspose
[Immediate] Performs transpose on the input tensor. The tensor is transposed according to two specified dimension indexes in the tensor (0-based indexing).
vx_status vxuTensorTranspose(
vx_context context,
vx_tensor input,
vx_tensor output,
vx_size dimension1,
vx_size dimension2);
Parameters

[in]
context  The reference to the overall context. 
[in]
input  Input tensor data. Implementations must support input tensor data type VX_TYPE_INT16
with fixed_point_position 8, and tensor data types VX_TYPE_UINT8
and VX_TYPE_INT8
, with fixed_point_position 0. 
[out]
output  The output tensor data. 
[in]
dimension1  Dimension index that is transposed with dimension2. 
[in]
dimension2  Dimension index that is transposed with dimension1.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.58. Thresholding
Thresholds an input image and produces an output Boolean image. The output image dimensions should be the same as the dimensions of the input image.
In VX_THRESHOLD_TYPE_BINARY
, the output is determined by:

output(x,y) = true value, if input(x,y) > threshold; false value, otherwise

In VX_THRESHOLD_TYPE_RANGE
, the output is determined by:

output(x,y) = false value, if input(x,y) > upper or input(x,y) < lower; true value, otherwise

Where 'false value' and 'true value' are defined by the thresh
parameter, dependent upon the threshold output format, with default values as
discussed in the description of vxCreateThresholdForImage
, or as set by
a call to vxCopyThresholdOutput
with the thresh parameter as the
first argument.
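Both modes can be sketched per pixel in plain C. This is an illustration only: `true_val` and `false_val` stand in for the values carried by the vx_threshold object, and a uint8 input is assumed.

```c
typedef unsigned char u8;

/* VX_THRESHOLD_TYPE_BINARY: true above the threshold, false otherwise. */
static u8 threshold_binary(u8 pixel, u8 thresh, u8 true_val, u8 false_val)
{
    return pixel > thresh ? true_val : false_val;
}

/* VX_THRESHOLD_TYPE_RANGE: false outside [lower, upper], true inside. */
static u8 threshold_range(u8 pixel, u8 lower, u8 upper,
                          u8 true_val, u8 false_val)
{
    return (pixel > upper || pixel < lower) ? false_val : true_val;
}
```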
Functions
3.58.1. Functions
vxThresholdNode
[Graph] Creates a Threshold node and returns a reference to it.
vx_node vxThresholdNode(
vx_graph graph,
vx_image input,
vx_threshold thresh,
vx_image output);
Parameters

[in]
graph  The reference to the graph in which the node is created. 
[in]
input  The input image. Only images with format VX_DF_IMAGE_U8
and VX_DF_IMAGE_S16
are supported. 
[in]
thresh  The thresholding object that defines the parameters of the operation. The VX_THRESHOLD_INPUT_FORMAT
must be the same as the input image format and the VX_THRESHOLD_OUTPUT_FORMAT
must be the same as the output image format. 
[out]
output  The output image, which will contain as pixel values the true and false values defined by thresh. Only images with format VX_DF_IMAGE_U8
are supported. The dimensions are the same as the input image.
Returns: vx_node
.
Return Values

vx_node
 A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus
.
vxuThreshold
[Immediate] Thresholds an input image and produces a VX_DF_IMAGE_U8
boolean image.
vx_status vxuThreshold(
vx_context context,
vx_image input,
vx_threshold thresh,
vx_image output);
Parameters

[in]
context  The reference to the overall context. 
[in]
input  The input image. Only images with format VX_DF_IMAGE_U8
and VX_DF_IMAGE_S16
are supported. 
[in]
thresh  The thresholding object that defines the parameters of the operation. The VX_THRESHOLD_INPUT_FORMAT
must be the same as the input image format and the VX_THRESHOLD_OUTPUT_FORMAT
must be the same as the output image format. 
[out]
output  The output image, which will contain as pixel values the true and false values defined by thresh. Only images with format VX_DF_IMAGE_U8
are supported.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.59. Warp Affine
Performs an affine transform on an image.
This kernel performs an affine transform with a 2x3 Matrix M with this method of pixel coordinate translation:
This translates into the C declaration:
// x0 = a x + b y + c;
// y0 = d x + e y + f;
vx_float32 mat[3][2] = {
{a, d}, // 'x' coefficients
{b, e}, // 'y' coefficients
{c, f}, // 'offsets'
};
vx_matrix matrix = vxCreateMatrix(context, VX_TYPE_FLOAT32, 2, 3);
vxCopyMatrix(matrix, mat, VX_WRITE_ONLY, VX_MEMORY_TYPE_HOST);
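With the matrix laid out as in the declaration above, the coordinate translation the kernel applies can be sketched as a small helper. This is an illustration of the mapping only (x0 = a·x + b·y + c, y0 = d·x + e·y + f); the kernel itself then samples the input at (x0, y0) per the interpolation type.

```c
/* Transform one point with a 2x3 affine matrix stored column-major
 * as {{a, d}, {b, e}, {c, f}}. */
static void affine_point(const float mat[3][2], float x, float y,
                         float *x0, float *y0)
{
    *x0 = mat[0][0] * x + mat[1][0] * y + mat[2][0]; /* a*x + b*y + c */
    *y0 = mat[0][1] * x + mat[1][1] * y + mat[2][1]; /* d*x + e*y + f */
}
```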
Functions
3.59.1. Functions
vxWarpAffineNode
[Graph] Creates an Affine Warp Node.
vx_node vxWarpAffineNode(
vx_graph graph,
vx_image input,
vx_matrix matrix,
vx_enum type,
vx_image output);
Parameters

[in]
graph  The reference to the graph. 
[in]
input  The input VX_DF_IMAGE_U8
image. 
[in]
matrix  The affine matrix. Must be 2x3 of type VX_TYPE_FLOAT32
. 
[in]
type  The interpolation type from vx_interpolation_type_e
. VX_INTERPOLATION_AREA
is not supported. 
[out]
output  The output VX_DF_IMAGE_U8
image with the same dimensions as the input image.
Note
The border modes 
Returns: vx_node
.
Return Values

vx_node
 A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus
.
vxuWarpAffine
[Immediate] Performs an Affine warp on an image.
vx_status vxuWarpAffine(
vx_context context,
vx_image input,
vx_matrix matrix,
vx_enum type,
vx_image output);
Parameters

[in]
context  The reference to the overall context. 
[in]
input  The input VX_DF_IMAGE_U8
image. 
[in]
matrix  The affine matrix. Must be 2x3 of type VX_TYPE_FLOAT32
. 
[in]
type  The interpolation type from vx_interpolation_type_e
. VX_INTERPOLATION_AREA
is not supported. 
[out]
output  The output VX_DF_IMAGE_U8
image.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Success 
*  An error occurred. See
vx_status_e
.
3.60. Warp Perspective
Performs a perspective transform on an image.
This kernel performs a perspective transform with a 3x3 Matrix M with this method of pixel coordinate translation:
This translates into the C declaration:
// x0 = a x + b y + c;
// y0 = d x + e y + f;
// z0 = g x + h y + i;
vx_float32 mat[3][3] = {
{a, d, g}, // 'x' coefficients
{b, e, h}, // 'y' coefficients
{c, f, i}, // 'offsets'
};
vx_matrix matrix = vxCreateMatrix(context, VX_TYPE_FLOAT32, 3, 3);
vxCopyMatrix(matrix, mat, VX_WRITE_ONLY, VX_MEMORY_TYPE_HOST);
3.60.1. Functions
vxWarpPerspectiveNode
[Graph] Creates a Perspective Warp Node.
vx_node vxWarpPerspectiveNode(
vx_graph graph,
vx_image input,
vx_matrix matrix,
vx_enum type,
vx_image output);
Parameters

[in] graph - The reference to the graph.
[in] input - The input VX_DF_IMAGE_U8 image.
[in] matrix - The perspective matrix. Must be 3x3 of type VX_TYPE_FLOAT32.
[in] type - The interpolation type from vx_interpolation_type_e. VX_INTERPOLATION_AREA is not supported.
[out] output - The output VX_DF_IMAGE_U8 image with the same dimensions as the input image.
Note: The border modes VX_BORDER_UNDEFINED and VX_BORDER_CONSTANT are supported.

Returns: vx_node.

Return Values

vx_node - A node reference. Any possible errors preventing a successful creation should be checked using vxGetStatus.
vxuWarpPerspective
[Immediate] Performs a perspective warp on an image.
vx_status vxuWarpPerspective(
vx_context context,
vx_image input,
vx_matrix matrix,
vx_enum type,
vx_image output);
Parameters

[in] context - The reference to the overall context.
[in] input - The input VX_DF_IMAGE_U8 image.
[in] matrix - The perspective matrix. Must be 3x3 of type VX_TYPE_FLOAT32.
[in] type - The interpolation type from vx_interpolation_type_e. VX_INTERPOLATION_AREA is not supported.
[out] output - The output VX_DF_IMAGE_U8 image.
Returns: A vx_status_e enumeration.

Return Values

VX_SUCCESS - Success.
* - An error occurred. See vx_status_e.
4. Basic Features
The basic parts of OpenVX needed for computation.
Types in OpenVX are intended to be derived from the C99 Section 7.18 standard definition of fixed-width types.
4.1. Data Structures
4.1.1. vx_coordinates2d_t
The 2D Coordinates structure.
typedef struct _vx_coordinates2d_t {
vx_uint32 x;
vx_uint32 y;
} vx_coordinates2d_t;

x - The X coordinate.
y - The Y coordinate.
4.1.2. vx_coordinates2df_t
The floating-point 2D Coordinates structure.
typedef struct _vx_coordinates2df_t {
vx_float32 x;
vx_float32 y;
} vx_coordinates2df_t;

x - The X coordinate.
y - The Y coordinate.
4.1.3. vx_coordinates3d_t
The 3D Coordinates structure.
typedef struct _vx_coordinates3d_t {
vx_uint32 x;
vx_uint32 y;
vx_uint32 z;
} vx_coordinates3d_t;

x - The X coordinate.
y - The Y coordinate.
z - The Z coordinate.
4.1.4. vx_keypoint_t
The keypoint data structure.
typedef struct _vx_keypoint_t {
vx_int32 x;
vx_int32 y;
vx_float32 strength;
vx_float32 scale;
vx_float32 orientation;
vx_int32 tracking_status;
vx_float32 error;
} vx_keypoint_t;

x - The x coordinate.
y - The y coordinate.
strength - The strength of the keypoint. Its definition is specific to the corner detector.
scale - Initialized to 0 by corner detectors.
orientation - Initialized to 0 by corner detectors.
tracking_status - A zero indicates a lost point. Initialized to 1 by corner detectors.
error - A tracking-method-specific error. Initialized to 0 by corner detectors.
4.1.5. vx_line2d_t
The 2D line data structure.
typedef struct _vx_line2d_t {
vx_float32 start_x;
vx_float32 start_y;
vx_float32 end_x;
vx_float32 end_y;
} vx_line2d_t;

start_x - The x index of the line start.
start_y - The y index of the line start.
end_x - The x index of the line end.
end_y - The y index of the line end.
4.1.6. vx_rectangle_t
The rectangle data structure that is shared with the users. The area of the rectangle can be computed as (end_x - start_x) * (end_y - start_y).
typedef struct _vx_rectangle_t {
vx_uint32 start_x;
vx_uint32 start_y;
vx_uint32 end_x;
vx_uint32 end_y;
} vx_rectangle_t;

start_x - The Start X coordinate.
start_y - The Start Y coordinate.
end_x - The End X coordinate.
end_y - The End Y coordinate.
4.2. Macros
4.2.1. VX_ATTRIBUTE_BASE
Defines the manner in which to combine the Vendor and Object IDs to get the base value of the enumeration.
#define VX_ATTRIBUTE_BASE(vendor,object) (((vendor) << 20) | (object << 8))
4.2.2. VX_ATTRIBUTE_ID_MASK
An object’s attribute ID is within the range of [0, 2^8 - 1] (inclusive).
#define VX_ATTRIBUTE_ID_MASK (0x000000FF)
4.2.3. VX_DF_IMAGE
Converts a set of four chars into a uint32_t
container of a
VX_DF_IMAGE
code.
#define VX_DF_IMAGE(a,b,c,d) ((a) | (b << 8) | (c << 16) | (d << 24))
4.2.4. VX_ENUM_BASE
Defines the manner in which to combine the Vendor and Object IDs to get the base value of the enumeration.
#define VX_ENUM_BASE(vendor,id) (((vendor) << 20) | (id << 12))
From any enumerated value (with exceptions), the vendor, and enumeration
type should be extractable.
Those types that are exceptions are vx_vendor_id_e
, vx_type_e
,
vx_enum_e
, vx_df_image_e
, and vx_bool
.
4.2.5. VX_ENUM_MASK
A generic enumeration list can have values between [0, 2^12 - 1] (inclusive).
#define VX_ENUM_MASK (0x00000FFF)
4.2.6. VX_ENUM_TYPE
A macro to extract the enum type from an enumerated value.
#define VX_ENUM_TYPE(e) (((vx_uint32)(e) & VX_ENUM_TYPE_MASK) >> 12)
4.2.7. VX_ENUM_TYPE_MASK
A type of enumeration. The valid range is between [0, 2^8 - 1] (inclusive).
#define VX_ENUM_TYPE_MASK (0x000FF000)
4.2.8. VX_FMT_REF
Use to aid in debugging values in OpenVX.
#if defined(_WIN32) || defined(UNDER_CE)
#if defined(_WIN64)
#define VX_FMT_REF "%I64u"
#else
#define VX_FMT_REF "%lu"
#endif
#else
#define VX_FMT_REF "%p"
#endif
4.2.9. VX_FMT_SIZE
Use to aid in debugging values in OpenVX.
#if defined(_WIN32) || defined(UNDER_CE)
#if defined(_WIN64)
#define VX_FMT_SIZE "%I64u"
#else
#define VX_FMT_SIZE "%lu"
#endif
#else
#define VX_FMT_SIZE "%zu"
#endif
4.2.10. VX_KERNEL_BASE
Defines the manner in which to combine the Vendor and Library IDs to get the base value of the enumeration.
#define VX_KERNEL_BASE(vendor,lib) (((vendor) << 20) | (lib << 12))
4.2.11. VX_KERNEL_MASK
An individual kernel in a library has its own unique ID within [0, 2^12 - 1] (inclusive).
#define VX_KERNEL_MASK (0x00000FFF)
4.2.12. VX_LIBRARY
A macro to extract the kernel library enumeration from an enumerated kernel value.
#define VX_LIBRARY(e) (((vx_uint32)(e) & VX_LIBRARY_MASK) >> 12)
4.2.13. VX_LIBRARY_MASK
A library is a set of vision kernels with its own ID supplied by a vendor. The vendor defines the library ID. The range is [0, 2^8 - 1] inclusive.
#define VX_LIBRARY_MASK (0x000FF000)
4.2.14. VX_MAX_LOG_MESSAGE_LEN
Defines the length of a message buffer to copy from the log, including the trailing zero.
#define VX_MAX_LOG_MESSAGE_LEN (1024)
4.2.15. VX_SCALE_UNITY
Use to indicate the 1:1 ratio in Q22.10 format.
#define VX_SCALE_UNITY (1024u)
4.2.16. VX_TYPE
A macro to extract the type from an enumerated attribute value.
#define VX_TYPE(e) (((vx_uint32)(e) & VX_TYPE_MASK) >> 8)
4.2.17. VX_TYPE_MASK
A type mask removes the scalar/object type from the attribute. It is 3 nibbles in size and is contained between the third and second byte.
#define VX_TYPE_MASK (0x000FFF00)
See also: vx_type_e
4.2.18. VX_VENDOR
A macro to extract the vendor ID from the enumerated value.
#define VX_VENDOR(e) (((vx_uint32)(e) & VX_VENDOR_MASK) >> 20)
4.2.19. VX_VENDOR_MASK
Vendor IDs are 3 nibbles in size and are located in the upper 12 bits of the 4-byte enumeration.
#define VX_VENDOR_MASK (0xFFF00000)
4.2.20. VX_VERSION
Defines the OpenVX Version Number.
#define VX_VERSION VX_VERSION_1_2
4.2.21. VX_VERSION_1_0
Defines the predefined version number for 1.0.
#define VX_VERSION_1_0 (VX_VERSION_MAJOR(1) | VX_VERSION_MINOR(0))
4.2.22. VX_VERSION_1_1
Defines the predefined version number for 1.1.
#define VX_VERSION_1_1 (VX_VERSION_MAJOR(1) | VX_VERSION_MINOR(1))
4.2.23. VX_VERSION_1_2
Defines the predefined version number for 1.2.
#define VX_VERSION_1_2 (VX_VERSION_MAJOR(1) | VX_VERSION_MINOR(2))
4.2.24. VX_VERSION_MAJOR
Defines the major version number macro.
#define VX_VERSION_MAJOR(x) (((x) & 0xFF) << 8)
4.2.25. VX_VERSION_MINOR
Defines the minor version number macro.
#define VX_VERSION_MINOR(x) (((x) & 0xFF) << 0)
4.3. Typedefs
4.3.1. vx_bool
A formal boolean type with known fixed size.
typedef vx_enum vx_bool;
See also: vx_bool_e
4.3.2. vx_char
An 8-bit ASCII character.
typedef char vx_char;
4.3.3. vx_df_image
Used to hold a VX_DF_IMAGE
code to describe the pixel format and color
space.
typedef uint32_t vx_df_image;
4.3.4. vx_enum
Sets the standard enumeration type size to be a fixed quantity.
typedef int32_t vx_enum;
All enumerable fields must use this type as the container to enforce enumeration ranges and sizeof() operations.
4.3.5. vx_float32
A 32-bit float value.
typedef float vx_float32;
4.3.6. vx_float64
A 64-bit float value (aka double).
typedef double vx_float64;
4.3.7. vx_int16
A 16-bit signed value.
typedef int16_t vx_int16;
4.3.8. vx_int32
A 32-bit signed value.
typedef int32_t vx_int32;
4.3.9. vx_int64
A 64-bit signed value.
typedef int64_t vx_int64;
4.3.10. vx_int8
An 8-bit signed value.
typedef int8_t vx_int8;
4.3.11. vx_size
A wrapper of size_t
to keep the naming convention uniform.
typedef size_t vx_size;
4.3.12. vx_status
A formal status type with known fixed size.
typedef vx_enum vx_status;
See also: vx_status_e
4.3.13. vx_uint16
A 16-bit unsigned value.
typedef uint16_t vx_uint16;
4.3.14. vx_uint32
A 32-bit unsigned value.
typedef uint32_t vx_uint32;
4.3.15. vx_uint64
A 64-bit unsigned value.
typedef uint64_t vx_uint64;
4.3.16. vx_uint8
An 8-bit unsigned value.
typedef uint8_t vx_uint8;
4.4. Enumerations
4.4.1. vx_bool_e
A Boolean value. This allows 0 to be FALSE, as it is in C, and any nonzero to be TRUE.
enum vx_bool_e {
vx_false_e = 0,
vx_true_e = 1,
};
vx_bool ret = vx_true_e;
if (ret) printf("true!\n");
ret = vx_false_e;
if (!ret) printf("false!\n");
This would print both strings.
See also: vx_bool
Enumerator
4.4.2. vx_channel_e
The channel enumerations for channel extractions.
enum vx_channel_e {
VX_CHANNEL_0 = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_CHANNEL) + 0x0,
VX_CHANNEL_1 = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_CHANNEL) + 0x1,
VX_CHANNEL_2 = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_CHANNEL) + 0x2,
VX_CHANNEL_3 = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_CHANNEL) + 0x3,
VX_CHANNEL_R = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_CHANNEL) + 0x10,
VX_CHANNEL_G = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_CHANNEL) + 0x11,
VX_CHANNEL_B = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_CHANNEL) + 0x12,
VX_CHANNEL_A = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_CHANNEL) + 0x13,
VX_CHANNEL_Y = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_CHANNEL) + 0x14,
VX_CHANNEL_U = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_CHANNEL) + 0x15,
VX_CHANNEL_V = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_CHANNEL) + 0x16,
};
Enumerator

VX_CHANNEL_R - Use to extract the RED channel, no matter the byte or packing order.
VX_CHANNEL_G - Use to extract the GREEN channel, no matter the byte or packing order.
VX_CHANNEL_B - Use to extract the BLUE channel, no matter the byte or packing order.
VX_CHANNEL_A - Use to extract the ALPHA channel, no matter the byte or packing order.
VX_CHANNEL_Y - Use to extract the LUMA channel, no matter the byte or packing order.
VX_CHANNEL_U - Use to extract the Cb/U channel, no matter the byte or packing order.
VX_CHANNEL_V - Use to extract the Cr/V/Value channel, no matter the byte or packing order.
4.4.3. vx_convert_policy_e
The Conversion Policy Enumeration.
enum vx_convert_policy_e {
VX_CONVERT_POLICY_WRAP = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_CONVERT_POLICY) + 0x0,
VX_CONVERT_POLICY_SATURATE = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_CONVERT_POLICY) + 0x1,
};
Enumerator
4.4.4. vx_df_image_e
Based on the VX_DF_IMAGE definition.
enum vx_df_image_e {
VX_DF_IMAGE_VIRT = VX_DF_IMAGE('V','I','R','T'),
VX_DF_IMAGE_RGB = VX_DF_IMAGE('R','G','B','2'),
VX_DF_IMAGE_RGBX = VX_DF_IMAGE('R','G','B','A'),
VX_DF_IMAGE_NV12 = VX_DF_IMAGE('N','V','1','2'),
VX_DF_IMAGE_NV21 = VX_DF_IMAGE('N','V','2','1'),
VX_DF_IMAGE_UYVY = VX_DF_IMAGE('U','Y','V','Y'),
VX_DF_IMAGE_YUYV = VX_DF_IMAGE('Y','U','Y','V'),
VX_DF_IMAGE_IYUV = VX_DF_IMAGE('I','Y','U','V'),
VX_DF_IMAGE_YUV4 = VX_DF_IMAGE('Y','U','V','4'),
VX_DF_IMAGE_U8 = VX_DF_IMAGE('U','0','0','8'),
VX_DF_IMAGE_U16 = VX_DF_IMAGE('U','0','1','6'),
VX_DF_IMAGE_S16 = VX_DF_IMAGE('S','0','1','6'),
VX_DF_IMAGE_U32 = VX_DF_IMAGE('U','0','3','2'),
VX_DF_IMAGE_S32 = VX_DF_IMAGE('S','0','3','2'),
};
Enumerator

VX_DF_IMAGE_RGB - A single plane of 24-bit pixels as 3 interleaved 8-bit units of R then G then B data. This uses the BT709 full range by default.
VX_DF_IMAGE_RGBX - A single plane of 32-bit pixels as 4 interleaved 8-bit units of R then G then B data, then a don't-care byte. This uses the BT709 full range by default.
VX_DF_IMAGE_NV12 - A 2-plane YUV format of Luma (Y) and interleaved UV data at 4:2:0 sampling. This uses the BT709 full range by default.
VX_DF_IMAGE_NV21 - A 2-plane YUV format of Luma (Y) and interleaved VU data at 4:2:0 sampling. This uses the BT709 full range by default.
VX_DF_IMAGE_UYVY - A single plane of 32-bit macro pixels of U0, Y0, V0, Y1 bytes. This uses the BT709 full range by default.
VX_DF_IMAGE_YUYV - A single plane of 32-bit macro pixels of Y0, U0, Y1, V0 bytes. This uses the BT709 full range by default.
VX_DF_IMAGE_IYUV - A 3-plane format of 8-bit 4:2:0 sampled Y, U, V planes. This uses the BT709 full range by default.
VX_DF_IMAGE_YUV4 - A 3-plane format of 8-bit 4:4:4 sampled Y, U, V planes. This uses the BT709 full range by default.
VX_DF_IMAGE_U8 - A single plane of unsigned 8-bit data. The range of data is not specified, as it may be extracted from a YUV or generated.
VX_DF_IMAGE_U16 - A single plane of unsigned 16-bit data. The range of data is not specified, as it may be extracted from a YUV or generated.
VX_DF_IMAGE_S16 - A single plane of signed 16-bit data. The range of data is not specified, as it may be extracted from a YUV or generated.
VX_DF_IMAGE_U32 - A single plane of unsigned 32-bit data. The range of data is not specified, as it may be extracted from a YUV or generated.
VX_DF_IMAGE_S32 - A single plane of signed 32-bit data. The range of data is not specified, as it may be extracted from a YUV or generated.
4.4.5. vx_enum_e
The set of supported enumerations in OpenVX.
enum vx_enum_e {
VX_ENUM_DIRECTION = 0x00,
VX_ENUM_ACTION = 0x01,
VX_ENUM_HINT = 0x02,
VX_ENUM_DIRECTIVE = 0x03,
VX_ENUM_INTERPOLATION = 0x04,
VX_ENUM_OVERFLOW = 0x05,
VX_ENUM_COLOR_SPACE = 0x06,
VX_ENUM_COLOR_RANGE = 0x07,
VX_ENUM_PARAMETER_STATE = 0x08,
VX_ENUM_CHANNEL = 0x09,
VX_ENUM_CONVERT_POLICY = 0x0A,
VX_ENUM_THRESHOLD_TYPE = 0x0B,
VX_ENUM_BORDER = 0x0C,
VX_ENUM_COMPARISON = 0x0D,
VX_ENUM_MEMORY_TYPE = 0x0E,
VX_ENUM_TERM_CRITERIA = 0x0F,
VX_ENUM_NORM_TYPE = 0x10,
VX_ENUM_ACCESSOR = 0x11,
VX_ENUM_ROUND_POLICY = 0x12,
VX_ENUM_TARGET = 0x13,
VX_ENUM_BORDER_POLICY = 0x14,
VX_ENUM_GRAPH_STATE = 0x15,
VX_ENUM_NONLINEAR = 0x16,
VX_ENUM_PATTERN = 0x17,
VX_ENUM_LBP_FORMAT = 0x18,
VX_ENUM_COMP_METRIC = 0x19,
VX_ENUM_SCALAR_OPERATION = 0x20,
};
These can be extracted from enumerated values using VX_ENUM_TYPE
.
Enumerator
4.4.6. vx_interpolation_type_e
The image reconstruction filters supported by image resampling operations.
enum vx_interpolation_type_e {
VX_INTERPOLATION_NEAREST_NEIGHBOR = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_INTERPOLATION) + 0x0,
VX_INTERPOLATION_BILINEAR = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_INTERPOLATION) + 0x1,
VX_INTERPOLATION_AREA = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_INTERPOLATION) + 0x2,
};
The edge of a pixel is interpreted as being aligned to the edge of the image. The value for an output pixel is evaluated at the center of that pixel.
This means, for example, that an even enlargement by a factor of two in nearest-neighbor interpolation will replicate every source pixel into a 2x2 quad in the destination, and that an even shrink by a factor of two in bilinear interpolation will create each destination pixel by averaging a 2x2 quad of source pixels.
Samples that cross the boundary of the source image have values determined by the border mode; see vx_border_e and VX_NODE_BORDER.
See also: vxuScaleImage, vxScaleImageNode, VX_KERNEL_SCALE_IMAGE, vxuWarpAffine, vxWarpAffineNode, VX_KERNEL_WARP_AFFINE, vxuWarpPerspective, vxWarpPerspectiveNode, VX_KERNEL_WARP_PERSPECTIVE
Enumerator

VX_INTERPOLATION_NEAREST_NEIGHBOR - Output values are defined to match the source pixel whose center is nearest to the sample position.
VX_INTERPOLATION_BILINEAR - Output values are defined by bilinear interpolation between the pixels whose centers are closest to the sample position, weighted linearly by the distance of the sample from the pixel centers.
VX_INTERPOLATION_AREA - Output values are determined by averaging the source pixels whose areas fall under the area of the destination pixel, projected onto the source image.
4.4.7. vx_non_linear_filter_e
An enumeration of nonlinear filter functions.
enum vx_non_linear_filter_e {
VX_NONLINEAR_FILTER_MEDIAN = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_NONLINEAR) + 0x0,
VX_NONLINEAR_FILTER_MIN = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_NONLINEAR) + 0x1 ,
VX_NONLINEAR_FILTER_MAX = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_NONLINEAR) + 0x2,
};
Enumerator
4.4.8. vx_pattern_e
An enumeration of matrix patterns.
See vxCreateMatrixFromPattern
and
vxCreateMatrixFromPatternAndOrigin
enum vx_pattern_e {
VX_PATTERN_BOX = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_PATTERN) + 0x0,
VX_PATTERN_CROSS = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_PATTERN) + 0x1 ,
VX_PATTERN_DISK = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_PATTERN) + 0x2,
VX_PATTERN_OTHER = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_PATTERN) + 0x3,
};
Enumerator
4.4.9. vx_status_e
The enumeration of all status codes.
enum vx_status_e {
VX_STATUS_MIN = -25,
VX_ERROR_REFERENCE_NONZERO = -24,
VX_ERROR_MULTIPLE_WRITERS = -23,
VX_ERROR_GRAPH_ABANDONED = -22,
VX_ERROR_GRAPH_SCHEDULED = -21,
VX_ERROR_INVALID_SCOPE = -20,
VX_ERROR_INVALID_NODE = -19,
VX_ERROR_INVALID_GRAPH = -18,
VX_ERROR_INVALID_TYPE = -17,
VX_ERROR_INVALID_VALUE = -16,
VX_ERROR_INVALID_DIMENSION = -15,
VX_ERROR_INVALID_FORMAT = -14,
VX_ERROR_INVALID_LINK = -13,
VX_ERROR_INVALID_REFERENCE = -12,
VX_ERROR_INVALID_MODULE = -11,
VX_ERROR_INVALID_PARAMETERS = -10,
VX_ERROR_OPTIMIZED_AWAY = -9,
VX_ERROR_NO_MEMORY = -8,
VX_ERROR_NO_RESOURCES = -7,
VX_ERROR_NOT_COMPATIBLE = -6,
VX_ERROR_NOT_ALLOCATED = -5,
VX_ERROR_NOT_SUFFICIENT = -4,
VX_ERROR_NOT_SUPPORTED = -3,
VX_ERROR_NOT_IMPLEMENTED = -2,
VX_FAILURE = -1,
VX_SUCCESS = 0,
};
See also: vx_status
.
Enumerator

VX_STATUS_MIN - Indicates the lower bound of status codes in VX. Used for bounds checks only.
VX_ERROR_REFERENCE_NONZERO - Indicates that an operation did not complete due to a reference count being non-zero.
VX_ERROR_MULTIPLE_WRITERS - Indicates that the graph has more than one node outputting to the same data object. This is an invalid graph structure.
VX_ERROR_GRAPH_ABANDONED - Indicates that the graph is stopped due to an error or a callback that abandoned execution.
VX_ERROR_GRAPH_SCHEDULED - Indicates that the supplied graph already has been scheduled and may be currently executing.
VX_ERROR_INVALID_SCOPE - Indicates that the supplied parameter is from another scope and cannot be used in the current scope.
VX_ERROR_INVALID_NODE - Indicates that the supplied node could not be created.
VX_ERROR_INVALID_GRAPH - Indicates that the supplied graph has invalid connections (cycles).
VX_ERROR_INVALID_TYPE - Indicates that the supplied type parameter is incorrect.
VX_ERROR_INVALID_VALUE - Indicates that the supplied parameter has an incorrect value.
VX_ERROR_INVALID_DIMENSION - Indicates that the supplied parameter is too big or too small in dimension.
VX_ERROR_INVALID_FORMAT - Indicates that the supplied parameter is in an invalid format.
VX_ERROR_INVALID_LINK - Indicates that the link is not possible as specified. The parameters are incompatible.
VX_ERROR_INVALID_REFERENCE - Indicates that the reference provided is not valid.
VX_ERROR_INVALID_MODULE - This is returned from vxLoadKernels when the module does not contain the entry point.
VX_ERROR_INVALID_PARAMETERS - Indicates that the supplied parameter information does not match the kernel contract.
VX_ERROR_OPTIMIZED_AWAY - Indicates that the object referred to has been optimized out of existence.
VX_ERROR_NO_MEMORY - Indicates that an internal or implicit allocation failed. Typically catastrophic. After detection, deconstruct the context. See also: vxVerifyGraph.
VX_ERROR_NO_RESOURCES - Indicates that an internal or implicit resource can not be acquired (not memory). This is typically catastrophic. After detection, deconstruct the context. See also: vxVerifyGraph.
VX_ERROR_NOT_COMPATIBLE - Indicates that the attempt to link two parameters together failed due to type incompatibility.
VX_ERROR_NOT_ALLOCATED - Indicates to the system that the parameter must be allocated by the system.
VX_ERROR_NOT_SUFFICIENT - Indicates that the given graph has failed verification due to an insufficient number of required parameters, which cannot be automatically created. Typically this indicates required atomic parameters. See also: vxVerifyGraph.
VX_ERROR_NOT_SUPPORTED - Indicates that the requested set of parameters produce a configuration that cannot be supported. Refer to the supplied documentation on the configured kernels. See also: vx_kernel_e. This is also returned if a function to set an attribute is called on a read-only attribute.
VX_ERROR_NOT_IMPLEMENTED - Indicates that the requested kernel is missing. See also: vx_kernel_e, vxGetKernelByName.
VX_FAILURE - Indicates a generic error code, used when no other status code describes the error.
4.4.10. vx_target_e
The Target Enumeration.
enum vx_target_e {
VX_TARGET_ANY = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_TARGET) + 0x0000,
VX_TARGET_STRING = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_TARGET) + 0x0001,
VX_TARGET_VENDOR_BEGIN = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_TARGET) + 0x1000,
};
Enumerator
4.4.11. vx_type_e
The type enumeration lists all the known types in OpenVX.
enum vx_type_e {
VX_TYPE_INVALID = 0x000,
VX_TYPE_CHAR = 0x001,
VX_TYPE_INT8 = 0x002,
VX_TYPE_UINT8 = 0x003,
VX_TYPE_INT16 = 0x004,
VX_TYPE_UINT16 = 0x005,
VX_TYPE_INT32 = 0x006,
VX_TYPE_UINT32 = 0x007,
VX_TYPE_INT64 = 0x008,
VX_TYPE_UINT64 = 0x009,
VX_TYPE_FLOAT32 = 0x00A,
VX_TYPE_FLOAT64 = 0x00B,
VX_TYPE_ENUM = 0x00C,
VX_TYPE_SIZE = 0x00D,
VX_TYPE_DF_IMAGE = 0x00E,
VX_TYPE_FLOAT16 = 0x00F,
VX_TYPE_BOOL = 0x010,
VX_TYPE_RECTANGLE = 0x020,
VX_TYPE_KEYPOINT = 0x021,
VX_TYPE_COORDINATES2D = 0x022,
VX_TYPE_COORDINATES3D = 0x023,
VX_TYPE_COORDINATES2DF = 0x024,
VX_TYPE_HOG_PARAMS = 0x028,
VX_TYPE_HOUGH_LINES_PARAMS = 0x029,
VX_TYPE_LINE_2D = 0x02A,
VX_TYPE_TENSOR_MATRIX_MULTIPLY_PARAMS = 0x02B,
VX_TYPE_USER_STRUCT_START = 0x100,
VX_TYPE_VENDOR_STRUCT_START = 0x400,
VX_TYPE_KHRONOS_OBJECT_START = 0x800,
VX_TYPE_VENDOR_OBJECT_START = 0xC00,
VX_TYPE_KHRONOS_STRUCT_MAX = VX_TYPE_USER_STRUCT_START - 1,
VX_TYPE_USER_STRUCT_END = VX_TYPE_VENDOR_STRUCT_START - 1,
VX_TYPE_VENDOR_STRUCT_END = VX_TYPE_KHRONOS_OBJECT_START - 1,
VX_TYPE_KHRONOS_OBJECT_END = VX_TYPE_VENDOR_OBJECT_START - 1,
VX_TYPE_VENDOR_OBJECT_END = 0xFFF,
VX_TYPE_REFERENCE = 0x800,
VX_TYPE_CONTEXT = 0x801,
VX_TYPE_GRAPH = 0x802,
VX_TYPE_NODE = 0x803,
VX_TYPE_KERNEL = 0x804,
VX_TYPE_PARAMETER = 0x805,
VX_TYPE_DELAY = 0x806,
VX_TYPE_LUT = 0x807,
VX_TYPE_DISTRIBUTION = 0x808,
VX_TYPE_PYRAMID = 0x809,
VX_TYPE_THRESHOLD = 0x80A,
VX_TYPE_MATRIX = 0x80B,
VX_TYPE_CONVOLUTION = 0x80C,
VX_TYPE_SCALAR = 0x80D,
VX_TYPE_ARRAY = 0x80E,
VX_TYPE_IMAGE = 0x80F,
VX_TYPE_REMAP = 0x810,
VX_TYPE_ERROR = 0x811,
VX_TYPE_META_FORMAT = 0x812,
VX_TYPE_OBJECT_ARRAY = 0x813,
VX_TYPE_TENSOR = 0x815,
};
Enumerator

VX_TYPE_INVALID - An invalid type value. When passed, an error must be returned.
VX_TYPE_CHAR - A vx_char.
VX_TYPE_INT8 - A vx_int8.
VX_TYPE_UINT8 - A vx_uint8.
VX_TYPE_INT16 - A vx_int16.
VX_TYPE_UINT16 - A vx_uint16.
VX_TYPE_INT32 - A vx_int32.
VX_TYPE_UINT32 - A vx_uint32.
VX_TYPE_INT64 - A vx_int64.
VX_TYPE_UINT64 - A vx_uint64.
VX_TYPE_FLOAT32 - A vx_float32.
VX_TYPE_FLOAT64 - A vx_float64.
VX_TYPE_SIZE - A vx_size.
VX_TYPE_DF_IMAGE - A vx_df_image.
VX_TYPE_BOOL - A vx_bool.
VX_TYPE_RECTANGLE - A vx_rectangle_t.
VX_TYPE_KEYPOINT - A vx_keypoint_t.
VX_TYPE_COORDINATES2D - A vx_coordinates2d_t.
VX_TYPE_COORDINATES3D - A vx_coordinates3d_t.
VX_TYPE_COORDINATES2DF - A vx_coordinates2df_t.
VX_TYPE_HOG_PARAMS - A vx_hog_t.
VX_TYPE_HOUGH_LINES_PARAMS - A vx_hough_lines_p_t.
VX_TYPE_LINE_2D - A vx_line2d_t.
VX_TYPE_TENSOR_MATRIX_MULTIPLY_PARAMS - A vx_tensor_matrix_multiply_params_t.
VX_TYPE_USER_STRUCT_START - A user-defined struct base index.
VX_TYPE_VENDOR_STRUCT_START - A vendor-defined struct base index.
VX_TYPE_KHRONOS_OBJECT_START - A Khronos-defined object base index.
VX_TYPE_VENDOR_OBJECT_START - A vendor-defined object base index.
VX_TYPE_KHRONOS_STRUCT_MAX - A value for comparison between Khronos-defined structs and user structs.
VX_TYPE_USER_STRUCT_END - A value for comparison between user structs and vendor structs.
VX_TYPE_VENDOR_STRUCT_END - A value for comparison between vendor structs and Khronos-defined objects.
VX_TYPE_KHRONOS_OBJECT_END - A value for comparison between Khronos-defined objects and vendor structs.
VX_TYPE_VENDOR_OBJECT_END - A value used for bound checking of vendor objects.
VX_TYPE_REFERENCE - A vx_reference.
VX_TYPE_CONTEXT - A vx_context.
VX_TYPE_GRAPH - A vx_graph.
VX_TYPE_NODE - A vx_node.
VX_TYPE_KERNEL - A vx_kernel.
VX_TYPE_PARAMETER - A vx_parameter.
VX_TYPE_DELAY - A vx_delay.
VX_TYPE_LUT - A vx_lut.
VX_TYPE_DISTRIBUTION - A vx_distribution.
VX_TYPE_PYRAMID - A vx_pyramid.
VX_TYPE_THRESHOLD - A vx_threshold.
VX_TYPE_MATRIX - A vx_matrix.
VX_TYPE_CONVOLUTION - A vx_convolution.
VX_TYPE_SCALAR - A vx_scalar, when needed to be completely generic for kernel validation.
VX_TYPE_ARRAY - A vx_array.
VX_TYPE_IMAGE - A vx_image.
VX_TYPE_REMAP - A vx_remap.
VX_TYPE_META_FORMAT - A vx_meta_format.
VX_TYPE_OBJECT_ARRAY - A vx_object_array.
VX_TYPE_TENSOR - A vx_tensor.
4.4.12. vx_vendor_id_e
The Vendor ID of the Implementation. As new vendors submit their implementations, this enumeration will grow.
enum vx_vendor_id_e {
VX_ID_KHRONOS = 0x000,
VX_ID_TI = 0x001,
VX_ID_QUALCOMM = 0x002,
VX_ID_NVIDIA = 0x003,
VX_ID_ARM = 0x004,
VX_ID_BDTI = 0x005,
VX_ID_RENESAS = 0x006,
VX_ID_VIVANTE = 0x007,
VX_ID_XILINX = 0x008,
VX_ID_AXIS = 0x009,
VX_ID_MOVIDIUS = 0x00A,
VX_ID_SAMSUNG = 0x00B,
VX_ID_FREESCALE = 0x00C,
VX_ID_AMD = 0x00D,
VX_ID_BROADCOM = 0x00E,
VX_ID_INTEL = 0x00F,
VX_ID_MARVELL = 0x010,
VX_ID_MEDIATEK = 0x011,
VX_ID_ST = 0x012,
VX_ID_CEVA = 0x013,
VX_ID_ITSEEZ = 0x014,
VX_ID_IMAGINATION = 0x015,
VX_ID_NXP = 0x016,
VX_ID_VIDEANTIS = 0x017,
VX_ID_SYNOPSYS = 0x018,
VX_ID_CADENCE = 0x019,
VX_ID_HUAWEI = 0x01A,
VX_ID_SOCIONEXT = 0x01B,
VX_ID_USER = 0xFFE,
VX_ID_MAX = 0xFFF,
VX_ID_DEFAULT = VX_ID_MAX,
};
Enumerator
5. Objects
Defines the basic objects within OpenVX.
All objects in OpenVX derive from a vx_reference
and contain a
reference to the vx_context
from which they were made, except the
vx_context
itself.
5.1. Object: Reference
Defines the Reference Object interface.
All objects in OpenVX are derived (in the objectoriented sense) from
vx_reference
.
All objects shall be able to be cast back to this type safely.
5.1.1. Macros
VX_MAX_REFERENCE_NAME
Defines the length of the reference name string, including the trailing zero.
#define VX_MAX_REFERENCE_NAME (64)
See also: vxSetReferenceName
5.1.2. Typedefs
vx_reference
A generic opaque reference to any object within OpenVX.
typedef struct _vx_reference *vx_reference;
A user of OpenVX should not assume that this can be cast directly to
anything; however, any object in OpenVX can be cast back to this for the
purposes of querying attributes of the object or for passing the object as a
parameter to functions that take a vx_reference
type.
If the API does not take that specific type but may take others, an error
may be returned from the API.
5.1.3. Enumerations
vx_reference_attribute_e
The reference attributes list.
enum vx_reference_attribute_e {
VX_REFERENCE_COUNT = VX_ATTRIBUTE_BASE(VX_ID_KHRONOS, VX_TYPE_REFERENCE) + 0x0,
VX_REFERENCE_TYPE = VX_ATTRIBUTE_BASE(VX_ID_KHRONOS, VX_TYPE_REFERENCE) + 0x1,
VX_REFERENCE_NAME = VX_ATTRIBUTE_BASE(VX_ID_KHRONOS, VX_TYPE_REFERENCE) + 0x2,
};
Enumerator

VX_REFERENCE_COUNT - Returns the reference count of the object. Read-only. Use a vx_uint32 parameter.
VX_REFERENCE_TYPE - Returns the vx_type_e of the reference. Read-only. Use a vx_enum parameter.
VX_REFERENCE_NAME - Used to query the reference for its name. This attribute can be set via the vxSetReferenceName function. Read-write. Use a vx_char* parameter.
5.1.4. Functions
vxGetStatus
Provides a generic API to return status values from Object constructors if they fail.
vx_status vxGetStatus(
vx_reference reference);
Note
Users do not need to strictly check every object creator as the errors should properly propagate and be detected during verification time or runtime.

Precondition: Appropriate Object Creator function.
Postcondition: Appropriate Object Release function.
Parameters

[in] reference - The reference to check for construction errors.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS - No errors; any other value indicates failure.
* - Some error occurred; please check the enumeration list and constructor.
vxGetContext
Retrieves the context from any reference from within a context.
vx_context vxGetContext(
vx_reference reference);
Parameters

[in] reference - The reference from which to extract the context.
Returns: The overall context that created the particular reference.
Any possible errors preventing a successful completion of this function
should be checked using vxGetStatus
.
vxQueryReference
Queries any reference type for some basic information like count or type.
vx_status vxQueryReference(
vx_reference ref,
vx_enum attribute,
void* ptr,
vx_size size);
Parameters

[in] ref - The reference to query.
[in] attribute - The value for which to query. Use vx_reference_attribute_e.
[out] ptr - The location at which to store the resulting value.
[in] size - The size in bytes of the container to which ptr points.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS - No errors; any other value indicates failure.
VX_ERROR_INVALID_REFERENCE - ref is not a valid vx_reference reference.
vxReleaseReference
Releases a reference. The reference may potentially refer to multiple OpenVX objects of different types. This function can be used instead of calling a specific release function for each individual object type (e.g. vxRelease<object>). The object will not be destroyed until its total reference count is zero.
vx_status vxReleaseReference(
vx_reference* ref_ptr);
Note
After returning from this function the reference is zeroed. 
Parameters

[in] ref_ptr - The pointer to the reference of the object to release.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS - No errors; any other value indicates failure.
VX_ERROR_INVALID_REFERENCE - ref_ptr is not a valid vx_reference reference.
vxRetainReference
Increments the reference counter of an object. This function is used to express the fact that the OpenVX object is referenced multiple times by an application. Each time this function is called for an object, the application will need to release the object one additional time before it can be destructed.
vx_status vxRetainReference(
vx_reference ref);
Parameters

[in] ref - The reference to retain.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS - No errors; any other value indicates failure.
VX_ERROR_INVALID_REFERENCE - ref is not a valid vx_reference reference.
vxSetReferenceName
Names a reference. This function is used to associate a name with a referenced object. This name can be used by the OpenVX implementation in log messages and any other reporting mechanisms.
vx_status vxSetReferenceName(
vx_reference ref,
const vx_char* name);
The OpenVX implementation will not check if the name is unique in the reference scope (context or graph). Several references can then have the same name.
Parameters

[in] ref - The reference to the object to be named.
[in] name - Pointer to the '\0'-terminated string that identifies the referenced object. The string is copied by the function so that it stays the property of the caller. NULL means that the reference is not named. The length of the string shall be lower than VX_MAX_REFERENCE_NAME bytes.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 No errors; any other value indicates failure. 
VX_ERROR_INVALID_REFERENCE
 ref is not a valid vx_reference reference.
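A short sketch of naming a reference for diagnostics follows; it is illustrative only (the context, image, and the name "input_luma" are arbitrary assumptions, not part of the specification).

```c
#include <VX/vx.h>
#include <stddef.h>

/* Sketch: naming an object so implementation log messages can identify it. */
void name_objects(vx_context context)
{
    vx_image luma = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);

    /* The string is copied by the implementation, so a stack or
     * temporary buffer is fine. Names need not be unique. */
    vxSetReferenceName((vx_reference)luma, "input_luma");

    /* Passing NULL removes the name, leaving the reference unnamed. */
    vxSetReferenceName((vx_reference)luma, NULL);

    vxReleaseImage(&luma);
}
```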
5.2. Object: Context
Defines the Context Object Interface.
The OpenVX context is the object domain for all OpenVX objects. All data objects live in the context as well as all framework objects. The OpenVX context keeps reference counts on all objects and must do garbage collection during its deconstruction to free lost references. While multiple clients may connect to the OpenVX context, all data are private in that the references referring to data objects are given only to the creating party.
Macros
Typedefs
Enumerations
Functions
5.2.1. Macros
VX_MAX_IMPLEMENTATION_NAME
Defines the length of the implementation name string, including the trailing zero.
#define VX_MAX_IMPLEMENTATION_NAME (64)
5.2.2. Typedefs
vx_context
An opaque reference to the implementation context.
typedef struct _vx_context *vx_context;
See also: vxCreateContext
5.2.3. Enumerations
vx_accessor_e
The memory accessor hint flags. These enumeration values indicate desired system behavior, not user intent. For example, they can be interpreted as hints to the system about cache operations or marshalling operations.
enum vx_accessor_e {
VX_READ_ONLY = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_ACCESSOR) + 0x1,
VX_WRITE_ONLY = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_ACCESSOR) + 0x2,
VX_READ_AND_WRITE = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_ACCESSOR) + 0x3,
};
Enumerator

VX_READ_ONLY
 The memory shall be treated by the system as if it were read-only. If the User writes to this memory, the results are implementation defined. 
VX_WRITE_ONLY
 The memory shall be treated by the system as if it were write-only. If the User reads from this memory, the results are implementation defined. 
VX_READ_AND_WRITE
 The memory shall be treated by the system as if it were readable and writeable.
vx_context_attribute_e
A list of context attributes.
enum vx_context_attribute_e {
VX_CONTEXT_VENDOR_ID = VX_ATTRIBUTE_BASE(VX_ID_KHRONOS, VX_TYPE_CONTEXT) + 0x0,
VX_CONTEXT_VERSION = VX_ATTRIBUTE_BASE(VX_ID_KHRONOS, VX_TYPE_CONTEXT) + 0x1,
VX_CONTEXT_UNIQUE_KERNELS = VX_ATTRIBUTE_BASE(VX_ID_KHRONOS, VX_TYPE_CONTEXT) + 0x2,
VX_CONTEXT_MODULES = VX_ATTRIBUTE_BASE(VX_ID_KHRONOS, VX_TYPE_CONTEXT) + 0x3,
VX_CONTEXT_REFERENCES = VX_ATTRIBUTE_BASE(VX_ID_KHRONOS, VX_TYPE_CONTEXT) + 0x4,
VX_CONTEXT_IMPLEMENTATION = VX_ATTRIBUTE_BASE(VX_ID_KHRONOS, VX_TYPE_CONTEXT) + 0x5,
VX_CONTEXT_EXTENSIONS_SIZE = VX_ATTRIBUTE_BASE(VX_ID_KHRONOS, VX_TYPE_CONTEXT) + 0x6,
VX_CONTEXT_EXTENSIONS = VX_ATTRIBUTE_BASE(VX_ID_KHRONOS, VX_TYPE_CONTEXT) + 0x7,
VX_CONTEXT_CONVOLUTION_MAX_DIMENSION = VX_ATTRIBUTE_BASE(VX_ID_KHRONOS, VX_TYPE_CONTEXT) + 0x8,
VX_CONTEXT_OPTICAL_FLOW_MAX_WINDOW_DIMENSION = VX_ATTRIBUTE_BASE(VX_ID_KHRONOS, VX_TYPE_CONTEXT) + 0x9,
VX_CONTEXT_IMMEDIATE_BORDER = VX_ATTRIBUTE_BASE(VX_ID_KHRONOS, VX_TYPE_CONTEXT) + 0xA,
VX_CONTEXT_UNIQUE_KERNEL_TABLE = VX_ATTRIBUTE_BASE(VX_ID_KHRONOS, VX_TYPE_CONTEXT) + 0xB,
VX_CONTEXT_IMMEDIATE_BORDER_POLICY = VX_ATTRIBUTE_BASE(VX_ID_KHRONOS, VX_TYPE_CONTEXT) + 0xC,
VX_CONTEXT_NONLINEAR_MAX_DIMENSION = VX_ATTRIBUTE_BASE(VX_ID_KHRONOS, VX_TYPE_CONTEXT) + 0xd,
VX_CONTEXT_MAX_TENSOR_DIMS = VX_ATTRIBUTE_BASE(VX_ID_KHRONOS, VX_TYPE_CONTEXT) + 0xE,
};
Enumerator

VX_CONTEXT_VENDOR_ID
 Queries the unique vendor ID. Read-only. Use a vx_uint16. 
VX_CONTEXT_VERSION
 Queries the OpenVX Version Number. Read-only. Use a vx_uint16.

VX_CONTEXT_UNIQUE_KERNELS
 Queries the context for the number of unique kernels. Read-only. Use a vx_uint32 parameter. 
VX_CONTEXT_MODULES
 Queries the context for the number of active modules. Read-only. Use a vx_uint32 parameter. 
VX_CONTEXT_REFERENCES
 Queries the context for the number of active references. Read-only. Use a vx_uint32 parameter. 
VX_CONTEXT_IMPLEMENTATION
 Queries the context for its implementation name. Read-only. Use a vx_char[VX_MAX_IMPLEMENTATION_NAME] array. 
VX_CONTEXT_EXTENSIONS_SIZE
 Queries the number of bytes in the extensions string. Read-only. Use a vx_size parameter. 
VX_CONTEXT_EXTENSIONS
 Retrieves the extensions string. Read-only. This is a space-separated string of extension names. Each OpenVX official extension has a unique identifier, comprised of capital letters, numbers and the underscore character, prefixed with "KHR_", for example "KHR_NEW_FEATURE". Use a vx_char pointer allocated to the size returned from VX_CONTEXT_EXTENSIONS_SIZE. 
VX_CONTEXT_CONVOLUTION_MAX_DIMENSION
 The maximum width or height of a convolution matrix. Read-only. Use a vx_size parameter. Each vendor must support centered kernels of size w × h, where both w and h are odd numbers, 3 ≤ w ≤ n and 3 ≤ h ≤ n, where n is the value of the VX_CONTEXT_CONVOLUTION_MAX_DIMENSION attribute. n is an odd number that should not be smaller than 9. w and h may or may not be equal to each other. All combinations of w and h meeting the conditions above must be supported. The behavior of vxCreateConvolution is undefined for values larger than the value returned by this attribute. 
VX_CONTEXT_OPTICAL_FLOW_MAX_WINDOW_DIMENSION
 The maximum window dimension of the Optical Flow Pyramid (LK) kernel. The value of this attribute shall be equal to or greater than 9. See also: VX_KERNEL_OPTICAL_FLOW_PYR_LK. Read-only. Use a vx_size parameter. 
VX_CONTEXT_IMMEDIATE_BORDER
 The border mode for immediate mode functions. Graph mode functions are unaffected by this attribute. Read-write. Use a pointer to a vx_border_t structure as parameter. Note: The assumed default value for immediate mode functions is VX_BORDER_UNDEFINED. 
VX_CONTEXT_UNIQUE_KERNEL_TABLE
 Returns the table of all the unique kernels that exist in the context. Read-only. Use a vx_kernel_info_t array. Precondition: You must call vxQueryContext with VX_CONTEXT_UNIQUE_KERNELS to compute the necessary size of the array. 
VX_CONTEXT_IMMEDIATE_BORDER_POLICY
 The unsupported border mode policy for immediate mode functions. Read-write. Graph mode functions are unaffected by this attribute. Use a vx_enum as parameter. Will contain a vx_border_policy_e. Note: The assumed default value for immediate mode functions is VX_BORDER_POLICY_DEFAULT_TO_UNDEFINED. Users should refer to the documentation of their implementation to determine what border modes are supported by each kernel. 
VX_CONTEXT_NONLINEAR_MAX_DIMENSION
 The dimension of the largest nonlinear filter supported. See vxNonLinearFilterNode. The implementation must support all dimensions (height or width, not necessarily the same) up to the value of this attribute. The lowest value that must be supported for this attribute is 9. Read-only. Use a vx_size parameter. 
VX_CONTEXT_MAX_TENSOR_DIMS
 The maximal number of dimensions supported by the implementation for tensor data.
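The precondition on VX_CONTEXT_UNIQUE_KERNEL_TABLE above can be sketched as the usual two-step query: fetch the kernel count first, then size the table array from it. This is an illustrative fragment assuming a valid context and a conformant implementation.

```c
#include <VX/vx.h>
#include <stdio.h>
#include <stdlib.h>

/* Sketch: enumerate the unique kernels registered in a context. */
void list_kernels(vx_context context)
{
    vx_uint32 count = 0;

    /* Step 1: query the number of unique kernels. */
    if (vxQueryContext(context, VX_CONTEXT_UNIQUE_KERNELS,
                       &count, sizeof(count)) != VX_SUCCESS || count == 0)
        return;

    /* Step 2: allocate the table and query it. */
    vx_kernel_info_t *table = malloc(count * sizeof(*table));
    if (table == NULL)
        return;

    if (vxQueryContext(context, VX_CONTEXT_UNIQUE_KERNEL_TABLE,
                       table, count * sizeof(*table)) == VX_SUCCESS) {
        for (vx_uint32 i = 0; i < count; i++)
            printf("0x%x %s\n", table[i].enumeration, table[i].name);
    }
    free(table);
}
```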
vx_memory_type_e
An enumeration of memory import types.
enum vx_memory_type_e {
VX_MEMORY_TYPE_NONE = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_MEMORY_TYPE) + 0x0,
VX_MEMORY_TYPE_HOST = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_MEMORY_TYPE) + 0x1,
};
Enumerator
vx_round_policy_e
The Round Policy Enumeration.
enum vx_round_policy_e {
VX_ROUND_POLICY_TO_ZERO = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_ROUND_POLICY) + 0x1,
VX_ROUND_POLICY_TO_NEAREST_EVEN = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_ROUND_POLICY) + 0x2,
};
Enumerator
vx_termination_criteria_e
The termination criteria list.
enum vx_termination_criteria_e {
VX_TERM_CRITERIA_ITERATIONS = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_TERM_CRITERIA) + 0x0,
VX_TERM_CRITERIA_EPSILON = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_TERM_CRITERIA) + 0x1,
VX_TERM_CRITERIA_BOTH = VX_ENUM_BASE(VX_ID_KHRONOS, VX_ENUM_TERM_CRITERIA) + 0x2,
};
See also: Optical Flow Pyramid (LK)
Enumerator

VX_TERM_CRITERIA_ITERATIONS
 Indicates a termination after a set number of iterations. 
VX_TERM_CRITERIA_EPSILON
 Indicates a termination after matching against the value of epsilon provided to the function. 
VX_TERM_CRITERIA_BOTH
 Indicates that both an iteration count and an epsilon method are employed. Whichever one matches first causes the termination.
5.2.4. Functions
vxCreateContext
Creates a vx_context
.
vx_context vxCreateContext(void);
This creates a top-level object context for OpenVX.
Note
This is required to do anything else. 
Returns: The reference to the implementation context vx_context
.
Any possible errors preventing a successful creation should be checked using
vxGetStatus
.
Postcondition: vxReleaseContext
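The create/check/release lifecycle above can be sketched as follows. Since creation errors are reported through the returned reference, the context is verified with vxGetStatus before use. Illustrative only; requires a conformant OpenVX implementation to run.

```c
#include <VX/vx.h>
#include <stdio.h>

int main(void)
{
    /* Create the top-level context; everything else depends on it. */
    vx_context context = vxCreateContext();
    if (vxGetStatus((vx_reference)context) != VX_SUCCESS) {
        fprintf(stderr, "failed to create OpenVX context\n");
        return 1;
    }

    /* ... create graphs and data objects, run work ... */

    /* Release the context; all contained objects are garbage-collected. */
    vxReleaseContext(&context);   /* context is zeroed on return */
    return 0;
}
```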
vxQueryContext
Queries the context for some specific information.
vx_status vxQueryContext(
vx_context context,
vx_enum attribute,
void* ptr,
vx_size size);
Parameters

[in]
context  The reference to the context. 
[in]
attribute  The attribute to query. Use a vx_context_attribute_e. 
[out]
ptr  The location at which to store the resulting value. 
[in]
size  The size in bytes of the container to which ptr points.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 No errors; any other value indicates failure. 
VX_ERROR_INVALID_REFERENCE
 context is not a valid vx_context reference. 
VX_ERROR_INVALID_PARAMETERS
 If any of the other parameters are incorrect. 
VX_ERROR_NOT_SUPPORTED
 If the attribute is not supported on this implementation.
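Variable-size attributes such as the extensions string are queried in two steps: first the size, then the data. The fragment below sketches this pattern; it is illustrative and assumes a valid context.

```c
#include <VX/vx.h>
#include <stdio.h>
#include <stdlib.h>

/* Sketch: print the space-separated extensions string of a context. */
void print_extensions(vx_context context)
{
    vx_size size = 0;

    /* Step 1: how many bytes does the extensions string occupy? */
    if (vxQueryContext(context, VX_CONTEXT_EXTENSIONS_SIZE,
                       &size, sizeof(size)) != VX_SUCCESS || size == 0)
        return;

    /* Step 2: allocate exactly that many bytes and fetch the string. */
    vx_char *extensions = malloc(size);
    if (extensions == NULL)
        return;

    if (vxQueryContext(context, VX_CONTEXT_EXTENSIONS,
                       extensions, size) == VX_SUCCESS)
        printf("extensions: %s\n", extensions);

    free(extensions);
}
```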
vxReleaseContext
Releases the OpenVX object context.
vx_status vxReleaseContext(
vx_context* context);
All reference counted objects are garbage-collected by the return of this call.
No calls are possible using the parameter context after the context has been
released until a new reference from vxCreateContext
is returned.
All outstanding references to OpenVX objects from this context are invalid
after this call.
Parameters

[in]
context  The pointer to the reference to the context.
Postcondition: After returning from this function the reference is zeroed.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 No errors; any other value indicates failure. 
VX_ERROR_INVALID_REFERENCE
 context is not a valid vx_context reference.
Precondition: vxCreateContext
vxSetContextAttribute
Sets an attribute on the context.
vx_status vxSetContextAttribute(
vx_context context,
vx_enum attribute,
const void* ptr,
vx_size size);
Parameters

[in]
context  The handle to the overall context. 
[in]
attribute  The attribute to set, from vx_context_attribute_e. 
[in]
ptr  The pointer to the data to which to set the attribute. 
[in]
size  The size in bytes of the data to which ptr points.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 No errors; any other value indicates failure. 
VX_ERROR_INVALID_REFERENCE
 context is not a valid vx_context reference. 
VX_ERROR_INVALID_PARAMETERS
 If any of the other parameters are incorrect. 
VX_ERROR_NOT_SUPPORTED
 If the attribute is not settable.
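As one concrete use, the read-write VX_CONTEXT_IMMEDIATE_BORDER attribute can be set to change how immediate mode functions handle borders. The sketch below is illustrative, assumes a valid context, and leaves graph mode nodes unaffected.

```c
#include <VX/vx.h>
#include <string.h>

/* Sketch: switch immediate mode functions to replicate-border handling. */
vx_status use_replicate_border(vx_context context)
{
    vx_border_t border;

    /* Zero the structure so the constant value union is well defined. */
    memset(&border, 0, sizeof(border));
    border.mode = VX_BORDER_REPLICATE;

    return vxSetContextAttribute(context, VX_CONTEXT_IMMEDIATE_BORDER,
                                 &border, sizeof(border));
}
```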
vxSetImmediateModeTarget
Sets the default target of the immediate mode. Upon successful execution of this function, any future execution of an immediate mode function is attempted on the new default target of the context.
vx_status vxSetImmediateModeTarget(
vx_context context,
vx_enum target_enum,
const char* target_string);
Parameters

[in]
context  The reference to the implementation context. 
[in]
target_enum  The default immediate mode target enum to be set to the vx_context object. Use a vx_target_e. 
[in]
target_string  The target name ASCII string. This contains a valid value when target_enum is set to VX_TARGET_STRING, otherwise it is ignored.
Returns: A vx_status_e
enumeration.
Return Values

VX_SUCCESS
 Default target set; any other value indicates failure. 
VX_ERROR_INVALID_REFERENCE
 If the context is not a valid vx_context reference. 
VX_ERROR_NOT_SUPPORTED
 If the specified target is not supported in this context.
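A sketch of target selection follows. The string "gpu" is a hypothetical vendor target name, not defined by the specification; target names are implementation specific, so unsupported names should be handled by falling back.

```c
#include <VX/vx.h>
#include <stddef.h>

/* Sketch: direct immediate mode work to a named target, falling back
 * to any available target if the name is not supported here. */
void select_target(vx_context context)
{
    if (vxSetImmediateModeTarget(context, VX_TARGET_STRING, "gpu")
            != VX_SUCCESS) {
        /* target_string is ignored for enums other than VX_TARGET_STRING. */
        vxSetImmediateModeTarget(context, VX_TARGET_ANY, NULL);
    }
}
```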
5.3. Object: Graph
Defines the Graph Object interface.
A set of nodes connected in a directed (only goes one way) acyclic (does not loop back) fashion.
A Graph may have sets of Nodes that are unconnected to other sets of Nodes
within the same Graph.
See Graph Formalisms.
Figure below shows the Graph state transition diagram.
Also see vx_graph_state_e
.