OpenVX™ is an open, royalty-free standard for cross platform acceleration of computer vision applications. OpenVX enables performance and power-optimized computer vision processing, especially important in embedded and real-time use cases such as face, body and gesture tracking, smart video surveillance, advanced driver assistance systems (ADAS), object and scene reconstruction, augmented reality, visual inspection, robotics and more.
OpenVX is easily extended with reusable vision acceleration functions across low-power domains. This is a key advantage that promotes wide adoption of OpenVX, and for developers it delivers the following:
OpenVX allows graph-level processing optimizations, which lets implementations fuse nodes where possible to achieve better overall performance. The graph also enables automatic graph-level memory optimizations that achieve a low memory footprint. OpenVX graph-optimized workloads can be deployed on a wide range of hardware, including small embedded CPUs, ASICs, APUs, discrete GPUs, and heterogeneous servers.
Implementers may use OpenCL or compute shaders to implement OpenVX nodes on programmable processors. Developers can use OpenVX to easily connect those nodes into a graph. The OpenVX graph enables implementers to optimize execution across diverse hardware architectures. OpenVX enables the graph to be extended to include hardware architectures that don’t support programmable APIs.
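As a sketch of how this node-and-graph model looks in practice, the following C example builds a small two-node graph (a Gaussian blur feeding a Sobel edge filter), verifies it, and executes it. The image dimensions and choice of kernels are illustrative, and building it requires linking against a vendor's OpenVX implementation:

```c
#include <stdio.h>
#include <VX/vx.h>

int main(void)
{
    /* One context owns all OpenVX objects for this application. */
    vx_context context = vxCreateContext();
    vx_graph graph = vxCreateGraph(context);

    /* Input/output images live in the context; the intermediate image is
     * "virtual", so the implementation may fuse the two nodes or elide
     * the buffer entirely during graph-level optimization. */
    vx_image input   = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
    vx_image blurred = vxCreateVirtualImage(graph, 640, 480, VX_DF_IMAGE_U8);
    vx_image grad_x  = vxCreateImage(context, 640, 480, VX_DF_IMAGE_S16);
    vx_image grad_y  = vxCreateImage(context, 640, 480, VX_DF_IMAGE_S16);

    /* Connect two nodes: Gaussian blur feeding a Sobel filter. */
    vxGaussian3x3Node(graph, input, blurred);
    vxSobel3x3Node(graph, blurred, grad_x, grad_y);

    /* Verification lets the implementation validate parameters and apply
     * graph-level optimizations before the graph is ever executed. */
    if (vxVerifyGraph(graph) == VX_SUCCESS)
        vxProcessGraph(graph);   /* synchronous execution of the whole graph */
    else
        fprintf(stderr, "graph verification failed\n");

    vxReleaseGraph(&graph);
    vxReleaseContext(&context);  /* releases remaining owned objects */
    return 0;
}
```

Note that the developer only declares the dataflow; where and how each node runs (CPU, GPU, or fixed-function hardware) is left to the implementation, which is what makes the same graph portable across architectures.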
Now that the OpenVX API has grown to an extensive set of functions, there is interest in creating implementations that target a subset of features rather than covering the entire OpenVX API. To offer this option while preventing excessive fragmentation over which implementations offer which features, the OpenVX 1.3 specification defines a collection of feature sets that form coherent and useful subsets of the OpenVX API.
Along with the release of OpenVX 1.3, the pipelining, neural network, and import kernel extensions have been updated. For the full list of extensions and features, see the OpenVX registry.
The OpenVX 1.0 specification and conformance tests were released in 2014. This was followed by the version 1.0.1 specification and an open source sample implementation in 2015, version 1.1 at the Embedded Vision Summit in 2016, and version 1.2 at the Embedded Vision Summit in 2017.
To enable deployment flexibility while avoiding fragmentation, OpenVX 1.3 defines a number of feature sets that are targeted at common embedded use cases. Hardware vendors can include one or more complete feature sets in their implementations to meet the needs of their customers and be fully conformant. The flexibility of OpenVX enables deployment on a diverse range of accelerator architectures, and feature sets are expected to dramatically increase the breadth and diversity of available OpenVX implementations. The defined OpenVX 1.3 feature sets include:
| | OpenCV | OpenVX |
|---|---|---|
| Implementation | Community-driven open source library | Callable library implemented and shipped by hardware vendors |
| Conformance | Extensive OpenCV Test Suite but no formal Adopters program | Implementations pass defined conformance test suite to use trademark |
| Scope | 1000s of imaging and vision functions; multiple camera APIs/interfaces | Tight focus on core hardware-accelerated functions for mobile vision and inferencing; uses external camera drivers |
| Acceleration | OpenCV 3.0 Transparent API (T-API) enables function offload to OpenCL devices | Implementation free to use any underlying API such as OpenCL; uses OpenCL for custom Nodes |
| Efficiency | OpenCV 4.0 G-API graph model for some filters, arithmetic/binary operations, and well-defined geometrical transformations | Graph-based execution of all Nodes; optimizable computation and data transfer |
| Inferencing | Deep Neural Network module; API to construct neural networks from layers for forward-pass computations only; import from ONNX, TensorFlow, Torch, Caffe | Neural Network layers and operations represented directly in the OpenVX Graph; NNEF direct import |
“As a working group, we’ve invested a lot in creating an extensive set of functions that can meet all the needs of OpenVX users. There has been interest in creating implementations that target only a subset of the features that are specific to and necessary for the application. We’ve built OpenVX 1.3 with flexibility in mind, to offer a menu of options for users who want to stay conformant but don’t need the entire specification for their application. We believe this work increases performance portability and scalability of OpenVX across vendors, enabling greater ease of implementation and promoting adoption of the standard while still enabling interoperability.”
“AMD has always supported open, royalty-free standards for HPC and Machine Learning; we believe this will benefit the research community and the industry as a whole. AMD was the first to open source a highly optimized implementation of OpenVX in the MIVisionX Toolkit as part of the ROCm ecosystem, which is used by many in industry and academia. OpenVX 1.3, with extensive support for computer vision and machine learning, will help keep up the momentum in the industry.”
“We are excited to be a partner to Khronos in developing the CTS and samples for Version 1.3 and porting it to Raspberry Pi. This will provide guidance to developers in the ecosystem and enable them to develop a wider range of applications more quickly using a smaller memory footprint while achieving better performance. This is an exciting next step in the march towards more capable computer vision and machine learning systems and MulticoreWare is proud to be a leader in this ecosystem.”
“ICURO has been collaborating with AMD in proliferating computer vision machine learning models. ICURO welcomes and supports the adoption of OpenVX 1.3 for innovative business use cases across multiple industries. Our artificial intelligence (AI) lab in Silicon Valley has accelerated the development and deployment of full-stack robotic vision applications powered by AMD edge processors and OpenVX stack. We are delighted to be a strategic partner of AMD in delivering high-value, high-return AI solutions for retail, industry 4.0, warehouse, logistics, healthcare, and several other industries.”
“Texas Instruments reinforces our support of OpenVX and its benefits to customers developing ADAS-to-autonomous applications for the automotive market. The OpenVX standard helps us to offer an easy-to-use SDK platform for customers developing embedded applications on multi-core, heterogeneous architectures such as TI’s Driver Assist (TDAx) SOCs.”