The curriculum for the 2018 OpenVX Workshop at the Embedded Vision Summit in May has been finalized. The Khronos Group will present a day-long, hands-on workshop all about the OpenVX cross-platform neural network acceleration API for embedded vision applications. Khronos has developed a new curriculum, making this a do-not-miss tutorial with new information on computer vision algorithms for feature tracking and on neural networks mapped to the graph API. The tutorials will be presented by speakers from Khronos member companies AMD, Axis Communications, Cadence, and Codeplay. Hands-on practice sessions with the people who created the OpenVX API will give participants a chance to solve real computer vision problems. Discussions will also cover the OpenVX roadmap and what’s to come. Registration is now open, but space is limited, so be sure not to wait too long.
Don’t miss this year’s OpenVX Workshop at the Embedded Vision Summit on May 24th, 2018. Khronos will present a day-long, hands-on workshop all about the OpenVX cross-platform neural network acceleration API for embedded vision applications. We’ve developed a new curriculum, so even if you attended in past years, this is a do-not-miss, jam-packed tutorial with new information on computer vision algorithms for feature tracking and on neural networks mapped to the graph API. We’ll run a hands-on practice session that gives participants a chance to solve real computer vision problems using OpenVX, alongside the folks who created the API. We’ll also talk about the OpenVX roadmap and what’s to come. Registration is now open, and early bird pricing ends April 10th.
Registration is now open for the Khronos Standards for Neural Networks and Embedded Vision workshop at the Embedded Vision Summit in Santa Clara. Early bird pricing is now $99. This seminar is intended for engineers, researchers, and software developers who build vision and neural network applications and want to benefit from transparent hardware acceleration, as well as for managers who want a general understanding of the structure and uses of Khronos standards.
The Khronos Group has posted two new RFQs, both for NNEF:
Caffe2 to NNEF Converter: The project will deliver a Caffe2-to-NNEF converter that receives a set of Caffe2 protobuf files and generates a semantically and functionally equivalent NNEF container.
TensorFlow to NNEF Converter: The project will deliver a converter between TensorFlow and NNEF that receives a TensorFlow protobuf file and generates a semantically and functionally equivalent NNEF container. The converter must also be able to convert the NNEF container back into a TensorFlow protobuf file that, when executed in TensorFlow, produces results equivalent to the original source of the conversion (although the backward conversion may not yield a protobuf identical to the original).
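The key requirement in both RFQs is semantic equivalence rather than byte-for-byte identity. A toy sketch of that round-trip property (using hypothetical helper names on a miniature op list, not the real NNEF tooling or TensorFlow APIs):

```python
# Toy illustration of the round-trip requirement described in the RFQ.
# to_nnef/from_nnef/run are hypothetical stand-ins, not actual NNEF tooling:
# the point is only that the converted graph must produce equivalent results,
# even if the regenerated file is not byte-identical to the original.

def to_nnef(graph):
    """Serialize a tiny op list into an NNEF-like textual container."""
    return "\n".join(f"{op} {a} {b}" for op, a, b in graph)

def from_nnef(text):
    """Parse the textual container back into the op-list form."""
    graph = []
    for line in text.splitlines():
        op, a, b = line.split()
        graph.append((op, float(a), float(b)))
    return graph

def run(graph):
    """Execute the toy graph: each op folds two constants."""
    ops = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
    return [ops[op](a, b) for op, a, b in graph]

source = [("add", 1.0, 2.0), ("mul", 3.0, 4.0)]
round_tripped = from_nnef(to_nnef(source))

# Semantic equivalence: identical results, even though the two container
# representations need not match byte for byte.
assert run(source) == run(round_tripped)
```

A real converter faces the same test at a much larger scale: executing the regenerated TensorFlow protobuf must reproduce the original network’s outputs.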
VeriSilicon today announced that significant milestones have been achieved for its versatile and highly scalable neural network inference engine family, VIP8000. The fully programmable VIP8000 processors reach the performance and memory efficiency of dedicated fixed-function logic while offering the customizability and future-proofing of full programmability in OpenCL, OpenVX, and a wide range of neural network frameworks, including NNEF. “The biggest thing to happen in the computer industry since the PC is AI and machine learning; it will truly revolutionize, empower, and improve our lives. It can be done in giant machines from IBM and Google, and in tiny chips made with VeriSilicon’s neural network processors,” said Dr. Jon Peddie, president of Jon Peddie Research. “By 2020 we will wonder how we ever lived without our AI assistants,” he added.
The Khronos Neural Network Exchange Format, among other technologies, goes a long way toward enabling highly optimized implementations of inference for networks trained on a range of systems. Chris Rowen, CEO of Babblabs, explains: "This is extremely valuable to opening up the path to exploit optimized high-volume inference engines in phones, cars, cameras and other IoT devices. This higher-level robust set of interfaces breaks the tyranny of instruction set compatibility as a standard for exchange and allows for greater levels of re-optimization as the inference execution hardware evolves over time." Read more on the Semiconductor Engineering blog.
Standards make life easier, and we depend on them for more than we might realize: from knowing exactly how to drive any car to knowing how to get hot or cold water from a faucet. Balancing the need for a stable standard while allowing technology advances to be rapidly exploited is a big part of what Khronos does. There are two ways a Khronos standard can be extended: Vendor Extensions and Khronos Extensions. Read on to learn how both of these work within Khronos.
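The distinction is visible in the extension names themselves: across Khronos APIs, Khronos-ratified extensions carry a KHR tag, multi-vendor extensions an EXT tag, and single-vendor extensions the vendor’s own tag (NV, AMD, INTEL, and so on). A minimal sketch, using a hard-coded illustrative list rather than names queried from a real driver:

```python
# Classify extension names by their tag, following the naming convention
# used across Khronos APIs such as OpenCL, OpenGL, and Vulkan:
# "khr" marks a Khronos-ratified extension, "ext" a multi-vendor extension,
# and any other tag a single-vendor extension.
# The example names below are illustrative, not queried from a device.

def classify(ext_name):
    tag = ext_name.split("_")[1]  # e.g. "cl_khr_fp64" -> "khr"
    if tag == "khr":
        return "Khronos"
    if tag == "ext":
        return "multi-vendor"
    return f"vendor ({tag.upper()})"

extensions = [
    "cl_khr_fp64",                   # Khronos extension
    "cl_nv_device_attribute_query",  # NVIDIA vendor extension
    "cl_ext_cxx_for_opencl",         # multi-vendor extension
]
for name in extensions:
    print(name, "->", classify(name))
```

In practice an application would obtain the list at runtime (for example via `clGetDeviceInfo` with `CL_DEVICE_EXTENSIONS` in OpenCL) and check for the specific names it needs before using them.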
This podcast episode of “The Interview” with The Next Platform focuses on an effort to standardize key neural network features to make development and innovation easier and more productive. To explore this topic, The Next Platform was joined by Neil Trevett. Listen to the podcast and read the write-up.
NNEF and ONNX are two similar open formats for representing and interchanging neural networks among deep learning frameworks and inference engines. At the core, both formats are based on a collection of commonly used operations from which networks can be built. Because of the similar goals of ONNX and NNEF, we often get asked for insights into the differences between the two. Read the Khronos blog to learn more about the similarities and differences between NNEF and ONNX.
Imagination Technologies announces the PowerVR CLDNN SDK for developing neural network applications on PowerVR GPUs. The neural network SDK makes it easy for developers to create Convolutional Neural Networks (CNNs) using PowerVR hardware. CLDNN sits on top of OpenCL, making use of OpenCL constructs so it can run alongside other custom OpenCL code, and it uses standard OpenCL memory, so it can be used alongside standard OpenGL ES contexts. Learn more about CLDNN and download the SDK today.
Architosh article covering the new Khronos NNEF 1.0 standard: "The Khronos Group is more than just about graphics standards like OpenGL and OpenCL. The consortium group has established Neural Network Exchange Format (NNEF) to help data scientists and engineers easily transfer trained networks." Khronos recently issued a press release announcing the NNEF 1.0 Provisional Specification.
Electronic Design has posted an overview of the latest NNEF 1.0 release, comparing ONNX and NNEF. "Khronos began talking about the possibility of a standard to reduce the threat of fragmentation about three months before it was officially announced in October 2016. The concept came from Khronos member AImotive, an automotive start-up trying to sell an entire software stack for autonomous driving as well as the custom chips to run it."
Codeplay has a very good write-up today on machine learning alternatives that don’t use neural networks. The included code, SYCL-ML, was developed as a proof of concept to show what a machine learning application using heterogeneous computing can look like, and it has been published as an open-source project. The project was developed using SYCL and ComputeCpp, an implementation of SYCL developed by Codeplay.
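SYCL-ML itself is written in C++ with SYCL; as a language-neutral illustration of the kind of classical, non-neural technique such a project covers, here is a minimal nearest-centroid classifier (a toy example, not code from SYCL-ML):

```python
# A minimal nearest-centroid classifier: a classical machine-learning method
# with no neural network involved, of the general kind a non-NN ML library
# might implement. Plain Python here for illustration only; SYCL-ML expresses
# such algorithms in C++/SYCL so they can run on heterogeneous hardware.

def fit(samples, labels):
    """Compute the mean (centroid) of the samples in each class."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest (squared distance)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda y: dist2(centroids[y]))

centroids = fit([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [4.8, 5.2]],
                ["low", "low", "high", "high"])
print(predict(centroids, [0.1, 0.2]))   # -> low
print(predict(centroids, [4.9, 4.9]))   # -> high
```

The per-sample distance computations are independent, which is exactly what makes such classical methods good candidates for the data-parallel execution model SYCL targets.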
The Khronos Group announces the release of the Neural Network Exchange Format (NNEF™) 1.0 Provisional Specification for universal exchange of trained neural networks between training frameworks and inference engines. NNEF reduces machine learning deployment fragmentation by enabling a rich mix of neural network training tools and inference engines to be used by applications across a diverse range of devices and platforms. The release of NNEF 1.0 as a provisional specification enables feedback from the industry to be incorporated before the specification is finalized — comments and feedback are welcome on the NNEF GitHub repository.