NNEF tagged news

NNEF and ONNX are two similar open formats to represent and interchange neural networks among deep learning frameworks and inference engines. At their core, both formats are based on a collection of commonly used operations from which networks can be built. Because of the similar goals of ONNX and NNEF, we are often asked for insight into the differences between the two. Read the Khronos blog to learn more about the similarities and differences between NNEF and ONNX.
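As a rough illustration of what "a collection of commonly used operations" means in practice, here is a minimal Python sketch of a tiny network described as a flat list of operations over named tensors, which is the structural idea both NNEF and ONNX share. The operation names and attributes below are illustrative only and are not taken from either specification.

```python
# Minimal sketch: a tiny network as a flat list of operations over named
# tensors -- the structural idea shared by NNEF and ONNX. The operation
# names and attributes are illustrative, not either format's vocabulary.

network = {
    "inputs": ["image"],
    "outputs": ["probs"],
    "operations": [
        {"op": "conv",     "inputs": ["image", "conv1_w", "conv1_b"],
         "outputs": ["conv1"], "attrs": {"stride": [1, 1], "padding": "same"}},
        {"op": "relu",     "inputs": ["conv1"], "outputs": ["relu1"], "attrs": {}},
        {"op": "max_pool", "inputs": ["relu1"], "outputs": ["pool1"],
         "attrs": {"size": [2, 2], "stride": [2, 2]}},
        {"op": "matmul",   "inputs": ["pool1", "fc_w"], "outputs": ["fc"], "attrs": {}},
        {"op": "softmax",  "inputs": ["fc"], "outputs": ["probs"], "attrs": {}},
    ],
}

# A converter or inference engine walks this list and maps each generic
# operation onto its own kernels -- which is what makes exchange formats
# like NNEF and ONNX framework-neutral.
for op in network["operations"]:
    print(op["op"], op["inputs"], "->", op["outputs"])
```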

Imagination Announces Neural Network SDK which sits on OpenCL
Imagination Technologies announces the PowerVR CLDNN SDK for developing neural network applications on PowerVR GPUs. The neural network SDK makes it easy for developers to create Convolutional Neural Networks (CNNs) using PowerVR hardware. CLDNN sits on top of OpenCL, making use of OpenCL constructs so it can be used alongside other custom OpenCL code. It uses standard OpenCL memory, so it can also be used with standard OpenGL ES contexts. Learn more about CLDNN and download the SDK today.
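Because CLDNN builds on standard OpenCL objects, interoperating with custom OpenCL code comes down to sharing the same context, queue and buffers. The sketch below shows that pattern in Python using pyopencl for brevity; the actual CLDNN SDK exposes a C/C++ API, so the final library call is a clearly marked hypothetical placeholder rather than a real CLDNN entry point.

```python
# Sketch of the "sits on top of OpenCL" idea: the application owns the
# OpenCL context, queue and buffers, and can hand the same objects both to
# its own kernels and to an OpenCL-based neural network library.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()    # shared OpenCL context
queue = cl.CommandQueue(ctx)      # shared command queue

image = np.random.rand(1, 3, 224, 224).astype(np.float32)
mf = cl.mem_flags
image_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=image)

# Custom preprocessing kernel running on the same context and queue.
program = cl.Program(ctx, """
__kernel void scale(__global float *data, const float factor) {
    int i = get_global_id(0);
    data[i] *= factor;
}
""").build()
program.scale(queue, (image.size,), None, image_buf, np.float32(1.0 / 255.0))

# Hypothetical CLDNN-style call: because image_buf is ordinary OpenCL
# memory, an OpenCL-based NN library could consume it without a copy.
# run_network(ctx, queue, image_buf)   # placeholder, not a real CLDNN API
```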

Architosh article covering the new Khronos NNEF 1.0 standard: "The Khronos Group is more than just about graphics standards like OpenGL and OpenCL. The consortium group has established Neural Network Exchange Format (NNEF) to help data scientists and engineers easily transfer trained networks." Khronos recently issued a press release announcing the NNEF 1.0 Provisional Specification.

Electronic Design has posted an overview of the latest NNEF 1.0 release that compares ONNX and NNEF. "Khronos began talking about the possibility of a standard to reduce the threat of fragmentation about three months before it was officially announced in October 2016. The concept came from Khronos member AImotive, an automotive start-up trying to sell an entire software stack for autonomous driving as well as the custom chips to run it."

Codeplay has a very good write-up today on machine learning alternatives that don't use neural networks. The included code, SYCL-ML, was developed as a proof of concept to show what a machine learning application using heterogeneous computing can look like, and has been published as an open source project. The project was developed using SYCL and ComputeCpp, which is an implementation of SYCL developed by Codeplay.
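For readers wondering what "machine learning without neural networks" looks like computationally, here is a short NumPy sketch of principal component analysis, the kind of dense linear algebra that classical techniques rely on and that maps well to heterogeneous hardware. It is purely illustrative: it is not SYCL-ML code and assumes nothing about which algorithms that project implements or how its API looks.

```python
# Illustrative classical (non-neural) machine learning computation:
# principal component analysis via the covariance eigendecomposition.
# Not SYCL-ML code; a SYCL-based library would express this kind of
# computation as kernels running on a heterogeneous device.
import numpy as np

def pca(data: np.ndarray, n_components: int) -> np.ndarray:
    """Project `data` (samples x features) onto its top principal components."""
    centered = data - data.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)                 # ascending eigenvalues
    top = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return centered @ top

# Example: reduce 64-dimensional samples to 2 dimensions.
samples = np.random.rand(500, 64).astype(np.float32)
reduced = pca(samples, 2)
print(reduced.shape)   # (500, 2)
```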

The Khronos Group announces the release of the Neural Network Exchange Format (NNEF™) 1.0 Provisional Specification for universal exchange of trained neural networks between training frameworks and inference engines. NNEF reduces machine learning deployment fragmentation by enabling a rich mix of neural network training tools and inference engines to be used by applications across a diverse range of devices and platforms. The release of NNEF 1.0 as a provisional specification enables feedback from the industry to be incorporated before the specification is finalized — comments and feedback are welcome on the NNEF GitHub repository.
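As a sketch of the exchange workflow NNEF enables, the snippet below loads an exported NNEF model and walks its flat list of operations, the step a converter or inference engine performs before mapping each operation onto its own kernels. It assumes the `nnef` Python module shipped with the Khronos NNEF-Tools repository (https://github.com/KhronosGroup/NNEF-Tools); the file path is illustrative and the exact function and attribute names should be treated as assumptions rather than a definitive API reference.

```python
# Assumed: the `nnef` Python module from KhronosGroup/NNEF-Tools.
# Path, function and attribute names are assumptions for illustration.
import nnef

# Load a previously exported NNEF model (typically a directory containing
# graph.nnef plus binary weight files, or a standalone .nnef file).
graph = nnef.load_graph("model.nnef")   # illustrative path

# An inference engine or converter walks the flat list of operations and
# maps each one onto its own kernels.
print("graph:", graph.name, "inputs:", graph.inputs, "outputs:", graph.outputs)
for op in graph.operations:
    print(op.name, dict(op.attribs))
```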

Neil Trevett, Khronos Group President, and Radhakrishna Giduthuri, Software Architecture and Compute Performance Acceleration at AMD, spoke at two Khronos-related events this past week. Neil presented an update on the Khronos Standards for Vision and Machine Learning, which covered the Khronos standards OpenVX, NNEF, OpenCL, SYCL and Vulkan. Radhakrishna's presentation, Standards for Neural Networks Acceleration and Deployment, covered the Khronos standards OpenVX and NNEF. The slides from both presentations are now online.

Blog: Machine learning’s fragmentation problem — and the solution from Khronos, NNEF
There is a wide range of open-source deep learning training frameworks available today, offering researchers and designers plenty of choice when they are setting up their project. Caffe, TensorFlow, Chainer, Theano, Caffe2; the list goes on and is getting longer all the time. This diversity is great for encouraging innovation, as the different approaches taken by the various frameworks make it possible to access a very wide range of capabilities and, of course, to add functionality that’s then given back to the community. This helps to drive the virtuous cycle of innovation.

NNEF from Khronos will enable universal interoperability for machine learning developers and implementers
The Khronos Group is about to release a new standard method of moving trained neural networks among frameworks, and between frameworks and inference engines. The new standard is the Neural Network Exchange Format (NNEF™); it has been in design for over a year and will be available to the public by the end of 2017. Learn more about NNEF and the upcoming release in the Khronos blog post.

The Khronos Group will be holding a two-hour tutorial at the Embedded Systems Conference '17 in December. Attendees will gain an understanding of the architecture of Khronos standards for computer vision and neural networks, and become fluent in actually using OpenVX and NNEF for real-time computer vision and neural network inference tasks.

Khronos member AImotive discusses their vision-first technology and NNEF. AImotive worked with the Khronos working group to create the new Neural Network Exchange Format standard. NNEF is designed to simplify the process of creating a network with one tool and running that trained network on other toolkits or inference engines. Read more in the AImotive story.

Right now there is chaos in the AI tool segment: imagine if there were a dozen or more different Word document formats and opening one on your system was only a matter of luck. AI tools face a similar fate, which is why NNEF is designed to bring transparency and order. The involvement of the Khronos Group was essential in bringing key market players together and laying down a globally applicable standard. AImotive was the first company to initiate the NNEF working group within Khronos, and we serve as spec editor of the final format. Learn more about AImotive and NNEF.