NNEF tagged news

Khronos member AImotive discusses its vision-first technology and NNEF. AImotive worked with the Khronos working group to create the new Neural Network Exchange Format standard. NNEF is designed to simplify the process of creating a network in one tool and then running that trained network on other toolkits or inference engines. Read more in the AImotive story.

Right now there is chaos in the AI tool segment: imagine if there were a dozen or more different Word document formats, and whether one opened on your system were a matter of luck. AI tools face a similar situation, which is why NNEF is designed to bring transparency and order. The involvement of the Khronos Group was essential to bringing key market players together and laying down a globally applicable standard. AImotive was the first company to initiate the NNEF working group within Khronos, and we serve as spec editor of the final format. Learn more about AImotive and NNEF.

Neural Networks Need Some Translation - Invitation to Khronos NNEF Advisory Panel. Khronos is readying a standard interchange format to map training frameworks to inference engines. The Neural Network Exchange Format (NNEF) is an open, scalable transfer format that allows engineers to move trained networks from any framework that supports the cross-vendor format into any inference engine that can read it. It’s a sort of PDF for neural networks. Khronos is extending an invitation to data scientists and engineers to take part in an NNEF advisory panel, especially people working on non-standard and novel network inferencing architectures. Participation does not require a Khronos membership and will give interested companies and individuals an opportunity to contribute and provide feedback to this important work.
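To make the “PDF for neural networks” analogy concrete, a trained network exported to NNEF’s textual syntax might look like the sketch below. This is illustrative only: the operation names, attributes, and tensor shapes here are assumptions modeled on a simple convolutional layer, and details may differ from the final published specification.

```
version 1.0;

graph main( input ) -> ( output )
{
    # placeholder for runtime input data
    input  = external(shape = [1, 3, 224, 224]);

    # trained weights, stored alongside the graph description
    kernel = variable(shape = [64, 3, 7, 7], label = 'conv1/kernel');
    bias   = variable(shape = [1, 64], label = 'conv1/bias');

    # convolution followed by a ReLU activation
    conv1  = conv(input, kernel, bias, stride = [2, 2]);
    output = relu(conv1);
}
```

A file like this, produced by any training framework with an NNEF exporter, could then be loaded by any inference engine with an NNEF importer, independent of the tool that created it.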

Don’t miss this year’s OpenVX Workshop at the Embedded Vision Summit. Khronos will present a day-long, hands-on workshop all about the OpenVX cross-platform neural network acceleration API for embedded vision applications. We’ve developed a new curriculum, so even if you attended in past years, this is a do-not-miss, jam-packed tutorial with new information on computer vision algorithms for feature tracking and on neural networks mapped to the graph API. We’ll run a hands-on practice session that gives participants a chance to solve real computer vision problems using OpenVX with the folks who created the API. We’ll also discuss the OpenVX roadmap and what’s to come.

FotoNation Limited and VeriSilicon Holdings Co., Ltd have entered into an agreement to jointly develop a next-generation image processing platform that offers best-in-class programmability, power, performance and area for computer vision (CV), computational imaging (CI) and deep learning. The market-ready IP platform, named IPU 2.0, will be available for customer license and design in the first quarter of 2017. IPU 2.0 offers a unified programming environment and pre-integrated imaging features for a wide range of applications across surveillance, automotive, mobile, IoT and more. IPU 2.0 will use open initiatives such as OpenVX and OpenCL.

The Khronos Group today announced the creation of two standardization initiatives to address the growing industry interest in the deployment and acceleration of neural network technology. Firstly, Khronos has formed a new working group to create an API-independent standard file format for exchanging deep learning data between training systems and inference engines. Work on generating requirements and detailed design proposals for the Neural Network Exchange Format (NNEF™) is already underway, and companies interested in participating are welcome to join Khronos for a voice and a vote in the development process. Secondly, the OpenVX™ working group has released an extension to enable Convolutional Neural Network topologies to be represented as OpenVX graphs and mixed with traditional vision functions. Read the press release about both of these Neural Network Standard Initiatives.

The deep learning speech recognition acceleration solution leverages an Altera Arria 10 FPGA, iFLYTEK’s deep neural network (DNN) recognition algorithms and Inspur’s FPGA-based DNN parallel design, migration and optimization with OpenCL. The solution pairs a heterogeneous CPU + Arria 10 FPGA hardware platform with a high-level OpenCL programming model on the software side, enabling migration of workloads from the CPU to FPGAs.

Inspur Group and the FPGA chipmaker Altera today launched a speech recognition acceleration solution based on Altera's Arria 10 FPGAs and a DNN algorithm from iFLYTEK, an intelligent speech technology provider in China, at the SC15 conference in Austin, Texas. The solution combines iFLYTEK's deep neural network (DNN) recognition algorithms with Inspur's FPGA-based DNN parallel design, migration and optimization work in OpenCL.