
NNEF tagged news

The Khronos Group today announced the creation of two standardization initiatives to address the growing industry interest in the deployment and acceleration of neural network technology. First, Khronos has formed a new working group to create an API-independent standard file format for exchanging deep learning data between training systems and inference engines. Work on generating requirements and detailed design proposals for the Neural Network Exchange Format (NNEF™) is already underway, and companies interested in participating are welcome to join Khronos for a voice and a vote in the development process. Second, the OpenVX™ working group has released an extension that enables Convolutional Neural Network topologies to be represented as OpenVX graphs and mixed with traditional vision functions. Read the press release about both of these Neural Network Standard Initiatives.
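
To give a flavor of what "mixing CNN layers with traditional vision functions" can look like, here is a minimal sketch in C. It is not from the press release or the extension specification: it assumes the OpenVX 1.x core API plus the vx_khr_nn extension header and its vxConvolutionLayer/vxCreateTensor entry points, and the image sizes, tensor dimensions, and fixed-point settings are placeholders chosen only for illustration.

/* Sketch: one OpenVX graph holding both a classic vision node and an
 * NN-extension convolution layer. Assumes <VX/vx_khr_nn.h> is available. */
#include <VX/vx.h>
#include <VX/vx_khr_nn.h>   /* Neural Network extension header (assumed) */

int main(void)
{
    vx_context ctx   = vxCreateContext();
    vx_graph   graph = vxCreateGraph(ctx);

    /* Traditional vision front end: smooth a camera frame. */
    vx_image input    = vxCreateImage(ctx, 224, 224, VX_DF_IMAGE_U8);
    vx_image smoothed = vxCreateImage(ctx, 224, 224, VX_DF_IMAGE_U8);
    vxGaussian3x3Node(graph, input, smoothed);

    /* CNN stage: a single convolution layer on fixed-point tensors.
     * Real code would feed the smoothed image into the input tensor;
     * the two stages are shown side by side here only to illustrate
     * that vision nodes and NN layers live in the same graph. */
    vx_size in_dims[4]  = {224, 224, 1, 1};   /* W, H, channels, batch   */
    vx_size w_dims[4]   = {3, 3, 1, 8};       /* kW, kH, in ch., out ch. */
    vx_size b_dims[1]   = {8};
    vx_size out_dims[4] = {224, 224, 8, 1};
    vx_tensor in_t  = vxCreateTensor(ctx, 4, in_dims,  VX_TYPE_INT16, 8);
    vx_tensor w_t   = vxCreateTensor(ctx, 4, w_dims,   VX_TYPE_INT16, 8);
    vx_tensor b_t   = vxCreateTensor(ctx, 1, b_dims,   VX_TYPE_INT16, 8);
    vx_tensor out_t = vxCreateTensor(ctx, 4, out_dims, VX_TYPE_INT16, 8);

    vx_nn_convolution_params_t conv = {0};
    conv.padding_x       = 1;
    conv.padding_y       = 1;
    conv.overflow_policy = VX_CONVERT_POLICY_SATURATE;
    conv.rounding_policy = VX_ROUND_POLICY_TO_ZERO;
    vxConvolutionLayer(graph, in_t, w_t, b_t, &conv, sizeof(conv), out_t);

    /* One verify/execute pass covers both the vision and the CNN nodes. */
    if (vxVerifyGraph(graph) == VX_SUCCESS)
        vxProcessGraph(graph);

    vxReleaseContext(&ctx);
    return 0;
}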

Inspur Group and the FPGA maker Altera today launched a speech recognition acceleration solution at the SC15 conference in Austin, Texas. The deep learning solution combines Altera’s Arria 10 FPGAs, deep neural network (DNN) recognition algorithms from iFLYTEK, an intelligent speech technology provider in China, and Inspur’s FPGA-based DNN parallel design, migration, and optimization in OpenCL. The hardware platform is a heterogeneous CPU + Arria 10 FPGA architecture, and the software uses OpenCL’s high-level programming model to migrate the DNN workload from the CPU to the FPGA.
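
For readers curious what the CPU + FPGA split looks like on the host side, below is a minimal sketch using standard OpenCL API calls. It is not Inspur’s or iFLYTEK’s actual code: the kernel name (dnn_forward), the bitstream file name (dnn.aocx), and the feature/output dimensions are made-up placeholders; the only assumption taken from Altera’s FPGA flow is that kernels are compiled offline and loaded with clCreateProgramWithBinary rather than built from source at run time. Error checking is omitted for brevity.

/* Host-side sketch: CPU prepares acoustic feature frames, FPGA runs the DNN. */
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

int main(void)
{
    /* The FPGA board is exposed by the OpenCL runtime as an accelerator device. */
    cl_platform_id platform;
    cl_device_id   device;
    cl_int         err;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ACCELERATOR, 1, &device, NULL);

    cl_context       ctx   = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    /* Load the offline-compiled FPGA kernel image (placeholder file name). */
    FILE *f = fopen("dnn.aocx", "rb");
    fseek(f, 0, SEEK_END);
    size_t bin_size = (size_t)ftell(f);
    rewind(f);
    unsigned char *binary = malloc(bin_size);
    fread(binary, 1, bin_size, f);
    fclose(f);

    cl_program program = clCreateProgramWithBinary(ctx, 1, &device, &bin_size,
                                                   (const unsigned char **)&binary,
                                                   NULL, &err);
    clBuildProgram(program, 1, &device, "", NULL, NULL);
    cl_kernel kernel = clCreateKernel(program, "dnn_forward", &err);

    /* Placeholder sizes: a batch of feature frames in, DNN posteriors out. */
    const size_t frames = 1024, feat_dim = 440, out_dim = 2048;
    float *features = calloc(frames * feat_dim, sizeof(float));
    float *scores   = calloc(frames * out_dim,  sizeof(float));

    cl_mem d_in  = clCreateBuffer(ctx, CL_MEM_READ_ONLY,
                                  frames * feat_dim * sizeof(float), NULL, &err);
    cl_mem d_out = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY,
                                  frames * out_dim * sizeof(float), NULL, &err);

    clEnqueueWriteBuffer(queue, d_in, CL_TRUE, 0,
                         frames * feat_dim * sizeof(float), features, 0, NULL, NULL);
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &d_in);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &d_out);

    size_t global = frames;
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, d_out, CL_TRUE, 0,
                        frames * out_dim * sizeof(float), scores, 0, NULL, NULL);
    clFinish(queue);

    /* ... the posterior scores would now feed the CPU-side speech decoder ... */

    free(features); free(scores); free(binary);
    clReleaseMemObject(d_in); clReleaseMemObject(d_out);
    clReleaseKernel(kernel); clReleaseProgram(program);
    clReleaseCommandQueue(queue); clReleaseContext(ctx);
    return 0;
}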