To further its goal of enabling trained networks to be passed from deep learning frameworks to embedded inference engines, the Khronos Group has added two new bidirectional converters to its existing set. Now available on the NNEF GitHub, these tools enable easy conversion of trained models, including quantized models, between the TensorFlow or Caffe2 formats and the NNEF format.
In April, Khronos introduced the Safety Critical Advisory Forum, created in response to developers’ growing concerns about, and demand for, functional safety standards covering both hardware and software. The advice and support that the forum provides to Khronos Working Groups directly contributes to the creation of safety-critical (SC) APIs. Both members and non-members can contribute to the forum, and this post outlines the benefits of participation.
NNEF and ONNX are two similar open formats for representing and interchanging neural networks among deep learning frameworks and inference engines. At their core, both formats are built around a collection of commonly used operations from which networks can be composed. Because ONNX and NNEF pursue similar goals, we are often asked what the differences between the two are. Although Khronos has not been involved in the detailed design of ONNX, in this post we explain the differences as we understand the two projects. We welcome constructive discussion as the industry explores the need for neural network exchange, and we hope this post can serve as a useful starting point for that conversation.