The Intel Compute Library for Deep Neural Networks (clDNN) is an open-source performance library for Deep Learning (DL) applications, intended to accelerate DL inference on Intel® Processor Graphics (Intel® HD Graphics, Intel® Iris®, and Intel® Iris® Pro Graphics). clDNN includes highly optimized building blocks for implementing convolutional neural networks (CNNs), with C and C++ interfaces. The library is also used in the Deep Learning Toolkit found in the Intel Computer Vision SDK Beta. clDNN is available on GitHub; to learn more about how to use it, see the whitepaper.
The Generic Graphics Library (GEGL) is best known as the backend of the image-editing application GIMP. GEGL is a graph-based image processing framework that lets users chain image processing operations, represented as nodes, into a graph. It provides operations for loading and storing images, adjusting colors, filtering in various ways, and transforming and compositing images. GEGL-OpenCL is an educational initiative that aims to get more developers to study OpenCL and use it in their projects.
The Khronos Group announces the immediate availability of the finalized OpenCL™ 2.2 specification, incorporating industry feedback received from developers during the provisional specification review period. In addition to releasing the specification in final form, Khronos has, for the first time, released the full source of the specification and conformance tests for OpenCL 2.2 on GitHub to enable deeper community engagement. The conformance tests for OpenCL versions 1.2, 2.0, and 2.1 have also been released on GitHub, with more open-source releases to follow. The Windsor Testing Framework, also released today, enables developers to quickly install and configure the OpenCL Conformance Test Suite on their own systems. For developers who know OpenCL C and plan to port their kernels to OpenCL C++, the OpenCL C to OpenCL C++ Porting Guidelines have also been released.
Imagination Technologies announces the first GPU IP core based on its new PowerVR Furian architecture, the Series8XT GT8525. Tatiana Solokhina, CTO of RnD Center ELVEES, a Khronos member, says: “As a provider of SoCs for a wide range of global video analytics applications, we require a GPU that offers the best compute performance in a power constrained footprint. The new PowerVR Furian 8XT family from Imagination provides us an industry-leading GPU with new ALU for increased performance density and efficiency. In addition, support for standard compute APIs such as OpenVX enables easy implementation of real world vision processing applications.” Furian is designed to address the increasing compute requirements across multiple applications and market segments through efficient use of compute APIs, including OpenCL 2.0, Vulkan 1.0, and OpenVX 1.1.
Come to this accessible talk on the state of the industry, aimed at the layperson and enthusiast, and hear from Khronos, the industry consortium that produces the open standards driving this revolution. Khronos members such as Intel, Xilinx, Huawei, AMD, NVIDIA, and Codeplay are in Toronto for the International Workshop on OpenCL (IWOCL) at the University of Toronto, May 16–18, 2017. The event is free and open to the public, but an RSVP is required to ensure seating is available; register online to attend.
Five years ago The International Workshop on OpenCL (IWOCL – “eye-wok-ul”) started as a small OpenCL-focused conference. In 2017 it has grown to three full days filled with tutorials, talks, posters and many technical discussions. You’ll hear attendees (and yourself) saying, “I did not know this was going on and I should have known it before.” It is a great place to learn the latest on OpenCL. Learn more about the history of IWOCL and the upcoming IWOCL event May 16-18, 2017 in Toronto, Canada.
VeriSilicon Holdings Co., Ltd. announces VIP8000, a highly scalable and programmable processor for computer vision and artificial intelligence. It delivers over 3 TeraMACs per second at an efficiency better than 1.5 GMACs/second/mW, with the smallest silicon area in the industry on 16FF process technology. The VIP8000 can directly import neural networks generated by popular deep learning frameworks such as Caffe and TensorFlow, and those networks can be integrated with other computer vision functions using the OpenVX framework. The processor is programmed with OpenCL or OpenVX, using a unified programming model across the hardware units, including customer application-specific hardware acceleration units. Learn more about the VIP8000.
This week at the Embedded Vision Summit (EVS) in California, Imagination is showcasing its latest Convolutional Neural Network (CNN) object recognition demo. All of the networks shown have been implemented using Imagination’s own DNN library. IMG DNN sits on top of OpenCL but does not obscure it; it makes use of OpenCL constructs, so it can be used alongside other custom OpenCL code. Imagination’s Paul Brasnett is speaking at EVS on ‘Training CNNs for Efficient Inference’, and for further reading, take a look at this CNN-based number recognition demo, which uses OpenVX with the CNN extension. Learn more about Imagination’s Convolutional Neural Networks.
The Intel Computer Vision SDK Beta is for developing and deploying vision-oriented solutions on platforms from Intel, including autonomous vehicles, digital surveillance cameras, robotics, and mixed-reality headsets. Based on OpenVX, the SDK offers many useful extensions and supports heterogeneous execution across CPU and SoC accelerators, using an advanced graph compiler, optimized and developer-created kernels, and design and analysis tools. It also includes deep-learning tools that boost inference performance in deep-learning deployments. If the functionality you need is not already available in the supplied library, you can write custom kernels in C, C++, or OpenCL.