We've collected some of the more interesting items from this event for you:
The Embedded Vision Summit West 2014 provides a unique opportunity for engineers to learn about the hottest technology in the electronics industry—embedded computer vision—which enables “machines that see and understand.” This event will include:
President, The Khronos Group
The OpenVX Hardware Acceleration API for Embedded Vision Applications and Libraries
This presentation will introduce OpenVX, a new application programming interface (API) from the Khronos Group. OpenVX enables performance and power optimized vision algorithms for use cases such as face, body and gesture tracking, smart video surveillance, automatic driver assistance systems, object and scene reconstruction, augmented reality, visual inspection, robotics and more. OpenVX enables significant implementation innovation while maintaining a consistent API for developers. OpenVX can be used directly by applications or to accelerate higher-level middleware with platform portability. OpenVX complements the popular OpenCV open source vision library that is often used for application prototyping.
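OpenVX's central abstraction is a graph of vision processing nodes that the application declares up front, leaving the implementation free to fuse, tile, or offload work across accelerators. The real API is in C; the following is only a language-neutral toy sketch of that graph-programming model, with all names hypothetical and no resemblance to the actual OpenVX functions:

```python
class Graph:
    """Toy dataflow graph in the spirit of OpenVX's graph model: the
    application declares nodes and connections once, then executes the
    whole pipeline, letting a runtime optimize it end to end.
    Illustrative sketch only, NOT the OpenVX C API."""

    def __init__(self):
        self.nodes = []  # each entry: (function, input keys, output key)

    def add_node(self, fn, inputs, output):
        self.nodes.append((fn, inputs, output))

    def process(self, **images):
        # A real implementation could fuse, tile, or offload nodes here;
        # this sketch simply runs them in insertion order.
        data = dict(images)
        for fn, inputs, output in self.nodes:
            data[output] = fn(*(data[k] for k in inputs))
        return data

# Hypothetical two-stage pipeline on a 1-D "image" of pixel values.
g = Graph()
g.add_node(lambda img: [p // 2 for p in img], ["input"], "dimmed")
g.add_node(lambda img: [255 - p for p in img], ["dimmed"], "output")
result = g.process(input=[10, 200])
```

Because the whole pipeline is visible to the runtime before execution, an implementation can innovate freely underneath while the application-facing API stays constant, which is the portability argument the abstract makes.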
9:00 AM-3:00 PM | BDTI | A BDTI Technical Training Workshop:
Implementing Embedded Vision and Computer Vision: An Introduction
Covering Processors, Sensors, Algorithms, and Development with OpenCV and OpenCL
"Self-Driving Cars" by Nathaniel Fairfield, Google
Self-driving cars have the potential to transform how we move: they promise to make us safer, give freedom to millions of people who can't drive, and give people back their time. The Google Self-Driving Car project was created to rapidly advance autonomous driving technology and build on previous research. For the past four years, Google has been working to make cars that drive reliably on many types of roads, using lasers, cameras, and radar, together with a detailed map of the world. Fairfield will describe how Google leverages maps to assist with challenging perception problems such as detecting traffic lights, and how the different sensors can be used to complement each other. Google's self-driving cars have now traveled more than half a million miles autonomously. In this talk, Fairfield will discuss Google's overall approach to solving the driving problem, the capabilities of the car, the company's progress so far, and the remaining challenges to be resolved.
"Computer Vision Powered by Heterogeneous System Architecture (HSA)" by Harris Gasparakis, AMD
We will review the HSA vision and its current incarnation through OpenCL 2.0, and discuss its relevance and advantages for computer vision applications. HSA unifies CPU cores, GPU compute units, and auxiliary co-processors (such as an ISP, DSP, and video codecs) on the same die. It gives all IP blocks a unified and coherent view of system memory and enables concurrent processing, allowing the most suitable IP block to be used for each vision pipeline task. We will elucidate this concept with examples (such as multi-resolution optical flow and adaptive deep learning networks) and live demos. Finally, we will describe the transparent integration of OpenCL in OpenCV, soon to be released in OpenCV 3.0, first conceived and evangelized in the community by the author.
"Implementing Histogram of Oriented Gradients on a Parallel Vision Processor" by Marco Jacobs, videantis
Object detection in images is one of the core problems in computer vision. The Histogram of Oriented Gradients method (Dalal and Triggs 2005) is a key algorithm for object detection, and has been used in automotive, security and many other applications. In this presentation we will give an overview of the algorithm and show how it can be implemented in real-time on a high-performance, low-cost, and low-power parallel vision processor. We will demonstrate the standard OpenCV-based HOG with a linear SVM for human/pedestrian detection on VGA sequences in real-time. The SVM support vectors used are provided with OpenCV, trained on the Daimler Pedestrian Detection Benchmark Dataset and the INRIA Person Dataset.
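The core of HOG is simple: per pixel, compute gradient magnitude and orientation, then accumulate magnitude-weighted votes into orientation bins per cell. Below is a minimal pure-Python sketch of that per-cell histogram step, assuming unsigned orientations (0-180 degrees) and omitting the block normalization and bilinear vote interpolation of the full Dalal-Triggs pipeline; it is not the OpenCV implementation:

```python
import math

def hog_cell(patch, num_bins=9):
    """Orientation histogram for one cell of a grayscale patch given as
    a list of rows of intensities. Central-difference gradients,
    unsigned orientation, magnitude-weighted votes."""
    h, w = len(patch), len(patch[0])
    hist = [0.0] * num_bins
    for y in range(1, h - 1):          # skip the border pixels
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned
            hist[int(ang / 180.0 * num_bins) % num_bins] += mag
    return hist
```

The inner loop is pure multiply-accumulate over independent pixels, which is what makes HOG such a good fit for a parallel vision processor.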
"Evolving Algorithmic Requirements for Recognition and Classification in Augmented Reality" by Simon Morris, CogniVue
Augmented reality (AR) applications are based on accurately computing a camera’s 6 degrees of freedom (6DOF) position in 3-dimensional space, also known as its “pose”. In vision-based approaches to AR, the most common and basic approach to determine a camera’s pose is with known fiducial markers (typically square, black and white patterns that encode information about the required graphic overlay). The position of the known marker is used along with camera calibration to accurately overlay the 3D graphics. In marker-less AR, the problem of finding the camera pose requires significantly more complex and sophisticated algorithms, e.g. disparity mapping, feature detection, optical flow, and object classification. This presentation compares and contrasts the typical algorithmic processing flow and processor loading for both marker-based and marker-less AR. Processor loading and power requirements are discussed in terms of the constraints associated with mobile platforms.
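Once the 6DOF pose (a rotation R and translation t) has been estimated, overlaying 3D graphics reduces to projecting world points through the calibrated camera. A toy sketch of that projection step, assuming a simple pinhole model with one focal length and a principal point, and ignoring lens distortion:

```python
def project(point_3d, rotation, translation, focal, cx, cy):
    """Project a 3D world point to pixel coordinates. `rotation` is a
    3x3 row-major matrix and `translation` a 3-vector: together they
    are the 6DOF camera pose that marker-based or marker-less AR must
    estimate. `focal`, `cx`, `cy` come from camera calibration."""
    # World -> camera coordinates: Xc = R * X + t
    xc = [sum(rotation[i][j] * point_3d[j] for j in range(3)) + translation[i]
          for i in range(3)]
    # Perspective divide, then apply intrinsics
    u = focal * xc[0] / xc[2] + cx
    v = focal * xc[1] / xc[2] + cy
    return u, v
```

Marker-based and marker-less AR differ in how R and t are recovered, but both feed this same projection when rendering the overlay.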
"Challenges in Object Detection on Embedded Devices" by Adar Paz, CEVA
As more products ship with integrated cameras, there is an increased potential for computer vision (CV) to enable innovation. For instance, CV can tackle the “scene understanding” problem by first figuring out what the various objects in the scene are. Such "object detection" capability holds big promise for embedded devices in mobile, automotive, and surveillance markets. However, performing real-time object detection while meeting a strict power budget remains a challenge on existing processors. In this session, we will analyze the trade-offs of various object detection, feature extraction, and feature matching algorithms, assess their suitability for embedded vision processing, and recommend methods for efficient implementation on a power- and budget-constrained embedded device.
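One concrete example of such a trade-off: binary feature descriptors (in the style of BRIEF or ORB) replace floating-point distance computations with XOR plus popcount, which is far cheaper on power-constrained embedded processors than the Euclidean matching used by float descriptors. A minimal sketch, with descriptors stored as plain Python integers purely for illustration:

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors stored as ints:
    XOR the bit strings, then count the set bits (popcount)."""
    return bin(a ^ b).count("1")

def match(query, database):
    """Brute-force nearest neighbor over a list of binary descriptors.
    Returns (best index, distance). On embedded hardware the XOR and
    popcount typically map to single instructions."""
    best = min(range(len(database)), key=lambda i: hamming(query, database[i]))
    return best, hamming(query, database[best])
```

The same matching loop with float descriptors would need a multiply and add per dimension, which is the kind of cost difference the session's trade-off analysis weighs.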
"Taming the Beast: Performance and Energy Optimization Across Embedded Feature Detection and Tracking" by Chris Rowen, Cadence
We will look at a cross-section of advanced feature detectors, and consider the algorithm, bit precision, arithmetic primitives and implementation optimizations that yield high pixel processing rates, high result quality and low energy. We will also examine how these optimization methods apply to kernels used in tracking applications, including fast connected component labeling. From this we will derive general principles on the priority and likely impact of different optimization types.
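Connected component labeling, one of the tracking kernels named above, assigns each foreground pixel a label shared with all pixels it touches. A minimal pure-Python sketch using BFS flood fill and 4-connectivity; production implementations (e.g. two-pass union-find variants) are restructured for memory bandwidth and parallelism on embedded targets:

```python
from collections import deque

def label_components(binary):
    """Label 4-connected components of a binary image (list of rows of
    0/1). Returns (label image, component count); labels start at 1."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not labels[sy][sx]:
                next_label += 1                 # new component found
                labels[sy][sx] = next_label
                queue = deque([(sy, sx)])
                while queue:                    # flood fill its pixels
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x),
                                   (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
    return labels, next_label
```

The data-dependent memory access pattern of the flood fill is exactly what makes this kernel a target for the kinds of precision and implementation optimizations the talk discusses.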
"Programming Novel Recognition Algorithms on Heterogeneous Architectures" by Kees Vissers, Xilinx
The combination of heterogeneous systems, consisting of processors and FPGAs, is a high-performance implementation platform for image and vision processing. One of the significant hurdles in leveraging this compute potential has been the inherently low level of programming the FPGA part in RTL and connecting RTL blocks to processors. Novel complete software environments are now available that support algorithm development, programming exclusively in C/C++ and OpenCL. We will show examples of relevant novel vision and recognition algorithms for Zynq-based devices, with a complete platform abstraction of any RTL design, High-Level Synthesis interconnect, or processor low-level drivers. We will show the outstanding system-level performance and power consumption of a number of applications programmed on these devices.
Conference Code of Conduct: The Khronos Group is dedicated to providing a harassment-free conference experience for everyone. Visit our Code of Conduct page to learn more.