Khronos Announces VR Standards Initiative

Industry call to define common Virtual Reality APIs

December 6th, 2016 – SIGGRAPH Asia, Macau – The Khronos Group, an open consortium of leading hardware and software companies, today announced a call for participation in a new initiative to define a cross-vendor, royalty-free, open standard for access to modern virtual reality (VR) devices.

The rapid growth of the virtual reality market has led to platform fragmentation, forcing VR applications to be ported and customized to run on multiple VR runtimes, and requiring GPUs and displays to support multiple driver interfaces. This fragmentation prevents the creation of VR experiences that run across multiple platforms, adds expense for developers wishing to support multiple VR devices, and can result in sub-optimal display driver optimizations.

Key components of the new standard will include APIs for tracking of headsets, controllers and other objects, and for rendering to a diverse set of display hardware. This standard will enable applications to be portable to any VR system that conforms to the Khronos standard, significantly enhancing the end-user experience, and driving widespread availability of content to spur further growth in the VR market.

Fast-paced work on detailed proposals and designs will start after an initial exploratory phase to define the standard’s scope and key objectives. Any company interested in participating is strongly encouraged to join Khronos for a voice and a vote in the development process. Design contributions from any member are welcome. More information on this initiative and joining the Khronos Group is available at www.khronos.org/vr.

Industry Support

“The virtual reality industry has garnered massive attention and investment, resulting in validation of virtual reality technology. We believe continued growth will require standardization and AMD supports the Khronos initiative for an open standard,” said Daryl Sartain, director and worldwide head of VR at AMD.

“Virtual reality is driving the graphics industry forward with user experiences becoming so compelling they can transform visual computing for people at home, work and in their leisure time,” said Jakub Lamik, VP Product Marketing, Media Processing Group, ARM. “Success, and scaling of this market, will be accelerated by industry standards and Khronos is a pioneering leader in this area that we support fully.”

“With VR on the verge of rapid growth across all of the major platform families, this new Khronos open standards initiative is very timely. We at Epic Games will wholeheartedly contribute to the effort, and we'll adopt and support the resulting API in Unreal Engine,” said Tim Sweeney, founder & CEO, Epic Games.

“NVIDIA is excited to see the industry come together around an open standard for VR,” said Jason Paul, general manager for virtual reality at NVIDIA. “NVIDIA is fully engaged at Khronos on building a new standard that drives wider adoption and cross-platform content for VR.”

“Khronos’ open APIs have been immensely valuable to the industry, balancing the forces of differentiation and innovation against gratuitous vendor incompatibility. As virtual reality matures and the essential capabilities become clear in practice, a cooperatively developed open standard API is a natural and important milestone. Oculus is happy to contribute to this effort,” said John Carmack, CTO, Oculus VR.

“Virtual reality’s success is dependent on a large thriving market of hardware where casual and professional consumers alike can take their pick without worry of fragmentation and incompatibility,” said Christopher Mitchell, OSVR business lead, Razer. “This has been OSVR’s vision from day one and we are thrilled to be a part of the Khronos Group in order to push standardization of interfaces across the industry.”

“As a market leader in eye tracking, Tobii has invested heavily in developing eye-tracking technologies for VR and welcomes this VR standardization initiative at Khronos,” said Johan Hellqvist, vice president of products and integration, Tobii. “Foveated rendering and gaze interaction are key to the VR experience, and Khronos’ efforts to standardize APIs for developers focusing on VR will ensure proliferation of content and richness of the VR ecosystem.”

“The number of VR systems on the market is growing rapidly. Most of these require separate API support from the developer, which is causing huge fragmentation for consumers,” said Gabe Newell of Valve. “Khronos’ work on a standard API to enable applications to target a wide variety of VR devices is an important step to counter that trend.”

“VR is a complex amalgam of almost everything in the modern-day pixel-pushing pipeline, from powerful GPUs to machine vision processing to advanced display controller technology,” said Weijin Dai, executive vice president and general manager of VeriSilicon's Intellectual Property Division. “As a significant provider of these technologies we are thrilled that Khronos has taken on the task of creating a comprehensive VR standard and we intend to support this effort fully.”

“Open standards which allow developers to more easily create compelling, cross-platform experiences will help bring the magic of VR to everyone. We look forward to working with our industry colleagues on this initiative,” said Mike Jazayeri, director of product management, Google VR.

About The Khronos Group

The Khronos Group is an industry consortium creating open standards to enable the authoring and acceleration of parallel computing, graphics, vision and neural nets on a wide variety of platforms and devices. Khronos standards include Vulkan™, OpenGL®, OpenGL® ES, OpenGL® SC, WebGL™, SPIR-V™, OpenCL™, SYCL™, OpenVX™, NNEF™, COLLADA™, and glTF™. Khronos members are enabled to contribute to the development of Khronos specifications, are empowered to vote at various stages before public deployment, and are able to accelerate the delivery of their cutting-edge accelerated platforms and applications through early access to specification drafts and conformance tests.

###

Khronos, Vulkan, DevU, SPIR, SPIR-V, SYCL, WebGL, WebCL, COLLADA, OpenKODE, OpenVG, OpenVX, EGL, glTF, OpenKCAM, StreamInput, OpenWF, OpenSL ES, NNEF and OpenMAX are trademarks of the Khronos Group Inc. ASTC is a trademark of ARM Holdings PLC, OpenCL is a trademark of Apple Inc. and OpenGL is a registered trademark and the OpenGL ES and OpenGL SC logos are trademarks of Silicon Graphics International used under license by Khronos. All other product names, trademarks, and/or company names are used solely for identification and belong to their respective owners.

AImotive Introduces Level 5 Self-Driving Automotive Technology Powered By Artificial Intelligence To U.S. Market

Formerly Known as AdasWorks, AImotive Is First to Enable an AI Ecosystem for Autonomous Driving Regardless of Location, Driving Style or Driving Conditions

MOUNTAIN VIEW, Calif. and BUDAPEST – November 14, 2016 – AdasWorks, the leader in AI-powered motion, today announced the company has changed its name to AImotive (www.aimotive.com), effective immediately. AImotive delivers a full-stack technology solution and the most powerful artificial intelligence software for the automotive industry, designed to provide self-driving vehicles better safety, improved comfort and increased productivity. The new company name reflects the broader vision of AImotive to bring global accessibility to self-driving vehicles, faster and safer than any other company in the world. The AImotive product suite delivers the robust technology required to operate self-driving vehicles in all conditions, and adapts in real time to different driving styles and cultures. In addition to the name change, AImotive also announced its expansion into the U.S. market with the opening of a new office in Mountain View, California.

“AImotive has experienced phenomenal growth since our launch in 2015, growing from 15 engineers to more than 120 researchers and developers, including 16 PhDs,” said Laszlo Kishonti, CEO and founder of AImotive. “Our team of AI and machine learning experts approach the driving experience from a global perspective, and recognize the need to create a quality experience across diverse climates, driving styles and cultures. Our new name, AImotive, allows us to better highlight the unique capabilities we bring to autonomous driving, specifically a more affordable, accessible self-driving solution capable of powering L5 vehicles safely in any condition.”

AImotive offers the only L5 architecture capable of robust scalability across the global market, utilizing cameras as the primary sensors for greater affordability and accessibility. Unlike other solutions on the market, AImotive's full-stack software does not require a specific chip, and it uses the power of artificial intelligence to “see” fine detail and predict behavior, making it easier to manage common driving concerns such as poor visibility and adverse conditions. AImotive's training technique is also scalable, with a real-time simulator tool that trains the AI for a wide variety of traffic scenarios and weather conditions.

The World’s First AI Ecosystem for Autonomous Driving with a Customizable Software Framework

AImotive provides a full spectrum of automated driving functionality, enabling OEMs to move faster and more efficiently into fully autonomous car production. The AImotive suite of products includes:

  • aiDrive: A full technology stack comprising a Recognition Engine, Location Engine, Motion Engine and Control Engine. The Recognition Engine is a continuously learning engine that combines and analyzes sensor data with AImotive’s pixel-precise segmentation tool to recognize up to 100 different object classes, such as pedestrians, bicycles, animals, buildings and obstacles. The Location Engine provides a globally scalable solution for precise self-localization of the vehicle: rather than requiring HD maps, it uses 3D landmark point data on top of conventional GPS positioning. The Motion Engine enables real-time tracking of moving objects, predicting their future speed, location and behavior, and allowing for optimal routing of the car even in emergency situations. The Control Engine is the execution component that manages acceleration, braking, steering, gear shifting, etc., as well as auxiliary functions such as turn signals, headlights and the car horn.
  • aiKit: Incorporates a complete ecosystem of tools that accelerate the training of the AI by enabling faster data collection and annotation, complete testing with real-time simulation of varied driving conditions, and validated over-the-air updates for maximum travel safety.
  • aiWare: Performance-efficient hardware design for automotive embedded solutions, offering low power consumption, high bandwidth and low-latency Neural Network (NN) computation. While the goal of aiWare is to help bring truly efficient AI hardware to market more quickly, aiDrive itself is designed to be processor-agnostic, allowing for seamless integration with GPU-, FPGA- or embedded-technology-based systems.

Launched in 2015, AImotive has raised $10.5 million USD in seed and Series A funding to date, from investors Robert Bosch Venture Capital, Nvidia, Inventure Oy, Draper Associates, Day One Capital Fund Management and Tamares.

About AImotive

AImotive (www.AImotive.com) is primed to bring global accessibility to self-driving vehicles, faster and safer than any other company in the world. As the first to deliver AI-powered enablement of Level 5 self-driving vehicles, AImotive provides OEMs the necessary global scalability, accessibility and safety to rapidly meet the needs of billions of people all over the world. The company’s suite of products, including aiDrive, aiKit and aiWare, delivers the robust technology required to operate self-driving vehicles in all conditions, and is capable of adapting in real time to different driving styles and cultures. AImotive is committed to improving the lives of people worldwide with an affordable, safe and comfortable self-driving environment.

AImotive is a privately held company, headquartered in Budapest with offices in Mountain View, California. Follow @AI_motive on Twitter, like us on Facebook, or learn more at www.AImotive.com.

Xenko game engine announces Vulkan graphics API implementation

Tokyo, Japan, (October 14, 2016) – Silicon Studio is pleased to announce that we have implemented Vulkan in our cross-platform open source game engine, Xenko. Vulkan is the next generation graphics API from the Khronos Group, and we are excited to be one of the first commercially available game engines to support Vulkan!

Vulkan is a new, low-overhead 3D graphics application programming interface (API), now a public standard providing high-efficiency, cross-platform access to the modern graphics processing units (GPUs) used in a wide variety of devices. Vulkan has been designed to work on PCs, consoles, mobile devices and embedded platforms. It mostly targets real-time 3D graphics applications such as video games, offering higher performance and lower central processing unit (CPU) usage. In addition to enabling lower CPU usage, Vulkan also reduces power consumption and is able to better distribute work amongst multiple CPU cores.


According to Mr. Terada, CEO of Silicon Studio, “It was straightforward for our graphics team to implement Vulkan, thanks to the architecture of our powerful, cross-platform game engine, Xenko. Our internal graphics system is well-suited to the new Vulkan API. Xenko has been built to support multi-threaded processing from its inception. We believe this is the first game engine of its kind to be fully Vulkan enabled from the start.”

Adds Neil Trevett, president of Khronos, "Khronos is excited about the rapidly growing support for Vulkan in the industry and is pleased to see Silicon Studio's Xenko join the growing list of game engines that support Vulkan. We're eagerly looking forward to the first official Xenko release with Vulkan support."

###

Xenko is a trademark of Silicon Studio Corporation.
All other names and trademarks mentioned are the registered trademarks and property of the respective companies.

Khronos Launches Dual Neural Network Standard Initiatives

Industry Call for Participation in new Neural Network Exchange Format working group; OpenVX standard for vision processing releases Neural Network extension

October 4th, 2016 – San Francisco, CA – The Khronos Group, an open consortium of leading hardware and software companies, today announced the creation of two standardization initiatives to address the growing industry interest in the deployment and acceleration of neural network technology. Firstly, Khronos has formed a new working group to create an API-independent standard file format for exchanging deep learning data between training systems and inference engines. Work on generating requirements and detailed design proposals for the Neural Network Exchange Format (NNEF™) is already underway, and companies interested in participating are welcome to join Khronos for a voice and a vote in the development process. Secondly, the OpenVX™ working group has released an extension to enable Convolutional Neural Network topologies to be represented as OpenVX graphs and mixed with traditional vision functions.

Neural network technology has seen recent explosive progress in solving pattern matching tasks in computer vision such as object recognition, face identification, image search, and image to text, and is also playing a key part in enabling driver assistance and autonomous driving systems. Convolutional Neural Networks (CNN) are computationally intensive, and so many companies are actively developing mobile and embedded processor architectures to accelerate neural network-based inferencing at high speed and low power. As a result of such rapid progress, the market for embedded neural network processing is in danger of fragmenting, creating barriers for developers seeking to configure and accelerate inferencing engines across multiple platforms.

About the Neural Network Exchange Format (NNEF)
Today, most neural network toolkits and inference engines use proprietary formats to describe the trained network parameters, making it necessary to construct many proprietary importers and exporters to enable a trained network to be executed across multiple inference engines. The Khronos Neural Network Exchange Format (NNEF) is designed to simplify the process of creating a network with one tool and running that trained network on other toolkits or inference engines. This can reduce deployment friction and encourage a richer mix of cross-platform deep learning tools, engines and applications.

The NNEF standard encapsulates neural network structure, data formats, commonly used operations (such as convolution, pooling, normalization, etc.) and formal network semantics. This enables the essentials of a trained network to be reliably exported and imported across tools and engines. NNEF is purely a data interchange format and deliberately does not prescribe how an exported network has been trained, or how an imported network is to be executed. This ensures that the data format does not hinder innovation and competition in this rapidly evolving domain. More information on the NNEF initiative is available at the NNEF Home Page.

About the OpenVX Neural Network Extension
The OpenVX Neural Network extension specifies an architecture for executing CNN-based inference in OpenVX graphs. The extension defines a multi-dimensional tensor object data structure which can be used to connect neural network layers, represented as OpenVX nodes, to create flexible CNN topologies. OpenVX neural network layer types include convolution, pooling, fully connected, normalization, soft-max and activation – with nine different activation functions. The extension enables neural network inferencing to be mixed with traditional vision processing operations in the same OpenVX graph.
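To give a flavor of how layers appear as nodes connected by tensors, here is a pseudocode-style C fragment. It is illustrative only and not taken from the specification: the extension header name and the exact signatures of the tensor and layer functions (`vxCreateTensor`, `vxConvolutionLayer`, `vxActivationLayer`) are assumptions and should be checked against the released provisional headers.

```c
#include <VX/vx.h>
#include <VX/vx_khr_nn.h>  /* provisional NN extension header; name assumed */

/* Sketch: connect a convolution layer to a ReLU activation layer inside
 * an ordinary OpenVX graph, with tensors as the edges between nodes. */
void build_cnn_fragment(vx_context context, vx_tensor input,
                        vx_tensor weights, vx_tensor biases)
{
    vx_graph graph = vxCreateGraph(context);

    /* Multi-dimensional tensor objects connect the layers
     * (dimensions and fixed-point format chosen for illustration). */
    vx_size dims[4] = {224, 224, 64, 1};
    vx_tensor conv_out = vxCreateTensor(context, 4, dims, VX_TYPE_INT16, 8);
    vx_tensor relu_out = vxCreateTensor(context, 4, dims, VX_TYPE_INT16, 8);

    /* Layer nodes; parameter lists abbreviated/assumed for illustration. */
    vxConvolutionLayer(graph, input, weights, biases, NULL, 0, conv_out);
    vxActivationLayer(graph, conv_out, VX_NN_ACTIVATION_RELU, 0, 0, relu_out);

    /* Traditional vision nodes could be added to the same graph, then: */
    vxVerifyGraph(graph);
    vxProcessGraph(graph);
}
```

Because the layers are ordinary graph nodes, an implementation is free to fuse or accelerate them however it likes, which is the portability point the extension is making.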

Today, OpenVX has also released an Import/Export extension that complements the Neural Network extension by defining an API to import and export OpenVX objects, such as traditional computer vision nodes, data objects of a graph or partial graph, and CNN objects including network weights and biases or complete networks.

The high-level abstraction of OpenVX enables implementers to accelerate a dataflow graph of vision functions across a diverse array of hardware and software acceleration platforms. The inclusion of neural network inferencing functionality in OpenVX enables the same portable, processor-independent expression of functionality with significant freedom and flexibility in how that inferencing is actually accelerated. The OpenVX Neural Network extension is released in provisional form to enable developers and implementers to provide feedback before finalization; industry feedback is welcomed at the OpenVX Forums. More details on OpenVX and the new extensions can be found at the OpenVX Home Page.

Khronos is coordinating its neural network activities, and expects that NNEF files will be able to represent all aspects of an OpenVX neural network graph, and that OpenVX will enable import of network topologies via NNEF files through the Import/Export extension, once the NNEF format definition is complete.

Industry Support
“AdasWorks initiated the creation of the NNEF working group as we saw the growing need for platform-independent neural network-based software solutions in the autonomous driving space. We cooperate closely with chip companies to help them build low-power, high-performance neural network hardware and believe firmly that an industry standard, which works across multiple platforms, will be beneficial for the whole market. We are happy to see numerous companies joining the initiative,” said Laszlo Kishonti, founder and CEO of AdasWorks.

“AMD fully supports the development of open standards and is currently the only company with an open-source version of OpenVX. We support the creation of OpenVX extensions and data formats related to neural networks such as CNNs in computer vision and related applications,” said Mike Mantor, corporate fellow and CTO, Radeon Technologies Group, AMD.

“Cadence has been investing heavily in tools for OpenVX and CNN programming to accelerate adoption of our market-leading Tensilica Vision DSPs,” said Dino Bekis, vice president of product marketing for the IP Group at Cadence. “Khronos’ efforts to standardize a universal CNN description exchange format will speed the availability of universal tools for converting trained CNNs to the inference domain. The extensions to OpenVX graph descriptions will enable more seamless deployment of both imaging and vision algorithms in deeply embedded devices.”

“As CNNs are becoming key to vision processing, Imagination is delighted to participate in Khronos’ neural net initiatives. Our PowerVR GPUs have supported OpenVX since its inception and we’ve already demonstrated CNNs running on PowerVR GPUs. The extension of OpenVX to support CNNs will provide a framework to make it easy for our customers to deploy vision applications using CNNs on new and existing PowerVR based SoCs,” said Chris Longstaff, Senior Director of Product and Technology Marketing, PowerVR, Imagination Technologies.

"Intel supports and welcomes the adoption of OpenVX and the OpenVX Neural Network Extension as an important element in proliferating computer vision deep learning usage models," said Ron Friedman, Intel Corporate vice president and general manager of IP Blocks and Technologies. "Khronos OpenVX Neural Network Extension brings algorithms tuned for deep learning to the embedded computer vision and machine intelligence hardware devices."

“We see increasingly more real-life problems getting solved with neural network technologies,” said Victor Erukhimov, CEO of Itseez3D, Inc. and the chair of the OpenVX working group. “Efficient implementation of neural network inference on embedded devices will enable a wide variety of applications for mobile phones, AR/VR and automotive safety.”

“We have seen a significant increase in the use of neural nets across a broad range of applications including vision processing for ADAS and financial market prediction. The introduction of Khronos APIs in this domain is a significant step towards standardization, bringing the technology to a wider developer community. Mobica is excited to be working with Khronos and other partners on this technological advance,” said Mobica's CTO, Jim Carroll.

“As an active working group member and one of the earliest OpenVX adopters, VeriSilicon is excited to see Khronos extend its support to deep learning and neural networks,” said Shanghung Lin, Vice President for Vision and Image Product Development at VeriSilicon. “Programmability and interoperability between vision functions and the Neural Net extension makes OpenVX a perfect programming interface for VeriSilicon’s VIP8000 ultra-low-power, scalable vision processor solution, which combines neural network engines, OpenVX optimized shader programming engines, and a special interconnect logic called tensor processing fabric to allow collaborative computing for vision and neural net technology. VeriSilicon looks forward to participating in the Khronos NNEF working group to bridge the disparate market of deep learning frameworks and toolkits. A simple and standard neural net format is imperative to facilitate users choosing their favorite training tools and deploying the trained network to different inference engines in different applications.”

About The Khronos Group
The Khronos Group is an industry consortium creating open standards to enable the authoring and acceleration of parallel computing, graphics, vision and neural nets on a wide variety of platforms and devices. Khronos standards include Vulkan™, OpenGL®, OpenGL® ES, OpenGL® SC, WebGL™, OpenCL™, SPIR™, SPIR-V™, SYCL™, WebCL™, OpenVX™, EGL™, COLLADA™, and glTF™. Khronos members are enabled to contribute to the development of Khronos specifications, are empowered to vote at various stages before public deployment, and are able to accelerate the delivery of their cutting-edge accelerated platforms and applications through early access to specification drafts and conformance tests.

###


Early Access to the SYCL Open Standard for C++ Acceleration

A new generation of technology is coming for low power devices with embedded intelligence. At Codeplay, we believe that the full performance of today's highly parallel devices should not be vendor-locked to one platform but should be available to all programmers using open standards.

To help steer this evolution, Codeplay is giving developers free, early access to ComputeCpp™ with a pre-conformance beta implementation of the SYCL™ open standard, along with an open-source preview of the latest Parallel Technical Specification to be adopted into C++17. Other open-source projects being made available are VisionCpp, a machine vision library demonstrating C++ techniques for performance-portability, and an early version of the Eigen C++ library that uses SYCL for acceleration on OpenCL devices. Eigen is used in projects like TensorFlow.

The SYCL standard provides acceleration using OpenCL™ devices, meaning you can accelerate your C++ software on a wide range of platforms across many market segments, such as automotive ADAS, cloud compute, IoT vision and cellular. This leads to better performance portability at the higher layers for machine vision, neural networks and deep learning.

The SYCL specification from the Khronos™ Group is for developers who want to take software written using C++ single-source programming models such as CUDA® or C++ AMP and port it to a wide range of OpenCL™ devices. The current early-access ComputeCpp Community Edition Beta release provides pre-conformance SYCL v1.2 support for AMD® and Intel® OpenCL GPUs and CPUs. Further operating system and device support is on its way. More information on the specification from Khronos is available here: https://www.khronos.org/sycl

“SYCL is designed to not lock your software development into one platform,” said Andrew Richards, CEO of Codeplay. “Our ComputeCpp Community Edition Beta will let developers work on acceleration of open-source C++ software such as TensorFlow, Eigen and the C++17 Parallel STL.”

"SYCL is one of the strongest basis from which the ISO C++ Standard will adapt future C++ support for heterogeneous computing, and it is one of the few candidates that is designed from the base up to work with C++ templates and abstractions for low power and low-latency massive parallel dispatch" Michael Wong, VP R&D at Codeplay, Chair of C++ Standard SG14 on Low Latency and SG5 on Transactional Memory, ISOCPP VP and Director

To further stimulate the SYCL community, Codeplay has also set up http://sycl.tech, where anyone can post projects, videos, news or tutorials. This site is for everything related to SYCL, whether it uses ComputeCpp or not.

Codeplay is presenting and available for discussions this week at AutoSens (http://auto-sens.com/) and CppCon (http://cppcon.org/) and will also be at the British Machine Vision Conference 2016, in York, UK. To download ComputeCpp Community Edition Beta or contact Codeplay, visit https://codeplay.com/

###

Khronos, SPIR and SYCL are trademarks of the Khronos Group Inc. OpenCL is a trademark of Apple Inc. used under license by Khronos. CUDA is a trademark of NVIDIA Corporation. Intel is a trademark of Intel Corporation. AMD is a trademark of Advanced Micro Devices, Inc.

Khronos Establishes Advisory Panel to Create Design Guidelines for Safety Critical APIs

Targeted at markets such as automotive, robotics and avionics;
Open to Khronos members and invited experts

July 26th 2016 – SIGGRAPH, Anaheim, CA – The Khronos Group, an open consortium of leading hardware and software companies, today announced the formation of a Safety Critical Advisory Panel to create guidelines for the design of safety critical graphics, compute and vision processing APIs. The Safety Critical Advisory Panel will be open to both Khronos members and invited experts from the industry. Markets such as Advanced Driver Assistance Systems (ADAS), autonomous vehicles, robotics and avionics increasingly need advanced acceleration APIs that are designed to provide reliable operation and enable system safety certification. The guidelines will be openly published and adopted as part of Khronos’ proven API design process. Experienced practitioners in the field of safety critical system design are invited to apply for Advisory Panel membership, at no cost, with more details at the Khronos Safety Critical working group page.

“Visual computing acceleration will be a vital component of many emerging safety critical markets, and so the industry needs a new generation of hardware APIs that enable access to advanced silicon capabilities in certifiable systems,” said Neil Trevett, president of the Khronos Group and vice president at NVIDIA. “The Safety Critical Advisory Panel will build on the experience of creating the new generation OpenGL SC 2.0 API, plus we are inviting industry experts to assist in creating pragmatic guidelines to enable effective safety critical API design – both within Khronos and throughout the industry.”

In April 2016, Khronos released the OpenGL SC 2.0 API specification to address the unique and stringent requirements of high reliability display system markets, including FAA DO-178C and EASA ED-12C Level A for avionics, and ISO 26262 safety standards for automotive. OpenGL SC 2.0 enables high reliability system manufacturers to take advantage of modern graphics programmable shader engines while still achieving the highest levels of safety certification. Khronos expects that several additional Khronos working groups, including Vulkan, OpenCL and OpenVX will adopt the safety critical guidelines when designing future APIs that will enable similar levels of certification.

About The Khronos Group

The Khronos Group is an industry consortium creating open standards to enable the authoring and acceleration of parallel computing, graphics, vision, sensor processing and dynamic media on a wide variety of platforms and devices. Khronos standards include Vulkan™, OpenGL®, OpenGL® ES, WebGL™, OpenCL™, SPIR™, SPIR-V™, SYCL™, WebCL™, OpenVX™, EGL™, COLLADA™, and glTF™. All Khronos members are enabled to contribute to the development of Khronos specifications, are empowered to vote at various stages before public deployment, and are able to accelerate the delivery of their cutting-edge media platforms and applications through early access to specification drafts and conformance tests.

###

Khronos, Vulkan, DevU, SPIR, SPIR-V, SYCL, WebGL, WebCL, COLLADA, OpenKODE, OpenVG, OpenVX, EGL, glTF, OpenKCAM, StreamInput, OpenWF, OpenSL ES and OpenMAX are trademarks of the Khronos Group Inc. ASTC is a trademark of ARM Holdings PLC, OpenCL is a trademark of Apple Inc. and OpenGL is a registered trademark and the OpenGL ES and OpenGL SC logos are trademarks of Silicon Graphics International used under license by Khronos. All other product names, trademarks, and/or company names are used solely for identification and belong to their respective owners.

Khronos Showcases Significant glTF Momentum for Efficient Transmission of 3D Scenes and Models

Open source glTF Validator and glTF 1.0.1 Specification released;
glTF MIME Type Approved by IANA; Multiple importers and translators available

July 22nd 2016 – Web3D Conference, Anaheim, CA – The Khronos™ Group, an open consortium of leading hardware and software companies, today announced significant momentum behind the glTF™ (GL Transmission Format) royalty-free specification for the transmission and loading of 3D content. Since the launch of glTF 1.0 in September 2015, Khronos has released an open source glTF validator, commenced community review of the glTF 1.0.1 specification that incorporates industry feedback for enhanced interoperability, successfully registered glTF as a MIME type with IANA and has catalyzed a growing array of importers, translators and tools supporting the glTF standard. More information on glTF specifications and activities is available on the Khronos website.

“The world has long needed an efficient, usable standard for 3D scenes that sits at the level of common image, audio, video, and text formats. Not an authoring format, or necessarily a format you would use for a hyper optimized platform specific application, but something at home on the internet, capable of being directly created and consumed by many different applications,” said John Carmack, CTO of Oculus.

glTF is a vendor- and runtime-neutral asset delivery format that minimizes the size of 3D scenes and models, and optimizes runtime processing by interactive 3D applications using WebGL™ and other APIs. glTF creates a common publishing format for 3D content tools and services, analogous to the JPEG format for images. The format combines an easily parsable JSON scene and material description with the binary geometry, texture and animation data it references. glTF is extensible to handle diverse use cases, and already-available extensions include binary scene descriptions and high-precision rendering for geospatial applications.
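To make the two-part structure concrete, the skeleton of a glTF 1.0-style asset can be sketched as plain JSON built and round-tripped in Python. This is a hypothetical, heavily trimmed illustration, not a conformant asset: real glTF files carry many more required properties, and every name here ("mainScene", "triangleBuffer", the file `triangle.bin`) is invented for the example.

```python
import json

# Hypothetical, heavily simplified glTF 1.0-style asset: a JSON scene and
# material description that points at an external binary buffer. glTF 1.0
# keys its objects by name; all names below are illustrative only.
asset = {
    "asset": {"version": "1.0"},
    "scene": "mainScene",
    "scenes": {"mainScene": {"nodes": ["rootNode"]}},
    "nodes": {"rootNode": {"meshes": ["triangleMesh"]}},
    "meshes": {"triangleMesh": {
        "primitives": [{"attributes": {"POSITION": "positionAccessor"}}]}},
    "accessors": {"positionAccessor": {
        "bufferView": "geometryView", "count": 3, "type": "VEC3"}},
    "bufferViews": {"geometryView": {
        "buffer": "triangleBuffer", "byteOffset": 0, "byteLength": 36}},
    "buffers": {"triangleBuffer": {"uri": "triangle.bin", "byteLength": 36}},
}

text = json.dumps(asset)   # the easily parsable JSON part...
parsed = json.loads(text)  # ...round-trips through any standard JSON parser
print(parsed["buffers"]["triangleBuffer"]["uri"])  # triangle.bin
```

The JSON side stays human-readable and toolable, while the heavy geometry payload lives in the referenced binary file.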

glTF 1.0.1 tightens the specification’s wording to aid asset validation and foster a robust, interoperable ecosystem. The changes include minor updates to corner cases for accessors, buffers, techniques, and other glTF properties. The released draft is open for community review and will be finalized after implementation and industry feedback. For details and discussion, see the GitHub project page for glTF 1.0.1 discussions.

“glTF has been embraced by the industry as it fills a real and growing need to bring 3D assets quickly and efficiently to a wide variety of platforms and devices. Fast-growing industries such as augmented and virtual reality will use the foundation of a widely accepted 3D format to enable seamless content distribution and end-user experiences,” said Neil Trevett, president of the Khronos Group, vice president at NVIDIA, and chair of the Khronos 3D Formats Working Group.

The new glTF Validator is an open source, cross-platform tool that analyzes whether a glTF 1.0.1 asset is valid according to the specification and schema and, if it is not, reports exactly what is invalid. The glTF Validator will be critical to interoperability between tools and applications, as it can be used to ensure that all glTF assets are correctly formed. It is available today as a command-line tool and a drag-and-drop web front end, with a client-side JavaScript library coming soon. Source and more details can be found on the GitHub page for the glTF Validator.
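The core idea behind such a validator can be sketched as a cross-reference check over the name-keyed dictionaries of a glTF 1.0-style asset. This toy (plain Python; all data is invented) checks only one rule of the many the real Validator enforces:

```python
def check_buffer_references(gltf):
    """Toy check in the spirit of the glTF Validator: every bufferView must
    name an existing buffer, and every accessor an existing bufferView.
    (The real tool validates schemas, enums, byte ranges, and much more.)"""
    errors = []
    buffers = gltf.get("buffers", {})
    views = gltf.get("bufferViews", {})
    for name, view in views.items():
        if view.get("buffer") not in buffers:
            errors.append(f"bufferView '{name}': unknown buffer '{view.get('buffer')}'")
    for name, acc in gltf.get("accessors", {}).items():
        if acc.get("bufferView") not in views:
            errors.append(f"accessor '{name}': unknown bufferView '{acc.get('bufferView')}'")
    return errors

# A well-formed asset produces no errors; a dangling reference is reported.
good = {"buffers": {"b0": {}}, "bufferViews": {"v0": {"buffer": "b0"}},
        "accessors": {"a0": {"bufferView": "v0"}}}
bad = {"buffers": {}, "bufferViews": {"v0": {"buffer": "b0"}}, "accessors": {}}
print(check_buffer_references(good))       # []
print(len(check_buffer_references(bad)))   # 1
```

Machine-checkable rules like this are what let independently written exporters and engines trust each other's files.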

‘MIME types’ identify the type of information that a file contains. Khronos’ successful registration of glTF as a MIME type with the Internet Assigned Numbers Authority (IANA) is a significant step in ensuring that glTF files can be reliably and correctly identified and recognized across diverse markets and ecosystems. Established MIME types include image/jpeg, audio/mpeg, and video/mp4; the new model/gltf+json MIME type finally recognizes 3D as a widely usable class of content.
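In practice this means applications and servers can map the `.gltf` extension to a well-known content type. A minimal sketch using Python's standard `mimetypes` module (the explicit registration call is needed because Python's built-in table predates the IANA entry; the filename is illustrative):

```python
import mimetypes

# Teach the local MIME registry the IANA-registered glTF type.
mimetypes.add_type("model/gltf+json", ".gltf")

content_type, encoding = mimetypes.guess_type("scene.gltf")
print(content_type)  # model/gltf+json
```

A web server configured this way can serve glTF assets with the correct `Content-Type` header, so browsers and tools recognize them without sniffing.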

The glTF specification is being openly developed with the specification and source of multiple converters and loaders freely available on GitHub. Since glTF’s launch, the amount of industry support has grown significantly to include:

  • Direct export from tools such as Blender;
  • Translators from diverse formats such as FBX, COLLADA, OBJ, and OpenStreetMap;
  • Support in the Open Asset Import Library (Assimp);
  • Direct import into engines including three.js, Microsoft’s Babylon.js, Cesium, X3DOM, xeoEngine, PEX and the A-Frame framework for WebVR;
  • A community-generated glTF Reference Card by Marco Hutter.

More details are on the GitHub project page for glTF tools.

Work is already underway to evolve and expand glTF’s capabilities. Extensions in development include sophisticated streaming of very large 3D CAD models from Fraunhofer IGD and advanced 3D mesh compression using 3DGC technology from the MPEG Consortium. Potential future core specification features include definition of physically based rendering (PBR) materials, morph targets and support for the upcoming WebGL 2.0 standard. Anyone is welcome to join the discussion on the GitHub project page for glTF.

“glTF is the result of a multi-year effort to design an open, interoperable format for sharing 3D graphics. The level of community effort and industry adoption we have seen in the few months since its initial ratification show the huge promise of an open format for sharing 3D everywhere,” said Tony Parisi, virtual reality pioneer and co-editor of the glTF specification.

glTF at Web3D and SIGGRAPH Conferences 22-28 July, Anaheim, CA
There are multiple presentations and demonstrations showcasing WebGL, glTF and other Khronos APIs between July 22nd-28th at the Web3D and SIGGRAPH Conferences in Anaheim, CA.

Industry Support for glTF

“glTF adds standardization and web portability for OpenGL-based viewing and processing tools, which overall makes sharing immersive digital experiences much easier,” said Stefano Corazza, senior principal scientist at Adobe.

“The Augmented Reality for Enterprise Alliance (AREA) congratulates the Khronos Group on the launch of glTF. The increasing momentum and acceptance of glTF is another important step in the development of the AR-in-enterprise ecosystem and wider 3D industries,” said Mark Sage, Executive Director of AREA.

“Unlocking 3D content from proprietary desktop applications to the cloud creates massive new opportunities for collaboration. Designers can share their work much earlier in the process, makers can show what their objects will look like before being printed, educators can incorporate interactive elements into the courses they produce, and much more. This future is so close we can feel it: the hardware is capable, the browsers are capable; now if only we could solve the content pipeline. Having an interoperable standard for tools manufacturers and engine developers to work against is a huge step. Go glTF!” said Ross McKegney, Platform at Box.

“glTF has become the foundation for 3D geospatial visualization on the web, from SmartCities to flight simulators, and is a core component of 3D Tiles for streaming massive models,” said Patrick Cozzi, Principal Graphics Architect, Cesium.

“With the growing computational power of modern graphics cards and better approximations, physically-based rendering (PBR) is becoming an exciting trend in real-time graphics. Now the researchers of Fraunhofer IGD are bringing this new trend to the web with glTF! The main goal behind PBR is to follow real physical laws, so materials will look accurate and consistent in all lighting conditions without changing an immense list of parameters and settings. glTF is the container for this new kind of web technology,” said Johannes Behr, head of competence center Visual Computing System Technologies at Fraunhofer IGD.

“We clearly see a big momentum about glTF in the Babylon.js community. This is why we keep improving our glTF loader to be sure to respond to our user needs,” said David Catuhe, principal program manager at Microsoft and author of babylon.js.

“OTOY believes glTF will become the industry standard for compact and efficient 3D mesh transmission, much as JPEG has been for images. To that end, glTF, in tandem with Open Shader Language, will become core components in the ORBX scene interchange format, and fully supported in over 24 content creation tools and game engines powered by OctaneRender,” said Jules Urbach, Founder & CEO of OTOY.

"Web3D Consortium members look forward to continuing progress between Fraunhofer IGD's Shape Resource Container (SRC) compression and progressive-mesh streaming as an essential application of glTF capabilities. SRC is already a central aspect of Extensible 3D (X3D) Graphics evolution for the Web. 3D Printing and 3D Scanning are opening up further domains for common improvement," said Don Brutzman, X3D working group co-chair.

About The Khronos Group

The Khronos Group is an industry consortium creating open standards to enable the authoring and acceleration of parallel computing, graphics, vision, sensor processing and dynamic media on a wide variety of platforms and devices. Khronos standards include Vulkan™, OpenGL®, OpenGL® ES, WebGL™, OpenCL™, SPIR™, SPIR-V™, SYCL™, WebCL™, OpenVX™, EGL™, COLLADA™, and glTF™. All Khronos members are enabled to contribute to the development of Khronos specifications, are empowered to vote at various stages before public deployment, and are able to accelerate the delivery of their cutting-edge media platforms and applications through early access to specification drafts and conformance tests.

###

Khronos, Vulkan, DevU, SPIR, SPIR-V, SYCL, WebGL, WebCL, COLLADA, OpenKODE, OpenVG, OpenVX, EGL, glTF, OpenKCAM, StreamInput, OpenWF, OpenSL ES and OpenMAX are trademarks of the Khronos Group Inc. ASTC is a trademark of ARM Holdings PLC, OpenCL is a trademark of Apple Inc. and OpenGL is a registered trademark and the OpenGL ES and OpenGL SC logos are trademarks of Silicon Graphics International used under license by Khronos. All other product names, trademarks, and/or company names are used solely for identification and belong to their respective owners.

The Future is Parallel: Exclusive OpenCL and Parallel Computing Training Offered in Sunnyvale and Toronto

YetiWare Inc. offers a unique opportunity for professional software developers to learn innovative techniques focused on OpenCL and parallel data processing.

TORONTO (PRWEB) July 7, 2016 – For the first time, YetiWare Inc. is inviting software developers to attend three days of specialized training in OpenCL and parallel data processing, previously available only to corporate clients.

OpenCL, the future of parallel data processing, is supported by all major processor companies and leveraged by major corporations to reduce operational costs and obtain peak software performance. Despite the importance of parallel programming and OpenCL, there are few educational opportunities available for professional software developers interested in advancing their skills and programming abilities.

The YetiWare program is being offered at two locations, in either Toronto, Canada or Sunnyvale, California:

Toronto (downtown location):

Sunnyvale, California:

*Note: The optional review day is available at either location for developers who want to brush up on their skills in advance of the three-day core course.

Training sessions are led by AJ Guillon. AJ is a subject matter expert and contributor to the Khronos OpenCL specification, developed by major software and processor companies (including Intel, AMD, Apple, Adobe, NVIDIA, and many others) to provide a common way to write parallel software that works on any processor, including a CPU, GPU, or FPGA.

"The traditional thinking has been that parallel programming is difficult to learn. But AJ and his team have put together clear and effective course material that takes participants from learning fundamentals to delivering real life, relevant applications quickly. As a result, participants are able to reap the benefits from the course quickly," says I-Cheng Chen, P.Eng., Fellow, Platform Architecture, Advanced Micro Devices Inc. "The use of moderated lab sessions and opportunity to expand on existing applications makes it much more conducive for participants to make effective use of OpenCL in short order."

"Computer processor design has changed drastically in the past 20 years, yet most developers continue to write software according to a simplified model that results in poor application performance and disappointing user experiences," AJ Guillion tells his students. "Today, all major computer processors are parallel, and these parallel processors are in everything from smart phones, to desktop PCs, to supercomputers. Despite the ubiquity of parallel processors, few software developers know how to program them. The goals of this specialized training course is to give programmers the tools they need to do just that."

"AJ's course on OpenCL is a cut above other courses in that he brings with him tremendous background and experience, and delivers it in an easily understandable way that is reachable for the novice, but still remains interesting for experts. This is more than just a one-way presentation, but a true tutorial with labs and hands-on components that truly improves your skill in this domain using skills that are now highly in demand across many industries, making this course pay for itself," says Michael Wong, former CEO OpenMP, ISOCPP.org Director, VP, Research & Development, Codeplay Software. "He also makes it fun with anecdotes and entertaining stories."

A full course description, fees, and further information are available from YetiWare.

About YetiWare Inc

YetiWare's mission is to solve the big hairy parallel programming problem and to bring back software scaling with minimal programmer effort. Raw processing power in everything from mobile phones, to PCs, to supercomputers grows every year with more cores, wider SIMD lanes, and additional hardware accelerators such as GPUs and FPGAs. Today it is extremely complicated to write applications that scale so that consumers directly benefit by upgrading to the latest computer processors. YetiWare's mission is to provide a revolutionary solution to performance programming so that application developers can build faster and more scalable software.

YetiWare's initial product offerings target the data center to provide an efficient heterogeneous platform for cloud computing that will significantly reduce processor energy consumption, and thereby reduce operational costs. High-level libraries in specific application areas such as machine learning, scientific computing (HPC), business analytics, and big data will seamlessly integrate existing software with the YetiWare compute platform.

Check out our founder's YouTube videos for a free introduction to OpenCL training; the channel has over 25,000 views.

Basemark Web 3.0 Launches

Basemark Web 3.0 is a brand-new industry-standard browser benchmark and the only one to include WebGL™ 2.0

HELSINKI, Finland (June 9th, 2016) – Basemark, the developer of industry-standard benchmarks for performance and power consumption analysis, today launches Basemark Web 3.0 browser benchmarking tool. With the new tool, Basemark extends support from mobile devices and VR to all connected devices that run a modern web browser, such as laptops and desktop PCs.

The test scenarios cover everything from web real-time graphics using WebGL 1.0.2 and upcoming WebGL 2.0 to the most popular JavaScript frameworks used by the leading websites of the world. Furthermore, Basemark Web 3.0 includes a battery test, enabling power consumption measurements under web workloads.

With the tool, professional software and hardware developers – as well as individual consumers – can measure the performance of their devices simply through the web browser. The results can then be saved to the Basemark PowerBoard, where users can accurately compare the performance of different browsers across all devices.

In addition to developing the product in close cooperation with industry heavyweights under its Benchmark Development Program, Basemark has collaborated with the Khronos Group, the consortium behind the WebGL API, throughout the tool's 12 months of development.

“Basemark has a long history in creating industry-leading benchmarks for mobile devices, virtual reality systems and browsers. Basemark Web 3.0 is a major milestone in developing trustworthy benchmarks for modern browsers, including WebGL,” says Neil Trevett, President of the Khronos Group. “We particularly welcome Basemark’s support for the upcoming WebGL 2.0 standard, which will demonstrate the powerful rendering capabilities and stunning visuals that can be delivered through the browser using this new generation API,” he adds.

Basemark’s founder Tero Sarkkinen comments: “Basemark Web 3.0 is an entirely new browser benchmark, replacing the previous BrowserMark, which was approaching the end of its lifecycle. We are extremely happy to now provide full support for desktop and laptop devices too, and strongly believe that a visit to the Basemark PowerBoard before making a buying decision is worthwhile for any technology enthusiast who cares about the performance of their devices.”

Basemark Web 3.0 complements the company’s full line of professional benchmarking tools: Basemark GPU Mobile, VRScore™ and the Power Assessment Tool (PAT).

About Basemark

Basemark develops industry-leading system performance and power consumption analysis tools that are used by leading semiconductor and OEM companies around the world such as AMD, Imagination Technologies, Intel, NVIDIA, Renesas and Qualcomm. Its world-renowned product portfolio includes VRScore, Basemark GPU, Basemark ES, Basemark X, Basemark OS and Browsermark. Basemark is headquartered in Helsinki, Finland. For more information, please visit www.basemark.com.

Basemark, Browsermark and VRScore are registered trademarks of Basemark Oy. WebGL is a trademark of the Khronos Group Inc. All other mentioned brands may be property of their respective owners.
Khronos Releases OpenVX 1.1 Specification for High Performance, Low Power Computer Vision Acceleration

Expanded range of processing functions; Enhanced flexibility for data access and processing; Full conformance tests available; Safety Critical specification in development

May 2nd 2016 – Embedded Vision Summit, Santa Clara, CA – The Khronos Group, an open consortium of leading hardware and software companies, announces the immediate availability of the OpenVX™ 1.1 specification for cross platform acceleration of computer vision applications and libraries. OpenVX enables performance and power optimized computer vision algorithms for use cases such as face, body and gesture tracking, smart video surveillance, advanced driver assistance systems, object and scene reconstruction, augmented reality, visual inspection, robotics and more. Conformant OpenVX 1.0 implementations and tools are shipping from AMD, Imagination, Intel, NVIDIA, Synopsys and VeriSilicon. OpenVX 1.1 builds on this momentum by adding new processing functions for use cases such as computational photography, and enhances application control over how data is accessed and processed. An open source OpenVX 1.1 sample implementation and full conformance tests will be available in the first half of 2016. Details on the OpenVX specifications and Adopters Program are available at: www.khronos.org/openvx.

“More and more products are incorporating computer vision, and OpenVX addresses a critical need by making it easier for developers to harness heterogeneous processors for high performance, low power vision processing – without having to become processor experts,” said Jeff Bier, founder of the Embedded Vision Alliance.  “This is essential for enabling the widespread deployment of visual intelligence in devices and applications.”

The precisely defined specification and conformance tests for OpenVX make it ideal for deployment in production systems where cross-vendor consistency and reliability are essential. Additionally, OpenVX is easily extensible to enable nodes to be deployed to meet customer needs, ahead of being integrated into the core specification.

The new OpenVX 1.1 specification is a significant expansion in the breadth and flexibility of vision processing functionality and the OpenVX graph framework:

  • Definition and processing of Laplacian pyramids to support computational photography use cases;
  • Median, erode and dilate image filters, including custom patterns;
  • Easier and less error-prone methods to read and write data to and from OpenVX objects;
  • Targets, to control which accelerator runs each node in a heterogeneous device;
  • More convenient and flexible API for extending OpenVX with user kernels;
  • Many other improvements and clarifications to infrastructure functions and vision nodes.
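To make the new image filters concrete, here is a minimal sketch of what an erode node computes on a binary image. This is plain Python for illustration only, not the OpenVX C API: a real implementation handles border modes, the new custom patterns, and hardware acceleration.

```python
def erode3x3(img):
    """3x3 erosion on a binary image: each interior pixel becomes the minimum
    of its 3x3 neighborhood; border pixels are left at 0 for simplicity."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = min(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return out

# A 3x3 block of ones erodes down to its single center pixel.
img = [[0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]]
eroded = erode3x3(img)
print(sum(map(sum, eroded)))  # 1 (only the center survives)
```

Dilation is the dual operation, replacing `min` with `max`; a median filter takes the middle value of the same neighborhood.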

“This is an important milestone towards widespread adoption of OpenVX in embedded platforms running computer vision algorithms,” said Victor Erukhimov, President, Itseez and chair of the OpenVX working group. “The new vision functions that we added enable exciting use cases, and the refined infrastructure API gives developers more flexibility for creating advanced computer vision applications.”

About OpenVX

OpenVX abstracts a vision processing execution and memory model at a much higher level than general compute frameworks such as OpenCL, enabling significant implementation innovation and efficient execution on a wide range of architectures while maintaining performance portability and a consistent vision acceleration API for application development. An OpenVX developer expresses a connected graph of vision nodes that an implementer can execute and optimize through a wide variety of techniques such as: acceleration on CPUs, GPUs, DSPs or dedicated hardware, compiler optimizations, node coalescing, and tiled execution to keep sections of processed images in local memories. This architectural agility enables OpenVX applications on a diversity of systems optimized for different levels of power and performance, including very battery-sensitive, vision-enabled, wearable displays.
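The graph model described above can be sketched in miniature. This is plain Python, not the real OpenVX C API; the node functions and data names are invented. The point it illustrates is the two-phase design: the application declares nodes and their data dependencies in any order, a one-time verify step checks and orders the graph (as `vxVerifyGraph` does), and processing then runs the whole pipeline as often as needed, which is what gives implementers room to optimize.

```python
class Graph:
    """Toy sketch of an OpenVX-style processing graph: nodes are plain
    functions wired together by named data objects."""

    def __init__(self):
        self.nodes = []   # (function, input names, output name)
        self.order = None

    def add_node(self, func, inputs, output):
        self.nodes.append((func, inputs, output))

    def verify(self, external_inputs):
        # Topologically order nodes over data dependencies; a cycle or an
        # unbound input makes the graph unrunnable and is reported up front.
        available, order, pending = set(external_inputs), [], list(self.nodes)
        while pending:
            ready = [n for n in pending if all(i in available for i in n[1])]
            if not ready:
                raise ValueError("cycle or unbound input in graph")
            for n in ready:
                pending.remove(n)
                available.add(n[2])
                order.append(n)
        self.order = order

    def process(self, data):
        data = dict(data)
        for func, inputs, output in self.order:
            data[output] = func(*(data[i] for i in inputs))
        return data

# Nodes may be declared in any order; verify() resolves the execution order.
g = Graph()
g.add_node(lambda img: [1 if v > 10 else 0 for v in img], ["half"], "mask")
g.add_node(lambda img: [v // 2 for v in img], ["src"], "half")
g.verify({"src"})
result = g.process({"src": [4, 40, 16]})
print(result["mask"])  # [0, 1, 0]
```

Because the full dataflow is known before execution, an implementation is free to fuse adjacent nodes, tile images through local memory, or dispatch different nodes to different accelerators without changing application code.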

Future Safety Critical Standards

Vision processing will be a vital component of many emerging safety critical market opportunities including Advanced Driver Assistance Systems (ADAS), autonomous vehicles, and medical and process control applications. The OpenVX working group is developing OpenVX SC, a safety critical version of OpenVX to address the unique and stringent requirements of these high reliability markets. The Safety Critical working group at Khronos is building on the experience of shipping the OpenGL® SC 2.0 specification for high reliability use of modern graphics programmable shader engines, and is developing cross-API guidelines to aid in the development of open technology standards for safety critical systems. Any interested company is welcome to join Khronos for a voice and a vote in these development processes.

OpenVX and Khronos APIs at Embedded Vision Summit, 2-4 May, Santa Clara, CA
There are multiple presentations and workshops related to OpenVX and other Khronos APIs on May 2nd-4th at the Embedded Vision Summit in Santa Clara, CA, including:

  • How Computer Vision Is Accelerating the Future of Virtual Reality at 3:30PM, Monday 2nd by AMD
  • NVIDIA VisionWorks, a Toolkit for Computer Vision using OpenVX at 3:15PM, Tuesday 3rd by NVIDIA
  • Using the OpenCL C Kernel Language for Embedded Vision Processors at 3:45PM, Tuesday 3rd by Synopsys
  • The Vision API Maze: Options and Trade-offs at 4:30PM, Tuesday 3rd by Khronos
  • Programming Embedded Vision Processors Using OpenVX at 5PM, Tuesday 3rd by Synopsys
  • A whole-day hands-on workshop, Accelerate Your Vision Applications with OpenVX, on Wednesday 4th

Details about the Embedded Vision Summit are here: www.embedded-vision.com/summit and specific details on the Khronos full day OpenVX tutorial including speakers from AMD, Intel, Imagination, NVIDIA, Synopsys and TI are here:
http://www.embedded-vision.com/summit/accelerate-your-vision-applications-openvx.

Industry Support for OpenVX 1.1

“AMD fully supports OpenVX with our open source release,” said Raja Koduri, senior VP and chief architect, Radeon Technologies Group at AMD. “We have enabled computer vision developers with access to OpenVX on the entire range of PC platforms, from embedded APUs to high-end workstation GPUs, and the fully open source release also makes it easy for developers to port OpenVX to other platforms based on AMD’s GCN architecture.”

“OpenVX can be a valuable starting point for accelerating creation and adoption of vision applications, and can enable easier access to vision applications in safety-critical areas such as automotive and factory automation,” said Chris Longstaff, director of business development, Imagination Technologies. “Imagination is supporting OpenVX, development of the OpenVX SC specification and inclusion of important new features such as computational neural networks, across our PowerVR GPUs and vision IP offerings. These processors are at the heart of many of the world’s mobile, automotive and embedded devices, providing developers with ideal platforms to develop vision applications.”

“Vision processing is increasingly important for a range of real world applications. It is a fundamental technology for advanced driver assist systems and gesture recognition as a method of user interaction,” said Mobica's CTO, Jim Carroll. “Mobica is excited to be working on the development of such applications and enabling acceleration technology for OpenVX 1.1 - we anticipate that it will be a fundamental technology for many aspects of next generation computing devices.”

“OpenVX is a vital component of the VisionWorks SDK on the Jetson embedded platform,” said Deepu Talla, vice president and general manager for Tegra at NVIDIA. “VisionWorks enables developers to quickly configure efficient GPU-based vision acceleration for their applications, and NVIDIA has extended the core OpenVX functionality to meet our customers’ needs.”

“As an early adopter of the OpenVX standard, VeriSilicon congratulates the Khronos Group on reaching this major milestone,” said Shanghung Lin, vice president for Vision Image Products at VeriSilicon. “Our customers have enthusiastically embraced OpenVX conformant solutions in our VIP (Vision Image Processor) line that are being designed into silicon products for automotive, video surveillance and other IoT applications. OpenVX has been accelerating mass-market adoption of computer vision applications such as natural user interfaces, always-on cameras, and Automotive Driver Assistance Systems, and OpenVX 1.1 makes a significant step toward more flexible support for vision processing and computational photography. We are proud to support the OpenVX standard with our VIP, with a power/performance/area optimized architecture for novel vision processing use cases on mobile, home, automotive, and embedded platforms.”

About The Khronos Group

The Khronos Group is an industry consortium creating open standards to enable the authoring and acceleration of parallel computing, graphics, vision, sensor processing and dynamic media on a wide variety of platforms and devices. Khronos standards include Vulkan™, OpenGL®, OpenGL® ES, WebGL™, OpenCL™, SPIR™, SPIR-V™, SYCL™, WebCL™, OpenVX™, EGL™, COLLADA™, and glTF™. All Khronos members are enabled to contribute to the development of Khronos specifications, are empowered to vote at various stages before public deployment, and are able to accelerate the delivery of their cutting-edge media platforms and applications through early access to specification drafts and conformance tests. More information is available at www.khronos.org.

###

Khronos, Vulkan, DevU, SPIR, SPIR-V, SYCL, WebGL, WebCL, COLLADA, OpenKODE, OpenVG, OpenVX, EGL, glTF, OpenKCAM, StreamInput, OpenWF, OpenSL ES and OpenMAX are trademarks of the Khronos Group Inc. ASTC is a trademark of ARM Holdings PLC, OpenCL is a trademark of Apple Inc. and OpenGL is a registered trademark and the OpenGL ES and OpenGL SC logos are trademarks of Silicon Graphics International used under license by Khronos. All other product names, trademarks, and/or company names are used solely for identification and belong to their respective owners.
