Tutorials tagged news

Andreas Süßenbach, a senior software developer at NVIDIA, has posted a new developer tutorial on compile-time errors vs runtime errors with Vulkan-hpp. If you are using Vulkan, there are a few ways to create runtime errors. Even with the great validation layers available with Vulkan, you must run that part of the code to detect such errors. When you use Vulkan-hpp, a header-only C++ binding for the Vulkan API, some of those runtime errors are turned into compile-time errors. Read this tutorial to learn more.

NVIDIA recently published Ray Tracing Gems, a deep-dive into best practices for real-time ray tracing. The book was made free-to-download, to help all developers embrace the bleeding edge of rendering technology. Ray Tracing Essentials is a seven-part video series hosted by the editor of Ray Tracing Gems, NVIDIA’s Eric Haines. The aim of this program is to make developers aware of various terms and concepts used in the field and with Vulkan, while also inspiring them with dramatic and beautiful uses of the technology. This post is about the fourth video in this series, the Ray Tracing Pipeline.

Vulkan Working Group member James Jones, Principal Software Engineer at NVIDIA, has written a Khronos Blog on using timeline semaphores in Vulkan 1.2. The new timeline semaphore synchronization API, shipping as VK_KHR_timeline_semaphore and a core feature of Vulkan 1.2, defines a primitive that is a superset of both the original VkSemaphore and VkFence primitives, while eliminating many of the most painful limitations of the previous APIs. Learn how this new feature in Vulkan 1.2 works with code examples.

The focus of this NVIDIA tutorial and the provided code is to showcase a basic integration of ray tracing within an existing Vulkan sample, using the VK_NV_ray_tracing extension. Note that for educational purposes all the code is contained in a very small set of files. A real integration would require additional levels of abstraction.

If you’re coding 3D visualizations with Three.js, sooner or later you’ll want to move beyond using the library’s basic native shapes to using complex custom 3D shapes wrapped in UV mapped material. Since Three.js dropped support of its Blender exporter plugin in favor of relying on its glTF loader, exporting via the popular and versatile glTF file format is now the way to go for bringing Blender objects into your Three.js projects. This tutorial will walk through each step from creating a Three.js-compatible UV-wrapped 3D object in Blender to loading the object into a Three.js scene.

This mini-tutorial presents a simple use case for the new VK_EXT_conditional_rendering extension. With this extension, Vulkan gains the ability to execute certain rendering and dispatch commands conditionally, based on values stored in a dedicated buffer. So instead of having to rebuild command buffers when the visibility of objects changes, it is now possible to change a single buffer value to control whether the rendering commands for an object are executed, without touching any command buffers.

The 52nd tutorial from ogldev.org, on rendering a triangle in Vulkan, has been published. The previous tutorial covered how to clear the window and introduced a couple of Vulkan entities that are key to that operation: the swap chain and the command buffer. Rendering a triangle requires the introduction of four new Vulkan entities: the image view, the render pass, the framebuffer, and the pipeline.

Magic Leap released a handful of tutorials and asset files that will help developers get a head start in creating mixed reality content on Magic Leap One. Magic Leap said that Unity and Unreal already offer optimizations for Magic Leap hardware. The headset has full support for OpenGL 4.5 and OpenGL ES 3.1, but Magic Leap recommends building applications with the Vulkan API for the best performance.

The curriculum for the 2018 OpenVX Workshop at the Embedded Vision Summit in May has been finalized. The Khronos Group will be presenting a day-long hands-on workshop all about OpenVX, the cross-platform neural network acceleration API for embedded vision applications. Khronos has developed a new curriculum, making this a do-not-miss tutorial with new information on computer vision algorithms for feature tracking and neural networks mapped to the graph API. The tutorials will be presented by speakers from Khronos member companies AMD, Axis Communications, Cadence and Codeplay. There will be hands-on practice sessions with the folks who created the OpenVX API, giving participants a chance to solve real computer vision problems. Discussions will also include the OpenVX roadmap and what’s to come. Registration is now open but space is limited, so be sure not to wait too long.