For the past two years, Holochip has been working on light field technology for the US Navy’s Aegis program. The program calls for a tabletop light field display that can accommodate horizontal and vertical real-time parallax. In October 2020, Holochip’s OpenXR™ team released an open source Vulkan® example project and began working with light field display technology through the OpenXR API. As a result of both efforts, Holochip has discovered a method of real-time light field rendering built upon the Khronos Group’s Vulkan Ray Tracing extensions.
Theory | Background
A light field is properly defined as a vector function that describes the amount of light traversing every point in every direction in a given space. A complete light field representation of a virtual object is indistinguishable in appearance from a real physical object. A light field is what sci-fi stories have in mind when they show a holographic room on a spaceship, or an ancient, distant civilization planning a battle against an empire. These fantastical creations are very similar in nature to what a light field may one day conceivably produce.
While light field rendering is still a long way from the holodeck, light field displays are usable today: they present a visible volume of a rendered virtual scene that a viewer can naturally observe, and even photograph, from multiple perspectives. However, these state-of-the-art light fields all come at an enormous cost in rendering complexity. After all, for a perspective to be visible, it must first be rendered.
3D stereo rendering without the glasses
Stereo vision, or rendering from two perspectives, is how VR is designed to work. Each scene must be rendered from two unique perspectives that simulate what the left and right eyes of the headset wearer would see. Both perspectives are presented to the user, allowing them to naturally experience the depth of a scene.
Light field displays render these three dimensional images without the need for glasses by presenting multiple perspectives simultaneously. For example, the Looking Glass display presents a horizontal-only light field at its surface, showing a stereoscopic image from any horizontal position to around 12 individual viewers. To achieve this impressive display, each scene must be rendered 45 separate times to cover 45 separate view frustums.
To allow full parallax in both the horizontal and vertical directions, rendering requirements grow quadratically. If viewers are to be able to walk around the display, the 45 views needed for a three dimensional light field along only the horizontal axis grow to thousands of views covering vertical perspectives as well. Achieving this full parallax display would make it possible to present a light field image on a tabletop, or to experience one by entering a CAVE. This achievement requires rendering thousands of view frustums simultaneously.
Rendering that many views in real time is prohibitive on even the latest hardware. And even if hardware capable of generating all of those views could be found, there would still be the problem of transmitting the rendered image to the display surfaces.
How do light fields work?
There are many types of light field displays out there. Each shares a similar problem: the number of viewports explodes when a scene must be rendered from every direction.
Current light field displays consist of a layer of optics overlaid on a 2D display. This configuration causes each pixel to emit light as a tightly constrained ray. Thus, by illuminating a set of pixels, the display emits a set of rays that reproduces a particular 3D scene. The end goal, then, is to produce a radiance image on the underlying 2D LCD panel whose light then traverses the optics.
How can users interact with a light field?
The OpenXR API provides actions to describe how input works within a 3D scene. OpenXR has a well defined capability for interacting with 3D objects, but well-established user interface conventions exist only for 2D devices; 3D user interface conventions are still being actively developed and researched. The interactions already available through an OpenXR action definition are exactly what a 3D renderer such as a light field needs. Holochip has developed an extension plugin for Collabora’s open source Monado OpenXR runtime that enables light field rendering and input from any action-compatible system, including gesture recognition.
Where does Holochip come in?
Holochip discovered a technique using ray traced rendering that challenges the way CG light field rendering is designed. The virtual camera that describes a scene for a many-view light field display doesn’t have to start from a single location in space. That discovery led to the realization that, using the Vulkan Ray Tracing extensions, rendering costs for full parallax displays could be significantly reduced. As a result of this insight, Holochip has created an API layered over Vulkan that generates the requisite number of views for full parallax at the same rendering cost as producing a single view for the same display.
Vulkan Ray Tracing is fundamental to our technology because the new standard allows the camera location to vary while keeping the number of rays cast into the scene constant. That number remains the same no matter where each ray originates.
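The key observation can be sketched on the CPU, independent of any GPU API: a ray tracer casts one primary ray per output pixel, so giving every ray its own origin (as a light field requires) costs no more rays than a conventional pinhole camera. The code below is an illustrative model, not the Vulkan API; in a real renderer this per-pixel origin choice would live in a ray generation shader.

```c
#include <stddef.h>

typedef struct {
    double ox, oy, oz;   /* ray origin    */
    double dx, dy, dz;   /* ray direction */
} Ray;

/* Conventional pinhole camera: every ray shares a single origin. */
static size_t gen_pinhole(Ray *out, int w, int h) {
    size_t n = 0;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            out[n++] = (Ray){ 0.0, 0.0, 0.0,            /* shared origin */
                              x - w / 2.0, y - h / 2.0, 1.0 };
    return n;
}

/* Light field: each ray starts at its own position on the display
 * surface (its lenslet), so there is no single camera location. */
static size_t gen_lightfield(Ray *out, int w, int h) {
    size_t n = 0;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            out[n++] = (Ray){ (double)x, (double)y, 0.0, /* per-ray origin */
                              0.0, 0.0, 1.0 };
    return n;
}
```

Both generators emit exactly w × h rays; only the origins differ. This is why the ray casting workload of a full parallax light field can match that of a single traditional view with the same pixel count.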
Full Ray Tracing capabilities are maintained
Thanks to the Vulkan Ray Tracing extensions, real-time full parallax light field rendering is now not only possible but on equal footing with traditional real-time rendering. If a scene can be rendered using traditional ray tracing, the same scene may be rendered as a light field with nearly identical hardware requirements.