
Khronos Blog

Want to learn how graphics vendors will use the ANARI API to enable Rendering Technologies? ANARI Webinar + Q&A

The Khronos ANARI™ (Analytic Rendering Interface) working group is defining an open, royalty-free API standard for cross-vendor access to state-of-the-art rendering engines. ANARI will enable experts in domains such as scientific visualization to leverage the latest rendering techniques without needing to use low-level rendering APIs. Graphics vendors will use the ANARI API to enable visualization engines, libraries, and applications with portable access to a diverse range of rendering technologies, including ray tracing.

Recently, ANARI Working Group members shared an update on their progress toward an initial specification at a Khronos webinar. In case you missed it - or would just like a recap - the recording and slides are posted for public access. The audience had great questions, so we have summarized the discussion here.

Have additional questions? Please leave them in the comments.

The ANARI members answering questions are:

  • Peter Messmer, ANARI Working Group Chair and Senior Manager HPC Visualization at NVIDIA
  • Jefferson Amstutz, Software Engineer at Intel
  • John Stone, Senior Research Programmer at the University of Illinois and a developer of the VMD molecular modeling and visualization program

The questions and answers have been edited for brevity and clarity and reordered into a more logical flow for easier reading. Enjoy!

Question: How is ANARI not the same as a scene graph API? You need to order your objects and control them in any visualization app, so how do you deal with that in the context of ANARI?

Jefferson Amstutz: This is a very large question, so I’m going to do my best to bring it down to the core things that distinguish ANARI from a scene graph. ANARI is actually a subset of what a typical scene graph has to solve. ANARI is focused purely on rendering and does not provide the non-rendering functionality that a scene graph would be responsible for: for example, file I/O loaders for various file formats, data transformations, or data filtering. So, in things like the Visualization Toolkit (VTK) from Kitware, a lot of the functionality is built around transforming data from one form to another without any rendering involved.

Also, things like UI bindings, app-specific metadata, and being able to display the structure of the scene are all operations outside of the scope of ANARI, as they are squarely aimed at being customized per-application.

Another distinction comes from distributed applications whose nodes need to “talk” to each other (like scientific HPC simulations), which is, again, not rendering related. ANARI aims to take the data coming from a simulation (even distributed data) and render it; it is not aimed at being a framework that someone would use to implement the simulation itself.

Furthermore, consider object update queries: if you modify an object that you have created for ANARI, you are the one who knows it needs to be updated, so you are also in the best position to track that information for other, non-rendering reasons. If you sum all of those together, that’s a lot of functionality that ANARI is not trying to provide a universal solution for. There are a number of great packages out there that solve those problems; they just all do it differently and are typically tailored to specific domains. Once you get down to just the rendering portion, the rendering needs look largely the same: that’s what we’re trying to standardize.

John Stone: I would just add an example to illustrate this: historically, VMD has always had to use its own internal scene graph, because publicly available scene graphs such as OpenSceneGraph are not well suited to molecular visualization. It's powerful for ANARI to focus purely on rendering and leave it to the application to decide which scene graph, or whatever other data structures it prefers, to drive ANARI rendering.
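To make that rendering-only scope concrete, here is a minimal sketch of the kind of code an application would write against ANARI. The function and parameter names (anariNewFrame, "world", and so on) follow the work-in-progress C headers discussed in the webinar and are assumptions here; exact names and signatures may change before the specification is finalized.

    /* A minimal sketch, assuming the work-in-progress ANARI C headers: the
     * application keeps its own scene graph and only hands ANARI the objects
     * needed to render a frame. */
    #include <anari/anari.h>
    #include <stdint.h>

    void render_one_frame(ANARIDevice device, ANARIWorld world,
                          ANARICamera camera, ANARIRenderer renderer)
    {
      ANARIFrame frame = anariNewFrame(device);

      uint32_t size[2] = {1024, 768};
      anariSetParameter(device, frame, "size", ANARI_UINT32_VEC2, size);
      anariSetParameter(device, frame, "world", ANARI_WORLD, &world);
      anariSetParameter(device, frame, "camera", ANARI_CAMERA, &camera);
      anariSetParameter(device, frame, "renderer", ANARI_RENDERER, &renderer);
      anariCommitParameters(device, frame);

      /* File I/O, data filtering, and UI state all stay in the application;
       * ANARI only sees the committed objects and produces pixels. */
      anariRenderFrame(device, frame);
      anariFrameReady(device, frame, ANARI_WAIT);
      anariRelease(device, frame);
    }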

Question: How do you place ANARI on the ‘Portability vs Performance’ scale?

Peter Messmer: ANARI is about fast rendering and enabling people to do live visualization, so performance is one of the foremost topics. However, being able to take advantage of modern hardware from any vendor is another key element. The goal is to enable implementations using diverse backends - all of which offer solid performance for common workloads across multiple platforms. Because ANARI covers only a subset of what a scene graph does, you don’t need to copy anything into ANARI data structures. ANARI can work with the data structures your scene graph gives it, helping to avoid unnecessary data movement. There are trade-offs, but the idea is that each backend provides a good implementation by default, while backend-specific extensions allow you to tweak performance even further if you wish.
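As an illustration of the "no copy" point, the sketch below hands ANARI an array that references vertex data owned by the application instead of duplicating it. It assumes the same in-progress C API as the earlier sketch; treating a NULL deleter as "the application keeps ownership, share the memory" is an assumption about how the draft headers expose this.

    /* Sketch of zero-copy data sharing, assuming the in-progress ANARI C API:
     * the application keeps ownership of its vertex buffer and ANARI reads
     * from it directly instead of taking a copy. */
    #include <anari/anari.h>
    #include <stddef.h>
    #include <stdint.h>

    ANARIGeometry make_triangle_geometry(ANARIDevice device,
                                         const float *appVertices, /* xyz per vertex */
                                         uint64_t vertexCount)
    {
      /* No deleter: the buffer stays owned by the application (a "shared"
       * array), so no copy into ANARI-owned storage is required. */
      ANARIArray1D positions = anariNewArray1D(device, appVertices,
                                               NULL /* deleter */, NULL /* userData */,
                                               ANARI_FLOAT32_VEC3, vertexCount);

      ANARIGeometry geom = anariNewGeometry(device, "triangle");
      anariSetParameter(device, geom, "vertex.position", ANARI_ARRAY1D, &positions);
      anariCommitParameters(device, geom);
      anariRelease(device, positions); /* the geometry holds its own reference */
      return geom;
    }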

Question: Could someone with a customized GLSL shader use that in an ANARI renderer?

Jefferson Amstutz: This could certainly be provided by a backend as an extension, but it is unlikely to ever be standardized because it dictates details that are too low-level for ANARI. For instance, we probably wouldn’t implement this in OSPRay, which is a CPU ray tracer that doesn’t have anything to do with OpenGL or GLSL under the hood. However, if a backend were built on OpenGL and wanted to let applications program its internal implementation details, it could certainly accept a string of GLSL source (at your own risk). The idea is that ANARI specifies a standard interface for the interaction between an application and a renderer while avoiding dictating renderer implementation details. We want to specify only the core features that every implementation has to provide to be compliant, yet leave the door wide open for renderer extensions with interesting things like accepting GLSL source, OSL source, MaterialX, node graphs - all kinds of stuff. Our goal is to handle all of that without inventing tons and tons of API calls.
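To show how such an extension could surface without any new API entry points, the sketch below passes GLSL source to a renderer through an ordinary string parameter. The parameter name "glsl.source" is invented for this example and is not part of any ANARI specification; a backend that does not recognize it would simply ignore it.

    /* Hypothetical sketch: a GL-based backend exposing custom shader source
     * as a vendor extension. "glsl.source" is an invented parameter name,
     * not a standardized one. */
    #include <anari/anari.h>

    void try_custom_shader(ANARIDevice device, ANARIRenderer renderer,
                           const char *glslSource)
    {
      /* A backend built on something else (e.g. a CPU ray tracer) would
       * simply ignore this unknown, vendor-specific parameter. */
      anariSetParameter(device, renderer, "glsl.source", ANARI_STRING, glslSource);
      anariCommitParameters(device, renderer);
    }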

Question: You haven't mentioned Microsoft DirectX®. To be fully compatible, a visualization developer would want to know if they can use DirectX on Windows. Is the DirectX team involved in ANARI?

Jefferson Amstutz: ANARI defines an interface to rendering implementations, and any lower-level graphics API can be used underneath - it’s entirely up to the implementer of a particular ANARI backend. DirectX would be a great candidate to build an ANARI backend with, but the ANARI API itself is not tied to any of these lower-level graphics APIs. There could be one or multiple DirectX implementations. So, just as we have an OpenGL implementation, there could be other OpenGL implementations with subtly different tradeoffs. The same goes for using (Intel) Embree or (NVIDIA) OptiX under the hood; these aren’t mutually exclusive. There could be multiple OptiX, Embree, DirectX, or Vulkan backends. It all depends on who is writing a renderer. The underlying API is really just an implementation detail.
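In code, that "implementation detail" point might look like the sketch below: the application picks a backend library by name at run time, and nothing else in the program changes. The library and device names are illustrative assumptions; actual names are up to whoever ships a given implementation.

    /* Sketch of run-time backend selection, assuming the in-progress ANARI C
     * API. Whether the library underneath uses DirectX, Vulkan, OptiX,
     * Embree, or OpenGL is invisible to the application. */
    #include <anari/anari.h>
    #include <stddef.h>

    ANARIDevice open_device(const char *backendName /* e.g. "exampleVendor" */)
    {
      ANARILibrary lib = anariLoadLibrary(backendName, NULL, NULL);
      if (!lib)
        return NULL;
      /* "default" asks the library for whatever device type it recommends. */
      return anariNewDevice(lib, "default");
    }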

Question: If the goal of ANARI is essentially to render a single frame, how are time series going to be supported?

Jefferson Amstutz: There are multiple answers here because time series come about in different ways. As you’ve seen in OSPRay’s public demos over the years, the way we’ve dealt with time series in a lot of sciviz cases is that every time step is its own separate world. Those worlds could share common geometry but, usually, if you have a time series from a volume CFD simulation, every time step ends up being its own standalone world. So, for each frame, you select from that series of worlds which time step you want to render. There are other situations where you update data in place; for example, for a particle simulation using spheres, you could update all the sphere positions in place. That has different tradeoffs compared with creating copies of the data, but the idea is that we want to support all of these approaches. Separate worlds per time step tend to be the most common pattern in sciviz, but it’s certainly not the only way that time series can be represented.

John Stone: You could imagine doing something like city-scale visualization with a car driving through the city; there is an opportunity for data reuse there, keeping large amounts of the world shared across multiple time steps. That’s something we definitely care about. One of the other things we’ve been working on is figuring out how we would interface ANARI to other, more unusual rendering systems, where we might need to handle time series through various tricks in the rendering backend; people are exploring right now how they would do that for some of these special rendering systems.
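Under the same assumed API as the earlier sketches, the two patterns Jefferson describes could look roughly like this: either a separate world per time step that the frame switches between, or one world whose sphere positions are rewritten in place each step. Using map/unmap as the change notification for shared data is an assumption about the draft API.

    /* Two time-series patterns, sketched against the assumed ANARI C API. */
    #include <anari/anari.h>
    #include <stdint.h>
    #include <string.h>

    /* (a) One world per time step: point the frame at the requested world. */
    void select_time_step(ANARIDevice device, ANARIFrame frame,
                          ANARIWorld *worldPerStep, int step)
    {
      anariSetParameter(device, frame, "world", ANARI_WORLD, &worldPerStep[step]);
      anariCommitParameters(device, frame);
    }

    /* (b) One world, sphere positions updated in place in a shared array. */
    void update_sphere_positions(ANARIDevice device, ANARIArray1D positions,
                                 float *sharedBuffer, const float *newXYZ,
                                 uint64_t sphereCount)
    {
      memcpy(sharedBuffer, newXYZ, sphereCount * 3 * sizeof(float));
      /* Map/unmap as a change notification so the backend can rebuild its
       * acceleration structures (an assumption about the draft API). */
      anariMapArray(device, positions);
      anariUnmapArray(device, positions);
    }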

Question: Will ANARI have bindings for C++, Fortran, Python?

Jefferson Amstutz: We think C is the best starting point for binding to other languages. We don’t have an explicit goal to enumerate all possible languages, but we have found in practice with other C-based APIs that creating bindings to scripting languages, or to languages like Fortran, is easiest when you’re working with a C API.

Peter Messmer: If you are writing in the Python ecosystem or your focus is mostly on the Fortran side and you want to get a high-performance renderer that can take advantage of arbitrary backend rendering hardware, that’s a place where ANARI can solve a lot of problems for you. However, this is why we have an open call for participation. We want to hear from you. What are the problems you are trying to solve? How could ANARI address the type of workloads that you are interested in?

Question: Can software based on the current ANARI headers be developed in the open?

Peter Messmer: We want to enable that as soon as possible, but we are early in the development process. It is a work in progress and something that can change at any time - probably too early to call it a pre-release of the standard. On the other hand, we are very interested in seeing people start to use it and give feedback based on the tangible artifacts that are already being produced. We will release those headers as soon as we can, but they will be a work in progress for feedback, and the final standard might look completely different. We’re more than happy for people to start using it, and even happier if they also join the ANARI Working Group and start participating to actively influence the way we are defining this interface.

John Stone: To give you an example, I have had ANARI implemented in VMD for a while. If you’re going to be using something that’s in flux, you should expect to periodically rewrite some code, or at least do a global search-and-replace fairly often. You should join the Working Group; it’s not that hard to become a member (learn how here).

Question: What are the exceptions for the unidirectional flow? Picking objects?

Jefferson Amstutz: Yes, right now it's picking. The only other one would be the ability to query world bounds from objects in the world. Neither of those is locked in as required right now.

Question: Is there an ANARI backend for ARM?

John Stone: Not yet, but some of us do have access to some bleeding-edge ARM hardware, and that should be fun to see in the near future.

Peter Messmer: It should also be mentioned that we do have an OpenGL backend, so it should work on an ARM platform that has OpenGL support.

Question: Will ANARI work on mobile?

Jefferson Amstutz: There are currently no active mobile users. We’ve identified issues with assuming 64-bit addressing; mobile is where you would be most likely to run in a 32-bit (or smaller) environment. That said, if a vendor targets mobile, ANARI should work just fine.

John Stone: Tomorrow’s cell phone is going to be as powerful as today’s desktop workstation, so it seems inevitable that it will happen.

The ANARI webinar is available in its entirety here.

To learn more about the Khronos ANARI Working Group, please visit our web pages.

To learn more details about how to get involved with Khronos, either visit the membership page or contact Membership Services.
