Selecting a Shading Language

The various extensions and core revisions of the OpenGL API have led to a number of different shading languages that you may use. This page deals with how to choose among them.

Main options

While there are other possibilities, these are the primary competitors in the OpenGL space:

  • OpenGL Shading Language (GLSL)
  • C for Graphics (Cg)
  • ARB assembly: an assembly-like language accessed through a set of ARB extensions.
  • NVIDIA assembly: NVIDIA-specific extensions to the ARB assembly language.

OpenGL Shading Language

The OpenGL Shading Language (GLSL) is a "high-level" language. This means high-level compared to assembly, not high-level the way a language like C# is compared to C. GLSL is structurally very similar to C.
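
To illustrate that C-like structure, here is a minimal fragment shader, written as a C string constant so it could be handed to glShaderSource(). The shader and the uniform name "color" are a hypothetical sketch, not something defined by this page.

  /* Minimal GLSL fragment shader, stored as a C string for glShaderSource().
     The uniform name "color" is just an example. */
  static const char *minimal_fs =
      "#version 110\n"
      "uniform vec4 color;\n"
      "void main()\n"
      "{\n"
      "    gl_FragColor = color;\n"   /* C-style braces, declarations and statements */
      "}\n";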

Advantages

GLSL is the current standard, as far as OpenGL is concerned. It is the only shading language that is part of the OpenGL specification.

Because of this, it is kept up-to-date with current OpenGL features. Each new version of the base standard usually means a new version number of GLSL as well.
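
One practical consequence is that you can query which GL and GLSL versions a driver exposes at run time. A minimal sketch (assuming a current GL context and a header that declares the GL 2.0 enums and entry points):

  #include <stdio.h>

  /* GL_SHADING_LANGUAGE_VERSION is available from OpenGL 2.0 onward. */
  void print_shading_versions(void)
  {
      printf("GL version:   %s\n", (const char *)glGetString(GL_VERSION));
      printf("GLSL version: %s\n", (const char *)glGetString(GL_SHADING_LANGUAGE_VERSION));
  }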

Disadvantages

A fully compiled and linked GLSL program must implement all of the stages within itself. This means that any mixing and matching of vertex and fragment shaders can only happen before link time. So every possible combination of shaders that you intend to use must be explicitly linked. Combined with the fact that linking is not a fast operation in GLSL, generating all of the programs that a user might want to use can take a long time. In highly degenerate cases, this can be tens of minutes.
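
To make the cost concrete, here is a hypothetical helper (not from this page) showing what linking a single vertex/fragment pairing looks like; with N vertex shaders and M fragment shaders, up to N × M such link steps may be needed up front:

  /* Hypothetical helper: each pairing of already-compiled shader objects
     needs its own fully linked program object. */
  GLuint link_pair(GLuint vertex_shader, GLuint fragment_shader)
  {
      GLuint program = glCreateProgram();
      glAttachShader(program, vertex_shader);
      glAttachShader(program, fragment_shader);
      glLinkProgram(program);   /* the slow step, repeated for every combination */
      return program;
  }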

Resource binding, such as which attribute index a particular input variable uses, must be handled through the OpenGL API rather than in GLSL itself.
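
For example (the names "position" and "modelview" are placeholders invented for this sketch), attribute slots are assigned through the API before linking and uniforms are located and set through the API afterwards:

  /* Hypothetical setup: bind an attribute slot before linking,
     then locate and set a uniform after linking. */
  void setup_program(GLuint program, const GLfloat *modelview_matrix)
  {
      glBindAttribLocation(program, 0, "position");   /* must precede glLinkProgram */
      glLinkProgram(program);

      GLint mv_loc = glGetUniformLocation(program, "modelview");
      glUseProgram(program);
      glUniformMatrix4fv(mv_loc, 1, GL_FALSE, modelview_matrix);
  }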

The complexity of having a full C-style language in a graphics driver causes quite a few driver bugs to show themselves, particularly in ATI compilers.

NVIDIA's GLSL compiler is really a slight modification of the Cg compiler. Because of that, some of the differences between GLSL and Cg will occasionally show through. Cg is more permissive syntactically than GLSL, so a GLSL program that compiles on an NVIDIA driver may not compile on an ATI driver. It can also give unusual error messages that refer to a "profile" (a concept that exists in Cg but not GLSL).
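
A commonly cited illustration of that permissiveness, offered here as an assumption rather than something stated in this article: GLSL 1.10 defines no implicit int-to-float conversion, yet NVIDIA's Cg-derived compiler has historically accepted it.

  /* Rejected by a strict GLSL 1.10 compiler, but accepted by lenient ones. */
  static const char *lenient_fs =
      "#version 110\n"
      "void main()\n"
      "{\n"
      "    float scale = 1;              // int literal assigned to a float\n"
      "    gl_FragColor = vec4(scale);\n"
      "}\n";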

C for Graphics

Cg is NVIDIA's shading language. It was originally intended for OpenGL, but the ARB rejected it in favor of GLSL.

Advantages

Cg's syntax is almost identical to Direct3D's HLSL. As such, Cg shaders can typically be ported to HLSL without changes if one is doing cross-API development.

The Cg compiler is a library that is external to the driver. As such, the user can pre-compile shaders into their target form.
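
A sketch of what pre-compiling through the Cg runtime might look like (the file name and the entry point "main" are placeholders; treat the exact calls as an assumption to verify against the Cg documentation):

  #include <Cg/cg.h>
  #include <Cg/cgGL.h>

  /* Compile a Cg fragment shader outside the GL driver and return the
     compiled profile output (e.g. ARB assembly text) for caching on disk. */
  const char *compile_cg_fragment(const char *filename)
  {
      CGcontext ctx     = cgCreateContext();
      CGprofile profile = cgGLGetLatestProfile(CG_GL_FRAGMENT);
      CGprogram prog    = cgCreateProgramFromFile(ctx, CG_SOURCE, filename,
                                                  profile, "main", NULL);
      return cgGetProgramString(prog, CG_COMPILED_PROGRAM);
  }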

Disadvantages

The Cg compiler is really a cross-compiler. The source code is compiled into an output based on a profile. To use Cg in OpenGL, your output profile must be something that OpenGL can accept. Therefore, Cg's disadvantages depend entirely on what profile you are using.

Using the GLSL profile means that Cg inherits the disadvantages of GLSL, but also its advantages. The same goes for the ARB assembly and NVIDIA assembly profiles.

Without precompiling, you also incur increased compile times: first the Cg compiler must generate the output for the chosen profile, and then the driver's GLSL or ARB assembly compiler must run to create the actual shader.

ARB assembly

The ARB_vertex_program and ARB_fragment_program extensions expose an assembly-like shading language. It is not pure assembly, and implementations do not have to execute it literally as written, but it is close.
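
A minimal sketch of loading an ARB fragment program; the trivial program below just passes the interpolated color through, and error checking is omitted:

  #include <string.h>

  /* Requires the GL_ARB_fragment_program extension. */
  static const char *arb_fp =
      "!!ARBfp1.0\n"
      "MOV result.color, fragment.color;\n"
      "END\n";

  void load_arb_fragment_program(void)
  {
      GLuint id;
      glGenProgramsARB(1, &id);
      glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, id);
      glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                         (GLsizei)strlen(arb_fp), arb_fp);
  }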

Advantages

Faster compile times than GLSL.

Intel's drivers support these extensions.

Disadvantages

ARB assembly predates GLSL version 1.0. As such, ARB assembly is very old. It was designed to work with hardware around the time of the Radeon 9xxx and the GeForce FX 5xxx series. This means, in Direct3D terms, that ARB assembly only provides shader model 2.0-level functionality. It has no support for many common features.

As an assembly-like language, it becomes harder to work with as your programs grow more complex.

NVIDIA assembly

NVIDIA assembly refers to a number of NVIDIA-specific extensions to the ARB assembly language. They bring the language up to date with modern GL 3.x features (geometry shaders, etc.).

Advantages

Compilation speed similar to that of ARB assembly.

Disadvantages

NVIDIA-only.

Special considerations

Intel support

Intel is the sales leader in graphics hardware for PCs. This hardware is not standalone graphics boards, but low-end graphics chips integrated into Intel's motherboard chipsets.

Put simply, Intel's driver support is terrible (note: this is true for both Direct3D and OpenGL, though GL gets it worse). They refuse to implement GLSL on any of their D3D9 chips. They have implemented GLSL on their D3D10 parts, but even then, they only implement OpenGL version 2.0 (D3D10 is the functional equivalent of version 3.2 of OpenGL).

If you are interested in your shader-based programs working on older Intel hardware, ARB assembly is your best bet, as Intel's OpenGL drivers do support it.
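
Before taking that fallback path, it is worth checking at run time that the extensions are actually advertised. A minimal sketch (assuming a current GL context):

  #include <string.h>

  /* Returns non-zero if both ARB program extensions are advertised. */
  int has_arb_programs(void)
  {
      const char *ext = (const char *)glGetString(GL_EXTENSIONS);
      return ext != NULL
          && strstr(ext, "GL_ARB_vertex_program") != NULL
          && strstr(ext, "GL_ARB_fragment_program") != NULL;
  }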