Selecting a Shading Language

Overview

Before we address this question, it's worthwhile to enumerate the choices you have:

  • GLSL - aka OpenGL Shading Language (aka GLSlang); a high-level shading language
  • Cg - another high-level shading language, this one from nVidia
  • ARB/EXT or vendor-specific assembly profiles - low-level "assembly" shading languages

The OpenGL purist would of course state that you should always use GLSL. However, there are specific needs that might favor any one of these approaches, so it's up to you to choose; in the absence of any specific needs, do favor GLSL. GLSL has been supported in the OpenGL core since OpenGL 2.0.

This page will (eventually) be expanded to provide you with all the information you need to make your choice based on your application's needs.

Shading Language Considerations

Here are some of the points you need to consider when choosing a shading language:

  • Ease of use
  • Cross-vendor Y/N
  • Cross-platform Y/N
  • Run-time efficiency
  • Compilation time
  • Feature differences

For now, this section will be brief, but eventually we'll try to beef this up with more specifics on each of these and how they relate to each shading language.

  • Ease of use
    Choose a high-level language (GLSL or Cg). You really don't want to be coding assembly unless you're trying to wring out every last microsecond of performance on a specific card by a specific vendor.
  • Cross-vendor Y/N
    GLSL and the ARB/EXT assembly profiles are explicitly cross-vendor. However, check whether each vendor you are considering provides stable support for these languages/extensions. Cg is also cross-vendor, but on non-nVidia GPUs you drop down to SM 2.0-level capability.
  • Cross-platform Y/N
    See the previous point.
  • Run-time Efficiency
    This is one of those things you're just going to have to try, as it depends on how you use the API.
  • Compilation time
    Obviously compiling high-level shading languages isn't going to be instantaneous. If you're going to compile all the shaders you'll ever need during a run at startup, you probably don't care much about this time (as long as the user doesn't wait too long at startup). But if you want to build/compile shaders at run-time, you might prefer an option such as loading pre-compiled shaders or background shader compilation, and not all these shading languages support that (see the Offline Compiler, Binary Format section below for more on this). A minimal sketch of compiling a shader at startup follows this list.
  • Feature differences
    Hopefully we'll eventually have a good pick-list here. Things like having an effects framework, switchable compile-time or run-time shader "decision" points, support for bit-packing/unpacking statements, predetermined varying slots, etc.
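
To make the compile-time point concrete, here is a minimal sketch of compiling one GLSL stage at startup. It assumes a current GL 2.0 context and that something like GLEW supplies the entry points (an assumption, not part of GLSL itself); the glCompileShader call is where the compile time actually goes.

 #include <iostream>
 #include <GL/glew.h>   //assumption: GLEW (or similar) provides the GL 2.0 entry points

 //Compile one shader stage; returns 0 on failure.
 GLuint compileShader(GLenum stage, const char* source)
 {
     GLuint shader = glCreateShader(stage);
     glShaderSource(shader, 1, &source, 0);
     glCompileShader(shader);                 //the potentially slow, driver-side compile

     GLint ok = GL_FALSE;
     glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
     if(!ok)
     {
         char log[1024];
         glGetShaderInfoLog(shader, sizeof(log), 0, log);
         std::cout<<"compile failed: "<<log<<"\n";
         glDeleteShader(shader);
         return 0;
     }
     return shader;
 }

Doing this for a few hundred shaders at startup is exactly the scenario where an offline compiler or a binary format would help.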

Now for a brief history of shading support in OpenGL to give you some context:

History

In the beginning ...

Many years ago, nVidia designed the register combiners (NV_register_combiners). This wasn't strictly a shading language, but it allowed programmers to set up the fragment end of the graphics pipe: you made API calls to configure the GPU. After the TNT came the Geforce 256, and during the Geforce 256 era nVidia wanted everyone to do bump mapping and the various effects possible with register combiners. nVidia later released nvasm, with which programmers wrote in an assembly-like language and compiled it.

ATI came up with their own fragment shader extension (ATI_fragment_shader), also a powerful API for setting up the GPU. ATI_text_fragment_shader, available on Macs, additionally let programmers write the shader as text. Both were for the fragment pipe as well.

NV also invented NV_register_combiners2, NV_texture_shader, NV_texture_shader2, and NV_texture_shader3. All gave great access to the GPU's features, and all of them were for setting up the fragment pipe.

ARB_texture_env_combine and fellow extensions also made an appearance, but since these had to be available across GPUs, they were severely limited. These, too, only set up the fragment pipe.

EXT_vertex_weighting was another way to set up the GPU: finally, an extension for the vertex pipe. It was never popular.

EXT_vertex_shader came along later; it was never popular either.

NV came up with NV_vertex_program. The Geforce 3 was out, and people needed a way to program this VS 1.1 part. They also released NV_register_combiners2 and NV_texture_shader.

NV then came up with NV_fragment_program.

ARB_vertex_program is approved by the ARB on June 18, 2002: an official shading language for the vertex pipe for all IHVs! It is suitable for VS 1.1 GPUs.

ARB_fragment_program is approved by the ARB on September 18, 2002: an official shading language for the fragment pipe for all IHVs! It is suitable for PS 2.0 GPUs.

ATI releases the Radeon 9700 and soon ships drivers supporting ARB_vp and ARB_fp, the first IHV to support them.

NV releases NV_vertex_program1_1 and NV_vertex_program2; the latter targets the Geforce FX generation's VS 2.x-class hardware.

NV releases NV_fragment_program2, which is for PS 3.0.

In 2001, 3DLabs considers the future of GL. Calling it GL 2.0, they decide on new features and a fresh API; older parts of the API are to be removed. GLSL is part of the proposal.

The proposal is not accepted.

GL 1.4 is current at the time; then GL 1.5 comes out. During the GL 1.5 era, GLSL is accepted: the first high-level shading language for GL, with a C-like syntax.

ARB_shading_language_100, ARB_shader_objects, ARB_vertex_shader, and ARB_fragment_shader define GLSL 1.00, approved by the ARB on June 11, 2003.

On September 7, 2004, the GL 2.0 spec is released and GLSL 1.10 becomes core.

In memory of fixed pipeline. May you rest in peace. I mean, pieces.

Note: The above is not necessarily in chronological order.

How to know if GLSL is supported?

If the GL version is 2.0 or greater, then GLSL is supported.

The open source glhlib library can help you find the version:

http://www.geocities.com/vmelkon/glhlibrary.html

 int values[2];                               //major and minor version
 glhGetIntegerv(GLH_OPENGL_VERSION, values);  //fills values[0]=major, values[1]=minor
 if(values[0] >= 2)
 {
    cout<<"yes, it is supported";
 }

and if you want the GLSL version

 glhGetIntegerv(GLH_GLSL_VERSION, values);

There is also

 glhGetIntegerv(GLH_GPU_SHADERMODEL, values);
 glhGetIntegerv(GLH_VENDOR, values);
 if(values[0] == VENDOR_ATI)
 {
   cout<<"this is a ATI/AMD";
 }
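
If you would rather not depend on glhlib, the same check can be done with plain GL calls. A minimal sketch; the version-string parsing assumes the common "major.minor[.release] vendor-info" format, and a current GL context is required:

 #include <cstdio>
 #include <cstring>
 #include <GL/gl.h>

 bool glslSupported()
 {
     //GL_VERSION looks like e.g. "2.1.2 NVIDIA 169.12"; parse major.minor.
     const char* version = (const char*)glGetString(GL_VERSION);
     if(!version)
         return false;   //no current GL context

     int major = 0, minor = 0;
     std::sscanf(version, "%d.%d", &major, &minor);
     if(major >= 2)
         return true;    //GLSL is core in GL 2.0 and up

     //On GL 1.5, GLSL may still be exposed as an extension.
     const char* ext = (const char*)glGetString(GL_EXTENSIONS);
     return ext && std::strstr(ext, "GL_ARB_shading_language_100") != 0;
 }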

Additional Info

3DLabs implemented a compiler for GLSL; after all, they were the ones who invented the language.

ATI/AMD used the 3DLabs compiler as the basis for their own. That is one reason why, if you make errors in your GLSL code, you get error messages that are identical to the 3DLabs compiler's.

nVidia had their Cg compiler, and they preferred that people write Cg shaders, which would then be available for both GL and D3D. nVidia later added a GLSL front end (or should we say tokenizer) to their Cg compiler; thus a GLSL compiler was quickly available on nVidia hardware, and it was pretty stable. The unfortunate side effect is that its error messages are different from the 3DLabs compiler's.

Implementation

The GLSL compiler is implemented in the driver: 3DLabs has their own, ATI/AMD has their own, and nVidia has their own. This is why you may find a bug on one implementation that you won't find on another.

Offline Compiler, Binary Format

This is currently not available in GLSL, but it is supported in Cg.

Some have suggested that ARB/Khronos make such an addition available for GLSL: that is, compile your GLSL shader offline into a sort of binary blob that would work on all drivers/GPUs.

Advantages :

  1. A single compiler, so less risk of bugs
  2. The compiler could be open source, so that anyone can fix it
  3. Compiling lots of GLSL shaders (200 and more) takes a lot of time, on the order of a few seconds to a minute. An offline compiler would do all this heavy CPU work; your program could then quickly load the binary blob.

Disadvantages :

  1. The binary blob would need to be in some generic format, which the compiler in the driver might still want to optimize for the GPU.
  2. nVidia has created Cg, which can convert GLSL to the ARB and NV assembly shading languages. Why not just use this, since it is already available?

Admittedly, these are not so much disadvantages as roadblocks and uncertainties about what the best approach is.

Intel, S3

For Windows, Intel just refuses to implement GLSL on some of their GPUs, like the GMA 900, 950, and others; the GL version stays at 1.5.

http://en.wikipedia.org/wiki/Intel_GMA

It is not clear whether their more advanced GPUs, like the X3000, have support. Also, keep in mind these aren't gaming GPUs; they are OK for doing Aero effects on Windows Vista.

However, Intel does provide ARB_vertex_program and ARB_fragment_program, the older interfaces with an ASM-like language.

What you can do is code in GLSL, then use the Cg offline compiler (cgc) from nVidia to compile it to ARB_vertex_program/ARB_fragment_program form, and load the result as sketched below.
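
For example, cgc can be run offline against a GLSL source file (the exact flags vary by cgc version; -oglsl with -profile arbvp1 is the usual form for a vertex shader). Loading the resulting ARB assembly is then a matter of a few calls. A minimal sketch, assuming GLEW (or similar) provides the ARB_vertex_program entry points:

 #include <cstring>
 #include <iostream>
 #include <GL/glew.h>   //assumption: GLEW supplies the ARB_vertex_program entry points

 //Load an ARB_vertex_program assembly string (e.g. the output of cgc).
 //Returns 0 on failure.
 GLuint loadArbVertexProgram(const char* src)
 {
     GLuint prog = 0;
     glGenProgramsARB(1, &prog);
     glBindProgramARB(GL_VERTEX_PROGRAM_ARB, prog);
     glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                        (GLsizei)std::strlen(src), src);

     GLint errPos = -1;
     glGetIntegerv(GL_PROGRAM_ERROR_POSITION_ARB, &errPos);
     if(errPos != -1)   //-1 means no error occurred
     {
         std::cout<<"ARB program error: "
                  <<glGetString(GL_PROGRAM_ERROR_STRING_ARB)<<"\n";
         glDeleteProgramsARB(1, &prog);
         return 0;
     }
     return prog;
 }

The same applies to fragment shaders with GL_FRAGMENT_PROGRAM_ARB and a fragment profile.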

On Mac, Apple implements OpenGL, and it is not clear whether GLSL support is hardware accelerated on Intel GPUs; this needs confirmation.

On Linux, the open source drivers implement GLSL and it works.

Then there is S3. Little is known about their GL support.

http://en.wikipedia.org/wiki/S3_Graphics

We need people to report what they support.