Selecting a Shading Language

== Overview ==

You should use GLSL, the OpenGL Shading Language. This is also called GLSlang by some people.

Currently, OpenGL is at version 2.1 and GLSL is at version 1.20.

GLSL has been part of the GL core since GL 2.0.

== History ==

In the beginning ...

Many years ago, nVidia designed the register combiners (NV_register_combiners) and implemented the technology in their TNT, though I'm not sure if they became popular. Later came the TNT2 and then the Geforce 256. During the Geforce 256 era, nVidia wanted everyone to do bump mapping and the various effects possible with register combiners. This wasn't strictly a shader language, but it let programmers set up the fragment end of the graphics pipe; you made API calls to configure the GPU. nVidia later released their nvasm: programmers wrote in an assembly language and compiled it.

ATI came up with their own fragment shader extension (ATI_fragment_shader). This was also a powerful API for setting up the GPU. ATI_text_fragment_shader (?) was available on Macs, if I'm not mistaken, which allowed programmers to write a shader as text. This was for the fragment pipe as well.

NV also invented NV_register_combiners2, NV_texture_shader, NV_texture_shader2, NV_texture_shader3.

All of these gave great access to the GPU's features, and all of them are for setting up the fragment pipe.

ARB_texture_env_combine and related extensions also made an appearance, but since they had to work across GPUs, they were severely limited. They, too, are for setting up the fragment pipe.

EXT_vertex_weighting was another way to set up the GPU, and finally an extension for the vertex pipe. It was never popular.

EXT_vertex_shader came along as well. It was never popular either.

NV came up with NV_vertex_program. The Geforce 3 was out, and people needed a way to program this VS 1.1-class part. They also released NV_register_combiners2 and NV_texture_shader.

NV came up with NV_fragment_program.

ARB_vertex_program was approved by the ARB on June 18, 2002.

An official shading language for the vertex pipe for all IHVs!

This is suitable for VS 1.1 GPUs.

ARB_fragment_program was approved by the ARB on September 18, 2002.

An official shading language for the fragment pipe for all IHVs!

This is suitable for PS 2.0 GPUs.

ATI releases the Radeon 9700 and soon ships drivers supporting ARB_vp and ARB_fp, becoming the first IHV to support them.

NV releases NV_vertex_program1_1 and NV_vertex_program2.

NV_vertex_program2 is for VS 3.0.

NV releases NV_fragment_program2, which is for PS 3.0.

In 2001, 3DLabs considers the future of GL. Calling it GL 2.0, they decide on new features and a fresh API; older parts of the API are to be removed. GLSL is part of the proposal.

The proposal is not accepted.

GL 1.4 is current at the time; GL 1.5 comes out.

During GL 1.5, GLSL becomes accepted: the first high-level shading language for GL with a C-like syntax.

ARB_shading_language_100, ARB_shader_objects, ARB_vertex_shader, and ARB_fragment_shader define GLSL.

They are approved by the ARB on June 11, 2003.

This is GLSL 1.00.

September 7, 2004: the GL 2.0 spec is released and GLSL 1.10 becomes core.

In memory of the fixed pipeline. May you rest in peace. I mean, pieces.

Note: The above is not necessarily in chronological order.

== How to know if GLSL is supported? ==

If the GL version is 2.0 or greater, then GLSL is supported.
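
If you would rather not depend on a helper library, a minimal sketch of the same check using plain OpenGL calls (assuming you already have a current GL context) might look like this:

 //Parse the version string, which begins with "<major>.<minor>"
 const char* version = (const char*)glGetString(GL_VERSION);
 int major = 0, minor = 0;
 sscanf(version, "%d.%d", &major, &minor);
 if(major >= 2)
 {
    cout<<"yes, it is supported";
 }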

glhlib can also help you with finding the version. It is open source:

http://www.geocities.com/vmelkon/glhlibrary.html

 int values[2];  //Major and minor version
 glhGetIntegerv(GLH_OPENGL_VERSION, values);
 if(values[0] >= 2)
 {
    cout<<"yes, it is supported";
 }

And if you want the GLSL version:

 glhGetIntegerv(GLH_GLSL_VERSION, values);
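
For reference, with a plain GL 2.0 context the standard query for the same information, independent of glhlib, is the GL_SHADING_LANGUAGE_VERSION string:

 //Returns something like "1.20" on a GL 2.1 driver
 const char* glslVersion = (const char*)glGetString(GL_SHADING_LANGUAGE_VERSION);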

There is also:

 glhGetIntegerv(GLH_GPU_SHADERMODEL, values);
 glhGetIntegerv(GLH_VENDOR, values);
 if(values[0] == VENDOR_ATI)
 {
   cout<<"this is a ATI/AMD";
 }

== Additional Info ==

3DLabs implemented a compiler for GLSL. After all, they were the ones who invented GLSL.

ATI/AMD used the 3DLabs compiler to implement their own. That is one reason why, if you make errors in your GLSL code, you get error messages that are identical to those of the 3DLabs compiler.

nVidia had their Cg compiler. They preferred that people write Cg shaders, which would thus be available for both GL and D3D.

nVidia later added a GLSL front end (or should we say tokenizer?) to their Cg compiler, so a GLSL compiler quickly became available on nVidia hardware, and it was pretty stable.

The unfortunate side effect is that its error messages are different from those of the 3DLabs compiler.

== Implementation ==

The GLSL compiler is implemented in the driver: 3DLabs has their own, ATI/AMD has their own, and nVidia has their own. This is why you may find bugs on one of them that you won't find on another.

== Offline Compiler, Binary Format ==

This is not available. Some have suggested that the ARB/Khronos add it: compile your GLSL shader offline and create a sort of binary blob that would work on all drivers/GPUs.

Advantages:

1. A single compiler, with less risk of bugs.

2. The compiler could be open source, so that anyone could fix it.

3. Compiling lots of GLSL shaders (200 and more) takes a lot of time, on the order of a few seconds to a minute. An offline compiler would do all of this heavy CPU work ahead of time; your program could then quickly load the binary blob.

Disadvantages:

1. The binary blob would need to be in some generic format, which the compiler in the driver might still want to re-optimize for the specific GPU.

2. nVidia has created Cg, which can convert GLSL to NV_vertex_program and NV_fragment_program. Why not just use this, since it is already available?

3. These aren't really disadvantages, just road blocks and uncertainties as to what the best thing to do is.

== Intel, S3 ==

For Windows, Intel simply refuses to implement GLSL on some of their GPUs, such as the GMA 900, GMA 950, and others.

The GL version on these parts is at 1.5.

http://en.wikipedia.org/wiki/Intel_GMA

It is not clear if their more advanced GPUs, like the X3000, have support.

Also, keep in mind that these aren't gaming GPUs; they are OK for doing Aero effects on Windows Vista.

However, Intel does provide ARB_vertex_program and ARB_fragment_program. These are the older interfaces, which use an ASM-like language.
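
As a reminder of what those interfaces look like, here is a minimal sketch of loading an ARB_vertex_program (assuming the extension's entry points have already been obtained, for example via wglGetProcAddress):

 //A trivial pass-through vertex program in ARB assembly
 const char* src =
     "!!ARBvp1.0\n"
     "MOV result.position, vertex.position;\n"
     "END\n";
 GLuint prog;
 glGenProgramsARB(1, &prog);
 glBindProgramARB(GL_VERTEX_PROGRAM_ARB, prog);
 glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                    (GLsizei)strlen(src), src);
 glEnable(GL_VERTEX_PROGRAM_ARB);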

What you can do is code in GLSL, then use the Cg offline compiler from nVidia to compile it to ARB_vertex_program or ARB_fragment_program form.
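
For example, with the cgc command-line compiler from the Cg toolkit, something like the following should work; the -oglsl flag tells cgc to accept GLSL input (check cgc -help for your toolkit version):

 cgc -oglsl -profile arbvp1 -o shader.vp shader.vert
 cgc -oglsl -profile arbfp1 -o shader.fp shader.frag

The resulting files are ARB assembly text that can be fed to glProgramStringARB, as in the sketch above.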

On Mac, Apple implements OpenGL, and it is not clear if GLSL support is hardware accelerated on Intel GPUs. This needs confirmation.

On Linux, the open source drivers implement GLSL, and it works.

Then there is S3. Little is known about their GL support.

http://en.wikipedia.org/wiki/S3_Graphics

We need people to report what these vendors support.