Transparency Sorting

From OpenGL Wiki
Latest revision as of 02:32, 25 June 2015

Blending can be used to make objects appear transparent. However, blending alone is not enough. There are a number of steps that you must take to make transparency work.

When you draw things with blending turned on, the renderer reads back pixels from the frame buffer, mixes in the new color and puts the pixels back where they came from.

== Alpha test ==

In many cases, transparency is a binary decision. The texels of a texture mapped to the polygon cause parts of it to be completely opaque, and other parts to be completely transparent. The texture's alpha values are used to make "cutout" objects. If you want to draw something complicated like a tree, you probably can't afford a polygon for every single leaf and branch; so you use a texture that has pictures of leaves.

In this case, the leaf texture has no translucent texels; either the texel is opaque or it is completely transparent. Texels that are opaque have an alpha of 1.0, and texels that are transparent have an alpha of 0.0.

If this is your case, good news: you can still use Depth Testing to do depth-based sorting. This avoids many of the issues described in the sections below, which arise when you need real translucency. But there is one final issue to overcome.

If you were doing normal translucency, via an appropriate Blending mode, an alpha of 0 would cause the destination color to be written to the framebuffer. However, if the Depth Test and depth writes are still on, then the depth buffer will be updated, even for pixels where the alpha value from the texture was 0. That's because the fragment is still being written; it's just being written with the color of whatever was there before.

Thus, what you want to do is not Blending. What you want to do is test the alpha value and prevent the Fragment from being written entirely. This can be done with Fragment Shaders, using the discard command:

<source lang="glsl">
#version 330
in vec2 texCoord;
out vec4 outColor;

uniform sampler2D theTexture;

void main()
{
  vec4 texel = texture(theTexture, texCoord);
  if(texel.a < 0.5)
    discard;
  outColor = texel;
}
</source>

With this shader, you don't need to change the depth buffer parameters or the order in which you render anything. Any fragment whose texel alpha is less than 0.5 will be culled.

Note that texture filtering is still applied to this. So if there is any kind of GL_LINEAR filtering, the values you get will not always be 1.0 and 0.0, even if those are the only alpha values in the texture. That's why the test is set to be less than 0.5. It would not be a good idea to do a floating-point equality test, like {{code|texel.a == 0.0}}.
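The interaction between filtering and a binary alpha channel can be sketched on the CPU. This is a hypothetical illustration (the <code>lerp</code> and <code>is_transparent</code> helpers are ours, not OpenGL's) of why a 0.5 threshold classifies filtered samples sensibly where an equality test does not:

```c
/* Hypothetical CPU-side sketch: GL_LINEAR filtering between a
 * transparent texel (alpha 0.0) and an opaque one (alpha 1.0)
 * produces intermediate alpha values, so only a threshold test
 * classifies every filtered sample. */
float lerp(float a, float b, float t)
{
    return a + (b - a) * t;
}

int is_transparent(float alpha)
{
    return alpha < 0.5f;    /* the shader's test, not alpha == 0.0 */
}
```

Filtering a quarter of the way between the two texels yields an alpha of 0.25: the equality test <code>alpha == 0.0</code> says "opaque", while the threshold correctly discards it.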

Deprecated fixed-function code can use alpha testing to do the same thing.

<source lang="c">
glAlphaFunc(GL_GREATER, 0.5);
glEnable(GL_ALPHA_TEST);
</source>

This will only allow pixels with alpha values greater than 0.5 to write to the color or depth buffers.

== Translucency and the depth buffer ==

If alpha testing is insufficient for your needs, if you need real translucency via Blending, then a major problem arises.

Blending is done by combining the current fragment's color with the framebuffer color at that position. This only achieves translucency if the framebuffer color at that position represents all objects that are behind the one currently being rendered.
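What that combining actually computes, for the common glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) setting, can be sketched on the CPU. The <code>Color</code> struct and <code>blend_over</code> helper are illustrative, not OpenGL API:

```c
typedef struct { float r, g, b; } Color;

/* The standard over-blend, as configured by
 * glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA):
 * dst = src * src_alpha + dst * (1 - src_alpha) */
Color blend_over(Color src, float src_alpha, Color dst)
{
    Color out = {
        src.r * src_alpha + dst.r * (1.0f - src_alpha),
        src.g * src_alpha + dst.g * (1.0f - src_alpha),
        src.b * src_alpha + dst.b * (1.0f - src_alpha),
    };
    return out;
}
```

Blending a 50% translucent red polygon in front of a 50% translucent blue one against a green background gives (0.5, 0.25, 0.25) when drawn back-to-front, but (0.25, 0.25, 0.5) when drawn front-to-back: the order in which translucent surfaces reach the framebuffer changes the result.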

When doing non-translucent rendering, without blending, this kind of ordering is handled via the Depth Test. That process entirely culls fragments that happen to be behind a previously rendered object, on the assumption that, if two fragments cover the same pixel, only one will contribute to the image.
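That per-pixel culling can be modelled in a few lines of C. This is a simplified sketch of GL_LESS depth testing, not how a real renderer implements it:

```c
/* Simplified model of GL_LESS depth testing for a single pixel:
 * a fragment is written only if it is nearer than what the depth
 * buffer already holds; otherwise it is culled entirely. */
typedef struct {
    float depth;        /* depth buffer value; 1.0 means "far plane" */
    unsigned color;     /* packed framebuffer color */
} Pixel;

int depth_test(Pixel *p, float frag_depth, unsigned frag_color)
{
    if (frag_depth < p->depth) {
        p->depth = frag_depth;      /* depth write */
        p->color = frag_color;      /* color write */
        return 1;                   /* fragment contributed */
    }
    return 0;                       /* culled: something nearer was drawn */
}
```

A fragment at depth 0.5 written into a fresh pixel succeeds; a later fragment at depth 0.7 is culled and contributes nothing, which is exactly the behavior that is wrong for translucency.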

Translucency, by its very nature, breaks that assumption.

== Draw opaque objects first ==

In order to achieve translucency, all opaque objects must be drawn before drawing any translucent ones. This may be easier to do in some codebases than others.
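A simple way to get that ordering without sorting anything is to walk the object list twice. This is a sketch, assuming a hypothetical per-object translucency flag:

```c
/* Hypothetical sketch: produce a draw order with every opaque object
 * before every translucent one, without sorting. order[] receives
 * object indices; returns how many indices were written. */
int opaque_first_order(const int *is_translucent, int n, int *order)
{
    int k = 0;
    for (int i = 0; i < n; ++i)         /* pass 1: opaque objects */
        if (!is_translucent[i])
            order[k++] = i;
    for (int i = 0; i < n; ++i)         /* pass 2: translucent objects */
        if (is_translucent[i])
            order[k++] = i;
    return k;
}
```

For flags {1, 0, 1, 0, 0} this yields the order {1, 3, 4, 0, 2}: every opaque object is drawn before any translucent one.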

== Standard translucent ==

If the above methods do not work or aren't good enough, you will have to fall back on the standard method for dealing with translucent objects.

This process involves disabling writes to the depth buffer and sorting transparent objects and/or polygons based on distance to the camera.
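The sort itself might look like this, ordering items farthest-to-nearest by squared distance from the camera. The <code>DrawItem</code> struct and its fields are illustrative; glDepthMask is the actual call for disabling depth writes:

```c
#include <stdlib.h>

typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 center; float dist2; } DrawItem;  /* illustrative */

/* Farthest first: sort by descending squared distance to the eye. */
static int cmp_far_to_near(const void *pa, const void *pb)
{
    float a = ((const DrawItem *)pa)->dist2;
    float b = ((const DrawItem *)pb)->dist2;
    return (a < b) - (a > b);
}

void sort_translucent(DrawItem *items, int n, Vec3 eye)
{
    for (int i = 0; i < n; ++i) {
        float dx = items[i].center.x - eye.x;
        float dy = items[i].center.y - eye.y;
        float dz = items[i].center.z - eye.z;
        items[i].dist2 = dx * dx + dy * dy + dz * dz;
    }
    qsort(items, n, sizeof items[0], cmp_far_to_near);
    /* Then, when drawing:
     *   glDepthMask(GL_FALSE);  // depth test stays on, depth writes off
     *   ...draw items[0..n-1] in this order...
     *   glDepthMask(GL_TRUE);
     */
}
```

Squared distance is enough here because sqrt is monotonic; there is no need to take the root just to compare.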

=== Depth Sorting ===

Red overlaps green, which overlaps blue, which overlaps red.

The next thing that most people consider is to sort the translucent polygons as a function of Z depth.

To be perfect - even sorting them isn't enough. You may have to split polygons up on the fly to get ''perfect'' rendering. Consider the pathological case in the image to the right.

There is no way to sort this to make it work without splitting at least one of the polygons into two.

This looks like an unlikely situation - but it's really not.

=== How to Sort ===

Worse still, if you decide to split and sort polygons (or just to sort and hope that the pathological overlap case doesn't show up), what key do you sort on? The center of the polygon? The nearest vertex? The furthest?

Look what can happen when a translucent green blob alien (C) stands in front of a window (B). The observer is standing at (A). Here is a plan view of the two polygons and our eye:

[[File:Sort by what.png]]

In this example, the center of polygon B is closer to the eye than the center of polygon C - but B is behind C! How about sorting by the nearest vertex? Nope - B is still in front. How about by the furthest? Nope - B still comes out "in front". You have to compare the ''span'' of C against the ''span'' of B... which does bad things to some sort algorithms when you give them the three mutually overlapping polygons from the example above. Some sort algorithms never terminate when given that input, because R > G and G > B, but B > R!

=== BSP Tree Sorting ===

=== Depth peeling ===

=== GL_SAMPLE_ALPHA_TO_COVERAGE ===

== Conclusions ==

The upshot of all this is that you can't simply render translucent objects in any order without special consideration. If you have enough translucent surfaces moving around in a sufficiently complex manner, you will find it very hard to avoid errors with acceptable realtime algorithms.

It's largely a matter of what you are prepared to tolerate and what you know a priori about your scene content.