Transparency Sorting

Alpha blending is the OpenGL term for transparency/translucency processing.

When you draw things with alpha blending turned on, the renderer reads back pixels from the frame buffer, mixes in the new colour and puts the pixels back where they came from. There are several different ways of performing that mixing - and the amount of new and old colour is controlled by the alpha part of the colours.
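For reference, one common way to switch that mixing on in classic fixed-function OpenGL is the familiar 'source alpha' blend function (the exact factors depend on the effect you are after):

glEnable(GL_BLEND);                                 /* turn alpha blending on                         */
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  /* new colour * alpha  +  old colour * (1-alpha)  */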

Alpha blending and the Z buffer

First - the bad news. REALLY bad news.

The Z buffer doesn't work as you might hope for transparent polygons.

The problem is that the Z buffer prevents OpenGL from drawing pixels that are behind things that have already been drawn. Generally, that's pretty convenient - but when the thing in front is translucent, you need to see the things that are behind it.

A First Quick Fix.

The first fix - upon which all the other fixes depend - is to make sure you draw all your opaque polygons before you draw any translucent ones. This is easy to do in most applications and solves most of the problems. The only problem that remains is when one translucent polygon has to be drawn behind another translucent polygon that has already been rendered.

For many applications there are so few translucent objects that this is "good enough".
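A minimal sketch of that drawing order is shown below; drawOpaqueObjects() and drawTranslucentObjects() are hypothetical stand-ins for however your application submits its geometry:

/* Pass 1: every opaque object, with normal depth testing and writing. */
glDisable(GL_BLEND);
glEnable(GL_DEPTH_TEST);
drawOpaqueObjects();

/* Pass 2: every translucent object, only after all opaque geometry is done. */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawTranslucentObjects();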

Another Good Trick

Quite often, alpha-blended polygons are used with textured alpha to make 'cutout' objects. If you want to draw something complicated like a tree, you probably can't afford a polygon for every single leaf and branch - so you use a photograph of a tree as a texture map, with the alpha channel marking which texels are tree and which are empty space.

The point is that this polygon may well have no partially translucent pixels - there are lots of utterly opaque ones in the middle of the tree - and lots of utterly transparent ones around the outside. In principle, there shouldn't be a problem with Z buffering...but there is - because by default, even the totally transparent pixels will write to the Z buffer.

Fortunately, OpenGL has a function that can prevent pixels with a specified set of alpha values from writing to the colour or Z buffers. For example:

glAlphaFunc(GL_GREATER, 0.1f);   /* pass only fragments whose alpha is greater than 0.1 */
glEnable(GL_ALPHA_TEST);         /* switch the alpha test on                            */

This will only allow pixels with alpha values greater than 0.1 to write to the colour or Z buffers. You have to use this with care though - bear in mind that if your texture filter is set to one of the LINEAR or MIPMAP modes (e.g. GL_LINEAR_MIPMAP_LINEAR), then even if the top-level texture map contains only alpha values of 0.0 and 1.0, intermediate values will creep in during the filtering process.

However, this is another thing that will reduce the number of problems associated with Z-buffered rendering of alpha-blended polygons.

Disabling Z-write for Translucent Polygons.

This is a technique that many people advocate - unfortunately it doesn't really help. The theory is that if a translucent polygon doesn't write to the Z buffer then subsequent polygons that are written behind it will not be occluded.
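For reference, this is how depth writes are usually switched off for the translucent pass (drawTranslucentObjects() is again a hypothetical stand-in for your own scene code); as explained next, this alone does not solve the ordering problem:

glEnable(GL_DEPTH_TEST);     /* still test against the depths written by the opaque pass   */
glDepthMask(GL_FALSE);       /* ...but don't let translucent pixels write new depth values  */
drawTranslucentObjects();
glDepthMask(GL_TRUE);        /* restore depth writes for the next frame                     */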

If all the translucent polygons have the same colour, then this does actually work - but for normal glBlendFunc settings and polygons of differing colours, the order in which polygons are blended into the frame buffer also matters.

Consider two polygons, one red, the other blue - rendered against a green background. Both are 50% transparent. The red one is in front, the blue one is behind, the green background is behind that.

The final colour should be 50% red, 25% green and 25% blue.

Look at the various possible options, and the colour after each rendering step:

  • Scenario 1a: Z buffer enabled, red poly first, blue second.
    1. Green background. (0.0,1.0,0.0)
    2. Render red poly (0.5,0.5,0.0)
    3. Render blue poly (0.5,0.5,0.0) (Z-buffered out)
    WRONG!!
  • Scenario 1b: Z buffer enabled, blue poly first, red second.
    1. Green background. (0.0,1.0,0.0)
    2. Render blue poly (0.0,0.5,0.5)
    3. Render red poly (0.5,0.25,0.25)
    HOORAY!
  • Scenario 2a: Z buffer disabled, red poly first, blue second.
    1. Green background. (0.0,1.0,0.0)
    2. Render red poly (0.5,0.5,0.0)
    3. Render blue poly (0.25,0.25,0.5)
    WRONG!!
  • Scenario 2b: Z buffer disabled, blue poly first, red second.
    1. Green background. (0.0,1.0,0.0)
    2. Render blue poly (0.0,0.5,0.5)
    3. Render red poly (0.5,0.25,0.25)
    HOORAY!

So you see that no matter whether you enable or disable the Z buffer, the colour only comes out right if you render FAR to NEAR - which means that you have to sort your polygons as a function of depth.
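The arithmetic behind those scenarios is just repeated application of result = source*alpha + destination*(1-alpha), i.e. the GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA blend. Here is a small standalone sketch (not OpenGL itself, just the same maths) that reproduces the far-to-near 'HOORAY' result and the Z-disabled near-to-far 'WRONG' result:

#include <stdio.h>

typedef struct { float r, g, b; } Colour;

/* One blending step: result = src*alpha + dst*(1-alpha). */
static Colour blend(Colour src, float alpha, Colour dst)
{
    Colour out;
    out.r = src.r * alpha + dst.r * (1.0f - alpha);
    out.g = src.g * alpha + dst.g * (1.0f - alpha);
    out.b = src.b * alpha + dst.b * (1.0f - alpha);
    return out;
}

int main(void)
{
    Colour green = {0, 1, 0}, red = {1, 0, 0}, blue = {0, 0, 1};

    /* Far-to-near: blue over green, then red over that -> (0.50, 0.25, 0.25) */
    Colour good = blend(red, 0.5f, blend(blue, 0.5f, green));

    /* Near-to-far with the Z buffer disabled: red first, then blue -> (0.25, 0.25, 0.50) */
    Colour bad = blend(blue, 0.5f, blend(red, 0.5f, green));

    printf("far-to-near : %.2f %.2f %.2f\n", good.r, good.g, good.b);
    printf("near-to-far : %.2f %.2f %.2f\n", bad.r, bad.g, bad.b);
    return 0;
}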

There are other algorithms entailing use of "destination alpha" - but they suffer from similar problems.

Depth Sorting

[Image: Red overlaps green, which overlaps blue, which overlaps red.]

The next thing that most people consider is to sort the translucent polygons as a function of Z depth.
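A typical first attempt looks something like the sketch below, which sorts on a single per-polygon depth key (here, hypothetically, the distance from the eye to the polygon's centroid) and draws the furthest polygons first. As the rest of this section shows, any single key like this is only an approximation:

#include <stdlib.h>

/* Hypothetical per-polygon record; eyeDepth is the distance from the eye
   to the polygon's centroid, recomputed whenever the camera moves. */
typedef struct {
    float eyeDepth;
    /* ...vertex data, texture handle, etc... */
} TransPoly;

/* qsort comparator: larger depth (further away) sorts first. */
static int furthestFirst(const void *a, const void *b)
{
    float da = ((const TransPoly *)a)->eyeDepth;
    float db = ((const TransPoly *)b)->eyeDepth;
    return (da < db) - (da > db);
}

void drawTranslucent(TransPoly *polys, size_t count)
{
    qsort(polys, count, sizeof(TransPoly), furthestFirst);
    /* ...then submit the polygons to OpenGL in this order... */
}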

To be perfect - even sorting them isn't enough. You may have to split polygons up on-the-fly to get *perfect* rendering. Consider the pathological case in the image above.

There is no way to sort this to make it work without splitting at least one of the polygons into two.

This looks like an unlikely situation - but it's really not.

How to Sort.

Worse still, if you decide to split and sort polygons (or just to sort and hope that the pathological overlap case doesn't show up), what key do you sort on? The center of the polygon? The nearest vertex? The furthest?

Look what can happen when a translucent green blob alien (C) stands in front of a window (B)...The observer is standing at (A). Here is a plan view of the two polygons and our eye:

[Image: Sort by what.png - a plan view of the observer (A), the window (B) and the translucent alien (C).]

In this example, the center of polygon 'B' is closer to the eye than the center of polygon 'C' - but B is behind C! How about sorting by the nearest vertex? Nope - B still sorts in front. How about by the furthest? Nope - B still comes out "in front". You have to compare the 'span' of C against the 'span' of B...and that kind of comparison is no longer transitive, which does bad things to some sort algorithms when you give them the three mutually overlapping polygons shown earlier. Some sort algorithms never terminate on that input because R>G, G>B but B>R !!!

BSP Tree Sorting

Depth peeling

GL_SAMPLE_ALPHA_TO_COVERAGE
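On a multisampled framebuffer, alpha-to-coverage turns each fragment's alpha value into a multisample coverage mask, which gives an order-independent, 'screen door' style of transparency that works reasonably well for cutout-type surfaces. Assuming you already have a multisample-capable context, a minimal setup is just:

glEnable(GL_MULTISAMPLE);                 /* requires a multisampled framebuffer                */
glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE);    /* convert fragment alpha into a sample coverage mask */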

Conclusions.

The upshot of this is that you simply can't render translucent objects in an arbitrary order without special consideration. If you have enough translucent surfaces moving around in a sufficiently complex manner, you will find it very hard to avoid errors with acceptable realtime algorithms.

It's largely a matter of what you are prepared to tolerate and what you know a priori about your scene content.