Transparency Sorting

From OpenGL Wiki
Revision as of 07:56, 12 April 2011 by Alfonse (talk | contribs) (Making this more reasonable.)

Blending can be used to make objects appear transparent. However, blending alone is not enough. There are a number of steps that you must take to make transparency work.

When you draw things with blending turned on, the renderer reads back pixels from the frame buffer, mixes in the new color and puts the pixels back where they came from.
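The usual setup for this mixing is glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), which computes, per color channel, result = src * src.a + dst * (1 - src.a). A minimal sketch of that arithmetic in plain C++ (no GL context needed; blendChannel is an illustrative name, not a GL function):

```cpp
#include <cassert>

// "Over" blending, as configured by:
//   glEnable(GL_BLEND);
//   glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// Per color channel: result = src * srcAlpha + dst * (1 - srcAlpha).
float blendChannel(float src, float srcAlpha, float dst)
{
    return src * srcAlpha + dst * (1.0f - srcAlpha);
}
```

An alpha of 1.0 keeps the incoming (source) color, 0.0 keeps whatever was already in the framebuffer (the destination), and 0.5 is an even mix of the two.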

Blending and the Z buffer

First - the bad news. REALLY bad news.

The Z buffer doesn't work as you might hope for transparent polygons.

The problem is that the Z buffer prevents OpenGL from drawing pixels that are behind things that have already been drawn. Generally, that's pretty convenient, but when the thing in front is translucent, you need to see the things that are behind it.
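To see the failure concretely, here is a one-pixel software model of depth-tested blending (plain C++; the names are illustrative and this is a sketch of the logic, not of how hardware works):

```cpp
#include <cassert>

// A single framebuffer pixel: a depth value and one color channel.
struct Pixel { float depth; float color; };

// Draw one fragment with the depth test and depth writes on,
// blending with the "over" operator.
void drawFragment(Pixel& p, float depth, float color, float alpha)
{
    if (depth >= p.depth)   // depth test: reject fragments behind what's drawn
        return;
    p.color = color * alpha + p.color * (1.0f - alpha);
    p.depth = depth;        // depth write
}
```

Drawing a near translucent pane first, then a farther opaque wall, makes the wall fail the depth test: the wall never shows through the pane. Drawing the wall first and the pane second gives the expected blend.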

Alpha test

Quite often, transparency is a binary decision. The texels of a texture mapped to the polygon cause parts of it to be completely opaque and other parts to be completely transparent. The texture's alpha values are used to make "cutout" objects. If you want to draw something complicated like a tree, you probably can't afford a polygon for every single leaf and branch; so you use an alpha texture map and a photo of a tree.

The point is that this polygon may well have no translucent pixels. Pixels are either opaque, or completely transparent. Texels that are opaque have an alpha of 1.0, and texels that are transparent have an alpha of 0.0. You can still use depth buffering with this, but you will need to deal with one fact.

Blending with an alpha of 0 will, with the correct blend modes, cause the destination color to be written. However, if the depth test and depth writes are still on, then the depth buffer will be updated even for pixels where the alpha value from the texture was 0. That's because the fragment is still being written; it's just being written with the color of whatever was there before.

To get around this, you will need to stop the fragment from being written entirely. This can be done with fragment shaders, using the [[GLSL Core Language#Control flow|discard]] command:

#version 330
in vec2 texCoord;
out vec4 outColor;

uniform sampler2D theTexture;

void main()
{
  vec4 texel = texture(theTexture, texCoord);
  if(texel.a < 0.5)
    discard;
  outColor = texel;
}

With this shader, you don't need to change the depth buffer parameters or the order in which you render anything. It will cause any fragment whose texel alpha is below 0.5 to simply not be rendered.

Note that texture filtering is still applied to this. So if there is any kind of GL_LINEAR filtering, the values you get will not always be 1.0 and 0.0, even if those are the alpha values in the texture.

Fixed-function code can use alpha-testing to do the same thing.

glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.5f);

This will only allow pixels with alpha values greater than 0.5 to write to the color or depth buffers.

Opaque first

The first step, upon which all the other steps depend, is to make sure you draw all your opaque polygons before you draw any translucent ones. This is easy to do in most applications and solves most of the problems. The only thing that remains is when you try to render one translucent polygon behind another.

For many applications there are so few translucent objects that this is "good enough". As long as there is no overlap (from the perspective of the camera), this may simply work out.

Standard translucent

If the above methods do not work or aren't good enough, then you will have to use the standard method for dealing with translucent objects.

This process involves disabling writes to the depth buffer and sorting transparent objects and/or polygons by distance from the camera, then drawing them back-to-front.
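A sketch of the sorting half, assuming each object can be ranked by the distance from its center to the eye (plain C++; the struct and function names are illustrative). After drawing the opaque objects, you would call glDepthMask(GL_FALSE), draw the sorted list with blending enabled, then restore depth writes with glDepthMask(GL_TRUE):

```cpp
#include <algorithm>
#include <vector>

// Illustrative stand-in for a translucent object: a center point to sort by.
struct Translucent {
    float x, y, z;   // object center, world space
    int id;          // which object this is
};

// Squared distance from the object's center to the eye.
// (No sqrt needed just for ordering.)
float distSq(const Translucent& t, float ex, float ey, float ez)
{
    float dx = t.x - ex, dy = t.y - ey, dz = t.z - ez;
    return dx * dx + dy * dy + dz * dz;
}

// Back-to-front: the farthest object is drawn first, so nearer
// translucent surfaces blend over it.
void sortBackToFront(std::vector<Translucent>& objs,
                     float ex, float ey, float ez)
{
    std::sort(objs.begin(), objs.end(),
              [&](const Translucent& a, const Translucent& b) {
                  return distSq(a, ex, ey, ez) > distSq(b, ex, ey, ez);
              });
}
```

As the next section shows, sorting whole objects by their centers is only an approximation; it can and does pick the wrong order.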

Depth Sorting

(Image: three mutually overlapping polygons; red overlaps green, which overlaps blue, which overlaps red.)

The next thing that most people consider is to sort the translucent polygons as a function of Z depth.

Even sorting them isn't enough to be perfect. You may have to split polygons up on the fly to get *perfect* rendering. Consider the pathological case in the image to the right.

There is no way to sort this to make it work without splitting at least one of the polygons into two.

This looks like an unlikely situation - but it's really not.

How to Sort

Worse still, if you decide to split and sort polygons (or just to sort and hope that the pathological overlap case doesn't show up), what key do you sort on? The center of the polygon? The nearest vertex? The furthest?

Look what can happen when a translucent green blob alien (C) stands in front of a window (B). The observer is standing at (A). Here is a plan view of the two polygons and the eye:

(Image: Sort by what.png)

In this example, the center of polygon B is closer to the eye than the center of polygon C - but B is behind C! How about sorting by the nearest vertex? Nope - B is still "in front". By the furthest vertex? Nope - B still comes out "in front". You have to compare the whole depth 'span' of C against the 'span' of B... which does bad things to some sort algorithms when you feed them the three mutually overlapping polygons above. Some sort algorithms never terminate on that input, because R > G and G > B, but B > R!
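This failure can be checked with numbers. In the plan view below (made-up coordinates, eye at the origin), B is the slanted window running from (-1, 1) to (4, 8) and C is the alien from (0.9, 3) to (6, 7). Along every sight line where the two overlap, C is in front of B, so B must be drawn first; yet all three candidate sort keys rank B as the nearer polygon, so back-to-front sorting on any of them draws C first:

```cpp
#include <algorithm>
#include <cmath>

// A polygon reduced to a line segment in the 2D plan view (x, z),
// with the eye at the origin. Purely illustrative geometry.
struct Seg { float x0, z0, x1, z1; };

float dist(float x, float z) { return std::sqrt(x * x + z * z); }

// Candidate sort key 1: distance to the nearest vertex.
float nearestVertex(const Seg& s)
{
    return std::min(dist(s.x0, s.z0), dist(s.x1, s.z1));
}

// Candidate sort key 2: distance to the furthest vertex.
float furthestVertex(const Seg& s)
{
    return std::max(dist(s.x0, s.z0), dist(s.x1, s.z1));
}

// Candidate sort key 3: distance to the polygon's center.
float centerDist(const Seg& s)
{
    return dist((s.x0 + s.x1) * 0.5f, (s.z0 + s.z1) * 0.5f);
}
```

With B = {-1, 1, 4, 8} and C = {0.9, 3, 6, 7}, every key comes out smaller for B, so every key claims B is nearer and should be drawn after C - the wrong order, since C stands in front of B wherever they overlap.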

BSP Tree Sorting

Depth peeling



The upshot of all this is that you can't render translucent objects in just any order without special consideration. If you have enough translucent surfaces moving around in a sufficiently complex manner, you will find it very hard to avoid errors with acceptable realtime algorithms.

It's largely a matter of what you are prepared to tolerate and what you know a priori about your scene content.