https://www.khronos.org/opengl/wiki_opengl/api.php?action=feedcontributions&user=Marco&feedformat=atomOpenGL Wiki - User contributions [en]2020-09-21T16:39:34ZUser contributionsMediaWiki 1.31.6https://www.khronos.org/opengl/wiki_opengl/index.php?title=Talk:General_OpenGL&diff=1568Talk:General OpenGL2006-05-01T11:48:33Z<p>Marco: </p>
<hr />
<div>I think the layout of this page should be changed to be more readable. [[User:Marco|marco]] 11:31, 30 April 2006 (EDT)<br />
<br />
There are many outdated or now wrong articles in this section. To change them all will be a big task. --[[User:Marco|marco]] 07:48, 1 May 2006 (EDT)</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=FAQ/Clipping,_Culling,_and_Visibility_Testing&diff=1567FAQ/Clipping, Culling, and Visibility Testing2006-05-01T11:43:08Z<p>Marco: change layout</p>
<hr />
<div>; [[Clipping]] : How do I tell if a vertex has been clipped or not?<br />
: When an OpenGL primitive moves placing one vertex outside the window, suddenly the color or texture mapping is incorrect. What's going on?<br />
: I know my geometry is inside the view volume. How can I turn off OpenGL's view-volume clipping to maximize performance?<br />
:When I move the viewpoint close to an object, it starts to disappear. How can I disable OpenGL's zNear clipping plane?<br />
; [[Occlusion Query]] : How do I perform occlusion or visibility testing?<br />
; [[Stencil Mask]] : How do I render to a nonrectangular viewport?<br />
; [[Raster Position And Clipping]] : How do I draw glBitmap() or glDrawPixels() primitives that have an initial glRasterPos() outside the window's left or bottom edge?<br />
; [[Scissor Test And Framebuffer Clearing]] : Why doesn't glClear() work for areas outside the scissor rectangle?<br />
; [[Culling]] : How does face culling work? Why doesn't it use the surface normal?<br />
<br />
[[Category:General OpenGL]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Face_Culling&diff=1566Face Culling2006-05-01T11:42:37Z<p>Marco: </p>
<hr />
<div>== How does face culling work? Why doesn't it use the surface normal? ==<br />
<br />
OpenGL face culling calculates the signed area of the filled primitive in window coordinate space. The signed area is positive when the window coordinates are in counter-clockwise order and negative when clockwise. An application uses glFrontFace() to specify whether counter-clockwise or clockwise ordering is interpreted as front-facing, and glCullFace() to select whether front faces or back faces are culled. Finally, face culling must be enabled with a call to glEnable(GL_CULL_FACE);.<br />
<br />
OpenGL uses your primitive's window space projection to determine face culling for two reasons. To create interesting lighting effects, it's often desirable to specify normals that aren't orthogonal to the surface being approximated. If these normals were used for face culling, it might cause some primitives to be culled erroneously. Also, a dot-product culling scheme could require a matrix inversion, which isn't always possible (i.e., when the matrix is singular), whereas the signed area in window coordinate space is always defined.<br />
<br />
However, some OpenGL implementations support the GL_EXT_cull_vertex extension. If this extension is present, an application may specify a homogeneous eye position in object space. Vertices are flagged as culled based on the dot product of the current normal with a vector from the vertex to the eye. If all vertices of a primitive are culled, the primitive isn't rendered. In many circumstances, using this extension results in faster rendering, because it culls faces at an earlier stage of the rendering pipeline.</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Scissor_Test_And_Framebuffer_Clearing&diff=1565Scissor Test And Framebuffer Clearing2006-05-01T11:41:18Z<p>Marco: </p>
<hr />
<div>== Why doesn't glClear() work for areas outside the scissor rectangle? ==<br />
<br />
The OpenGL Specification states that glClear() only clears the scissor rectangle when the scissor test is enabled. If you want to clear the entire window, use the code:<br />
<br />
<pre> glDisable (GL_SCISSOR_TEST);<br />
glClear (...);<br />
glEnable (GL_SCISSOR_TEST);</pre></div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Raster_Position_And_Clipping&diff=1564Raster Position And Clipping2006-05-01T11:38:38Z<p>Marco: </p>
<hr />
<div>== How do I draw glBitmap() or glDrawPixels() primitives that have an initial glRasterPos() outside the window's left or bottom edge? ==<br />
<br />
When the raster position is set outside the window, it's often outside the view volume and subsequently marked as invalid. Rendering the glBitmap and glDrawPixels primitives won't occur with an invalid raster position. Because glBitmap/glDrawPixels produce pixels up and to the right of the raster position, it appears impossible to render this type of primitive clipped by the left and/or bottom edges of the window.<br />
<br />
However, here's an often-used trick: Set the raster position to a valid value inside the view volume. Then make the following call:<br />
<br />
<pre> glBitmap (0, 0, 0, 0, xMove, yMove, NULL);</pre><br />
<br />
This tells OpenGL to render a no-op bitmap, but move the current raster position by (xMove,yMove). Your application will supply (xMove,yMove) values that place the raster position outside the view volume. Follow this call with the glBitmap() or glDrawPixels() call that does the rendering you desire.</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Clipping_FAQ&diff=1563Clipping FAQ2006-05-01T11:36:46Z<p>Marco: </p>
<hr />
<div>== How do I tell if a vertex has been clipped or not? ==<br />
<br />
You can use the OpenGL Feedback feature to determine if a vertex will be clipped or not. After you're in Feedback mode, simply send the vertex in question as a GL_POINTS primitive. Then switch back to GL_RENDER mode and check the size of the Feedback buffer. A size of zero indicates a clipped vertex.<br />
<br />
Typically, OpenGL implementations don't provide a fast feedback mechanism. It might be faster to perform the clip test manually. To do so, construct six plane equations corresponding to the clip-coordinate view volume and transform them into object space by the current ModelView matrix. A point is clipped if it violates any of the six plane equations.<br />
<br />
Here's a [http://www.opengl.org/resources/faq/technical/viewcull.c GLUT example] that shows how to calculate the object-space view-volume planes and clip test bounding boxes against them.<br />
<br />
Here is a tutorial titled [http://www.markmorley.com/opengl/frustumculling.html Frustum Culling in OpenGL].<br />
<br />
== When an OpenGL primitive moves placing one vertex outside the window, suddenly the color or texture mapping is incorrect. What's going on? ==<br />
<br />
There are two potential causes for this.<br />
<br />
When a primitive lies partially outside the window, it often crosses the view volume boundary. OpenGL must clip any primitive that crosses the view volume boundary. To clip a primitive, OpenGL must interpolate the color values, so they're correct at the new clip vertex. This interpolation is perspective correct. However, when a primitive is rasterized, the color values are often generated using linear interpolation in window space, which isn't perspective correct. The difference in generated color values means that for any given barycentric coordinate location on a filled primitive, the color values may be different depending on whether the primitive is clipped. If the color values generated during rasterization were perspective correct, this problem wouldn't exist.<br />
<br />
For some OpenGL implementations, texture coordinates generated during rasterization aren't perspective correct. However, you can usually make them perspective correct by calling glHint(GL_PERSPECTIVE_CORRECTION_HINT,GL_NICEST);. Colors generated at the rasterization stage aren't perspective correct in almost every OpenGL implementation, and can't be made so. For this reason, you're more likely to encounter this problem with colors than texture coordinates.<br />
<br />
A second reason the color or texture mapping might be incorrect for a clipped primitive is because the color values or texture coordinates are nonplanar. Color values are nonplanar when the three color components at each vertex don't lie in a plane in 3D color space. 2D texture coordinates are always planar. However, in this context, the term nonplanar is used for texture coordinates that look up a texel area that isn't congruent in shape to the primitive being textured.<br />
<br />
Nonplanar colors or texture coordinates aren't a problem for triangular primitives, but the problem may occur with GL_QUADS, GL_QUAD_STRIP and GL_POLYGON primitives. When using nonplanar color values or texture coordinates, there isn't a correct way to generate new values associated with clipped vertices. Even perspective-correct interpolation can create differences between clipped and nonclipped primitives. The solution to this problem is to not use nonplanar color values and texture coordinates.<br />
<br />
== I know my geometry is inside the view volume. How can I turn off OpenGL's view-volume clipping to maximize performance? ==<br />
<br />
Standard OpenGL doesn't provide a mechanism to disable the view-volume clipping test; thus, it will occur for every primitive you send.<br />
<br />
Some implementations of OpenGL support the GL_EXT_clip_volume_hint extension. If the extension is available, a call to glHint(GL_CLIP_VOLUME_CLIPPING_HINT_EXT,GL_FASTEST) will inform OpenGL that the geometry is entirely within the view volume and that view-volume clipping is unnecessary. Normal clipping can be resumed by setting this hint to GL_DONT_CARE. When clipping is disabled with this hint, results are undefined if geometry actually falls outside the view volume.<br />
<br />
== When I move the viewpoint close to an object, it starts to disappear. How can I disable OpenGL's zNear clipping plane? ==<br />
<br />
You can't. If you think about it, it makes sense: What if the viewpoint is in the middle of a scene? Certainly some geometry is behind the viewer and needs to be clipped. Rendering it will produce undesirable results.<br />
<br />
For correct perspective and depth buffer calculations to occur, setting the zNear clipping plane to 0.0 is also not an option. The zNear clipping plane must be set at a positive (nonzero) distance in front of the eye.<br />
<br />
To avoid the clipping artifacts that can otherwise occur, an application must track the viewpoint location within the scene, and ensure it doesn't get too close to any geometry. You can usually do this with a simple form of collision detection. This FAQ contains more [http://www.opengl.org/resources/faq/technical/miscellaneous.htm#misc0110 information on collision detection] with OpenGL.<br />
<br />
If you're certain that your geometry doesn't intersect any of the view-volume planes, you might be able to use an extension to disable clipping. See the previous question for more information.</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=FAQ/Stencil_Mask&diff=1560FAQ/Stencil Mask2006-05-01T11:30:41Z<p>Marco: </p>
<hr />
<div>== How do I render to a nonrectangular viewport? ==<br />
<br />
OpenGL's stencil buffer can be used to mask the area outside of a non-rectangular viewport. With stencil enabled and stencil test appropriately set, rendering can then occur in the unmasked area. Typically an application will write the stencil mask once, and then render repeated frames into the unmasked area.<br />
<br />
As with the depth buffer, an application must ask for a stencil buffer when the window and context are created.<br />
<br />
An application will perform such a rendering as follows:<br />
<br />
<pre> /* Enable stencil test and leave it enabled throughout */<br />
glEnable (GL_STENCIL_TEST);<br />
<br />
 /* Prepare to write a single bit into the stencil buffer in the area outside the viewport */<br />
 glStencilFunc (GL_ALWAYS, 0x1, 0x1);<br />
 glStencilOp (GL_REPLACE, GL_REPLACE, GL_REPLACE);<br />
<br />
/* Render a set of geometry corresponding to the area outside the viewport here */<br />
<br />
/* The stencil buffer now has a single bit painted in the area outside the viewport */<br />
<br />
 /* Prepare to render the scene in the viewport; stop modifying the stencil buffer */<br />
 glStencilFunc (GL_EQUAL, 0x0, 0x1);<br />
 glStencilOp (GL_KEEP, GL_KEEP, GL_KEEP);<br />
<br />
/* Render the scene inside the viewport here */<br />
<br />
/* ...render the scene again as needed for animation purposes */</pre><br />
<br />
After a single bit is painted in the area outside the viewport, an application may render geometry to either the area inside or outside the viewport. To render to the inside area, use glStencilFunc(GL_EQUAL,0x0,0x1), as the code above shows. To render to the area outside the viewport, use glStencilFunc(GL_EQUAL,0x1,0x1).<br />
<br />
You can obtain similar results using only the depth test. After rendering a 3D scene to a rectangular viewport, an app can clear the depth buffer and render the nonrectangular frame.</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Occlusion_Query&diff=1559Occlusion Query2006-05-01T11:27:10Z<p>Marco: </p>
<hr />
<div>== How do I perform occlusion or visibility testing? ==<br />
<br />
''Note: this answer is outdated. OpenGL 1.5 and later support occlusion queries directly (glGenQueries, glBeginQuery with GL_SAMPLES_PASSED, glEndQuery, glGetQueryObjectuiv); the text below predates that feature.''<br />
<br />
OpenGL provides no direct support for determining whether a given primitive will be visible in a scene for a given viewpoint. At worst, an application will need to perform these tests manually. The previous question contains information on how to do this.<br />
<br />
The code example from question 10.010 was combined with Nate Robins' excellent viewing tutorial to produce this [http://lynx.inertiagames.com/~michael/OPENGLTUTORS.zip view culling example code].<br />
<br />
Higher-level APIs, such as Fahrenheit Large Model, may provide this feature.<br />
<br />
HP OpenGL platforms support an Occlusion Culling extension. To use this extension, enable the occlusion test, render some bounding geometry, and call glGetBooleanv() to obtain the visibility status of the geometry.</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=FAQ/Depth_Buffer&diff=1557FAQ/Depth Buffer2006-04-30T19:00:00Z<p>Marco: </p>
<hr />
<div>; [[Enable Depth Buffering]] : How do I make depth buffering work? <br />
; [[Depth Buffer and Perspective Rendering]] : Depth buffering doesn't work in my perspective rendering. What's going on?<br />
; [[Write Image To Depth Buffer]] : How do I write a previously stored depth image to the depth buffer?<br />
; [[Depth Buffer Precision]] : Depth buffering seems to work, but polygons seem to bleed through polygons that are in front of them. What's going on?<br />
: Why is my depth buffer precision so poor?<br />
: Why is there more precision at the front of the depth buffer?<br />
: There is no way that a standard-sized depth buffer will have enough precision for my astronomically large scene. What are my options?<br />
; How do I turn off the zNear clipping plane? : See [http://www.opengl.org/resources/faq/technical/clipping.htm#0050 this question] in the Clipping section.<br />
<br />
<br />
<br />
[[Category:General OpenGL]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Depth_Buffer_Precision&diff=1555Depth Buffer Precision2006-04-30T18:58:51Z<p>Marco: </p>
<hr />
<div>== Depth buffering seems to work, but polygons seem to bleed through polygons that are in front of them. What's going on? ==<br />
<br />
You may have configured your zNear and zFar clipping planes in a way that severely limits your depth buffer precision. Generally, this is caused by a zNear clipping plane value that's too close to 0.0. As the zNear clipping plane is set increasingly closer to 0.0, the effective precision of the depth buffer decreases dramatically. Moving the zFar clipping plane further away from the eye always has a negative impact on depth buffer precision, but it's not one as dramatic as moving the zNear clipping plane.<br />
<br />
The OpenGL Reference Manual description for glFrustum() relates depth precision to the zNear and zFar clipping planes by saying that roughly log2(zFar/zNear) bits of precision are lost. Clearly, as zNear approaches zero, this equation approaches infinity.<br />
<br />
While the blue book description is good at pointing out the relationship, it's somewhat inaccurate. As the ratio (zFar/zNear) increases, less precision is available near the back of the depth buffer and more precision is available close to the front of the depth buffer. So primitives are more likely to interact in Z if they are further from the viewer.<br />
<br />
It's possible that you simply don't have enough precision in your depth buffer to render your scene. See the last question in this section for more info.<br />
<br />
It's also possible that you are drawing coplanar primitives. Round-off errors or differences in rasterization typically create "Z fighting" for coplanar primitives. See [[Drawing Lines over Polygons]] for techniques that address this.<br />
<br />
== Why is my depth buffer precision so poor? ==<br />
<br />
The depth buffer precision in eye coordinates is strongly affected by the ratio of zFar to zNear, the zFar clipping plane, and how far an object is from the zNear clipping plane.<br />
<br />
You need to do whatever you can to push the zNear clipping plane out and pull the zFar plane in as much as possible.<br />
<br />
To be more specific, consider the transformation of depth from eye coordinates<br />
<br />
x<sub>e</sub>, y<sub>e</sub>, z<sub>e</sub>, w<sub>e</sub><br />
<br />
to window coordinates<br />
<br />
x<sub>w</sub>, y<sub>w</sub>, z<sub>w</sub><br />
<br />
with a perspective projection matrix specified by<br />
<br />
glFrustum(l, r, b, t, n, f);<br />
<br />
and assume the default viewport transform. The clip coordinates z<sub>c</sub> and w<sub>c</sub> are<br />
<br />
z<sub>c</sub> = -z<sub>e</sub>* (f+n)/(f-n) - w<sub>e</sub>* 2*f*n/(f-n)<br />
w<sub>c</sub> = -z<sub>e</sub><br />
<br />
Why the negations? OpenGL wants to present to the programmer a right-handed coordinate system before projection and left-handed coordinate system after projection.<br />
<br />
and the ndc coordinate:<br />
<br />
z<sub>ndc</sub> =&nbsp;z<sub>c</sub> / w<sub>c</sub> = [ -z<sub>e</sub> * (f+n)/(f-n) - w<sub>e</sub> * 2*f*n/(f-n) ] / -z<sub>e</sub><br />
= (f+n)/(f-n) + (w<sub>e</sub> / z<sub>e</sub>) * 2*f*n/(f-n)<br />
<br />
The viewport transformation scales and offsets by the depth range (assume it to be [0, 1]) and then scales by s = 2<sup>b</sup> &minus; 1, where b is the bit depth of the depth buffer:<br />
<br />
z<sub>w</sub> = s * [ (w<sub>e</sub> / z<sub>e</sub>) * f*n/(f-n) + 0.5 * (f+n)/(f-n) + 0.5 ]<br />
<br />
Let's rearrange this equation to express z<sub>e</sub> / w<sub>e</sub> as a function of z<sub>w</sub>:<br />
<br />
z<sub>e</sub> / w<sub>e</sub> = f*n/(f-n) / ((z<sub>w</sub> / s) - 0.5 * (f+n)/(f-n) - 0.5)<br />
= f * n / ((z<sub>w</sub> / s) * (f-n) - 0.5 * (f+n) - 0.5 * (f-n))<br />
= f * n / ((z<sub>w</sub> / s) * (f-n) - f) [*]<br />
<br />
Now let's look at two points, the zNear clipping plane and the zFar clipping plane:<br />
<br />
z<sub>w</sub> = 0&nbsp;=&gt; z<sub>e</sub> / w<sub>e</sub> = f * n / (-f) = -n<br />
z<sub>w</sub> = s =&gt; z<sub>e</sub> / w<sub>e</sub> = f * n / ((f-n) - f) = -f<br />
<br />
In a fixed-point depth buffer, z<sub>w</sub> is quantized to integers. The next representable z-buffer depths away from the clip planes are 1 and s-1:<br />
<br />
z<sub>w</sub> = 1 =&gt; z<sub>e</sub> / w<sub>e</sub> = f * n / ((1/s) * (f-n) - f)<br />
z<sub>w</sub> = s-1 =&gt; z<sub>e</sub> / w<sub>e</sub> = f * n / (((s-1)/s) * (f-n) - f)<br />
<br />
Now let's plug in some numbers, for example, n = 0.01, f = 1000 and s = 65535 (i.e., a 16-bit depth buffer)<br />
<br />
z<sub>w</sub> = 1 =&gt; z<sub>e</sub> / w<sub>e</sub> = -0.01000015<br />
z<sub>w</sub> = s-1 =&gt; z<sub>e</sub> / w<sub>e</sub> = -395.90054<br />
<br />
Think about this last line. Everything at eye coordinate depths from -395.9 to -1000 has to map into either 65534 or 65535 in the z buffer. Almost two thirds of the distance between the zNear and zFar clipping planes will have one of two z-buffer values!<br />
<br />
To further analyze the z-buffer resolution, let's take the derivative of [*] with respect to z<sub>w</sub>:<br />
<br />
d (z<sub>e</sub> / w<sub>e</sub>) / d z<sub>w</sub> = - f * n * (f-n) * (1/s) / ((z<sub>w</sub> / s) * (f-n) - f)<sup>2</sup><br />
<br />
Now evaluate it at z<sub>w</sub> = s:<br />
<br />
d (z<sub>e</sub> / w<sub>e</sub>) / d z<sub>w</sub> = - f * (f-n) * (1/s) / n<br />
= - f * (f/n-1) / s [**]<br />
<br />
If you want your depth buffer to be useful near the zFar clipping plane, you need to keep this value to less than the size of your objects in eye space (for most practical uses, world space).<br />
<br />
== Why is there more precision at the front of the depth buffer? ==<br />
<br />
After the projection matrix transforms eye coordinates into clip coordinates, the XYZ vertex values are divided by their clip-coordinate W value, which results in normalized device coordinates. This step is known as the perspective divide. The clip coordinate W value represents the distance from the eye. As the distance from the eye increases, 1/W approaches 0. Therefore, X/W and Y/W also approach zero, causing the rendered primitives to occupy less screen space and appear smaller. This is how computers simulate a perspective view.<br />
<br />
As in reality, motion toward or away from the eye has a less profound effect for objects that are already in the distance. For example, if you move six inches closer to the computer screen in front of your face, its apparent size should increase quite dramatically. On the other hand, if the computer screen were already 20 feet away from you, moving six inches closer would have little noticeable impact on its apparent size. The perspective divide takes this into account.<br />
<br />
As part of the perspective divide, Z is also divided by W with the same results. For objects that are already close to the back of the view volume, a change in distance of one coordinate unit has less impact on Z/W than if the object is near the front of the view volume. To put it another way, an object coordinate Z unit occupies a larger slice of NDC-depth space close to the front of the view volume than it does near the back of the view volume.<br />
<br />
In summary, the perspective divide, by its nature, causes more Z precision close to the front of the view volume than near the back.<br />
<br />
A previous question in this section contains related information.<br />
<br />
== There is no way that a standard-sized depth buffer will have enough precision for my astronomically large scene. What are my options? ==<br />
<br />
The typical approach is to use a multipass technique. The application might divide the geometry database into regions that don't interfere with each other in Z. The geometry in each region is then rendered, starting at the furthest region, with a clear of the depth buffer before each region is rendered. This way the precision of the entire depth buffer is made available to each region.<br />
<br />
<br />
[[Category:Depth Buffering]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Category:General_OpenGL&diff=1551Category:General OpenGL2006-04-30T18:44:14Z<p>Marco: </p>
<hr />
<div>[[General OpenGL]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Write_Image_To_Depth_Buffer&diff=1550Write Image To Depth Buffer2006-04-30T18:40:25Z<p>Marco: </p>
<hr />
<div>Use the glDrawPixels() command, with the format parameter set to GL_DEPTH_COMPONENT. You may want to mask off the color buffer when you do this, with a call to glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE).<br />
<br />
[[Category:Depth Buffering]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Depth_Buffer_and_Perspective_Rendering&diff=1549Depth Buffer and Perspective Rendering2006-04-30T18:38:25Z<p>Marco: </p>
<hr />
<div>Make sure the zNear and zFar clipping planes are specified correctly in your calls to glFrustum() or gluPerspective().<br />
<br />
A mistake many programmers make is to specify a zNear clipping plane value of 0.0 or a negative value, which isn't allowed. Both the zNear and zFar clipping planes must be positive (not zero or negative) values that represent distances in front of the eye.<br />
<br />
Specifying a zNear clipping plane value of 0.0 to gluPerspective() won't generate an OpenGL error, but it might cause depth buffering to act as if it's disabled. A negative zNear or zFar clipping plane value would produce undesirable results.<br />
<br />
A zero or negative zNear or zFar clipping plane value, when passed to glFrustum(), generates a GL_INVALID_VALUE error that you can retrieve by calling glGetError(). The function then acts as a no-op.<br />
<br />
[[Category:Depth Buffering]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Enable_Depth_Buffering&diff=1547Enable Depth Buffering2006-04-30T18:36:43Z<p>Marco: Enable Depthbuffering moved to Enable Depth Buffering</p>
<hr />
<div>Your application needs to do at least the following to get depth buffering to work:<br />
<br />
# Ask for a depth buffer when you create your window.<br />
# Place a call to glEnable (GL_DEPTH_TEST) in your program's initialization routine, after a context is created and made current.<br />
# Ensure that your zNear and zFar clipping planes are set correctly and in a way that provides adequate depth buffer precision.<br />
# Pass GL_DEPTH_BUFFER_BIT as a parameter to glClear, typically bitwise OR'd with other values such as GL_COLOR_BUFFER_BIT. <br />
<br />
There are a number of OpenGL example programs available on the Web, which use depth buffering. If you're having trouble getting depth buffering to work correctly, you might benefit from looking at an example program to see what is done differently. This FAQ contains [http://www.opengl.org/resources/faq/technical/gettingstarted.htm#gett0002 links to several web sites that have example OpenGL code].<br />
<br />
[[Category:Depth Buffering]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Enable_Depthbuffering&diff=1548Enable Depthbuffering2006-04-30T18:36:43Z<p>Marco: Enable Depthbuffering moved to Enable Depth Buffering</p>
<hr />
<div>#redirect [[Enable Depth Buffering]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Category:Depth_Buffering&diff=1546Category:Depth Buffering2006-04-30T18:36:04Z<p>Marco: </p>
<hr />
<div>[[:Category:General OpenGL]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=ID-Buffer&diff=1544ID-Buffer2006-04-30T17:08:44Z<p>Marco: </p>
<hr />
<div>{{stub}}<br />
<br />
[[Image:Simple_opengl_pipeline_2.png|frame|The OpenGL pipeline]]<br />
<br />
[[Category:Algorithm]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Category:Polygon_Offset&diff=1543Category:Polygon Offset2006-04-30T16:49:35Z<p>Marco: </p>
<hr />
<div>Polygon Offset is a mechanism for applying a small offset to the depth value of each fragment generated by polygon rasterization, so that coplanar primitives can be rendered without depth-buffer artifacts.</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Drawing_Coplanar_Primitives_Widthout_Polygon_Offset&diff=1542Drawing Coplanar Primitives Widthout Polygon Offset2006-04-30T16:48:40Z<p>Marco: </p>
<hr />
<div>You can simulate the effects of polygon offset by tinkering with glDepthRange(). For example, you might code the following:<br />
<br />
<pre> glDepthRange (0.1, 1.0);<br />
/* Draw underlying geometry */<br />
glDepthRange (0.0, 0.9);<br />
/* Draw overlying geometry */</pre><br />
<br />
This code provides a fixed offset in Z, but doesn't account for the polygon slope. It's roughly equivalent to using glPolygonOffset with a factor parameter of 0.0.<br />
<br />
You can render coplanar primitives with the stencil buffer in many creative ways. The OpenGL Programming Guide outlines one well-known method. The algorithm for drawing a polygon and its outline is as follows:<br />
<br />
# Draw the outline into the color, depth, and stencil buffers.<br />
# Draw the filled primitive into the color buffer and depth buffer, but only where the stencil buffer is clear.<br />
# Mask off the color and depth buffers, and render the outline to clear the stencil buffer.<br />
<br />
On some SGI OpenGL platforms, an application can use the SGIX_reference_plane extension. With this extension, the user specifies a plane equation in object coordinates corresponding to a set of coplanar primitives. You can enable or disable the plane. When the plane is enabled, all fragment Z values will derive from the specified plane equation. Thus, for any given fragment XY location, the depth value is guaranteed to be identical regardless of which primitive rendered it.<br />
<br />
[[Category:Polygon Offset]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Drawing_Lines_over_Polygons&diff=1541Drawing Lines over Polygons2006-04-30T16:48:23Z<p>Marco: change layout</p>
<hr />
<div>; [[Basics Of Polygon Offset]] : What are the basics for using polygon offset? <br />
; [[Parameters of Polygon Offset]] : What are the two parameters in a glPolygonOffset() call and what do they mean? <br />
; [[Different Specification Of Polygon Offset]] : What's the difference between the OpenGL 1.0 polygon offset extension and OpenGL 1.1 (and later) polygon offset interfaces?<br />
; [[Polygon Offset and Point and Lines]] : Why doesn't polygon offset work when I draw line primitives over filled primitives?<br />
; [[Drawing Coplanar Primitives Widthout Polygon Offset]] : What other options do I have for drawing coplanar primitives when I don't want to use polygon offset?<br />
<br />
[[Category:General OpenGL]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Polygon_Offset_and_Point_and_Lines&diff=1540Polygon Offset and Point and Lines2006-04-30T16:45:49Z<p>Marco: </p>
<hr />
<div>Polygon offset, as its name implies, only works with polygonal primitives. It affects only the filled primitives: GL_TRIANGLES, GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN, GL_QUADS, GL_QUAD_STRIP, and GL_POLYGON. Polygon offset will work when you render them with glPolygonMode set to GL_FILL, GL_LINE, or GL_POINT.<br />
<br />
Polygon offset doesn't affect non-polygonal primitives. The GL_POINTS, GL_LINES, GL_LINE_STRIP, and GL_LINE_LOOP primitives can't be offset with glPolygonOffset().<br />
<br />
[[Category:Polygon Offset]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Different_Specification_Of_Polygon_Offset&diff=1539Different Specification Of Polygon Offset2006-04-30T16:42:46Z<p>Marco: </p>
<hr />
<div>The 1.0 polygon offset extension didn't let you apply the offset to filled primitives in line or point mode. Only filled primitives in fill mode could be offset.<br />
<br />
In the 1.0 extension, a bias parameter was added to the normalized (0.0 - 1.0) depth value, in place of the 1.1 units parameter. Typical applications might obtain a good offset by specifying a bias of 0.001.<br />
<br />
See the [http://www.opengl.org/resources/faq/technical/pgonoff.c GLUT example], which renders two cylinders, one using the 1.0 polygon offset extension and the other using the 1.1 polygon offset interface.<br />
<br />
[[Category:Polygon Offset]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Parameters_of_Polygon_Offset&diff=1538Parameters of Polygon Offset2006-04-30T16:41:09Z<p>Marco: </p>
<hr />
<div>Polygon offset allows the application to specify a depth offset with two parameters, factor and units. factor scales the maximum Z slope, with respect to X or Y of the polygon, and units scales the minimum resolvable depth buffer value. The results are summed to produce the depth offset. This offset is applied in screen space, typically with positive Z pointing into the screen.<br />
<br />
The factor parameter is required to ensure correct results for filled primitives that are nearly edge-on to the viewer. In this case, the difference between Z values for the same pixel generated by two coplanar primitives can be as great as the maximum Z slope in X or Y. This Z slope will be large for nearly edge-on primitives, and almost non-existent for face-on primitives. The factor parameter lets you add this type of variable difference into the resulting depth offset.<br />
<br />
A typical use might be to set factor and units to 1.0 to offset primitives into positive Z (into the screen) and enable polygon offset for fill mode. Two passes are then made, once with the model's solid geometry and once again with the line geometry. Nearly edge-on filled polygons are pushed substantially away from the eyepoint, to minimize interference with the line geometry, while nearly planar polygons are drawn at least one depth buffer unit behind the line geometry.<br />
<br />
[[Category:Polygon Offset]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Basics_Of_Polygon_Offset&diff=1537Basics Of Polygon Offset2006-04-30T16:39:22Z<p>Marco: </p>
<hr />
<div>It's difficult to render coplanar primitives in OpenGL for two reasons:<br />
<br />
* Given two overlapping coplanar primitives with different vertices, floating point round-off errors from the two polygons can generate different depth values for overlapping pixels. With depth test enabled, some of the second polygon's pixels will pass the depth test, while some will fail.<br />
* For coplanar lines and polygons, vastly different depth values for common pixels can result. This is because depth values from polygon rasterization derive from the polygon's plane equation, while depth values from line rasterization derive from linear interpolation.<br />
<br />
Setting the depth function to GL_LEQUAL or GL_EQUAL won't resolve the problem. The visual result is referred to as stitching, bleeding, or Z fighting.<br />
<br />
Polygon offset was an extension to OpenGL 1.0, and is now incorporated into OpenGL 1.1. It allows an application to define a depth offset, which can apply to filled primitives, and under OpenGL 1.1, it can be separately enabled or disabled depending on whether the primitives are rendered in fill, line, or point mode. Thus, an application can render coplanar primitives by first rendering one primitive, then by applying an offset and rendering the second primitive.<br />
<br />
While polygon offset can alter the depth value of filled primitives in point and line mode, under no circumstances will polygon offset affect the depth values of GL_POINTS, GL_LINES, GL_LINE_STRIP, or GL_LINE_LOOP primitives. If you are trying to render point or line primitives over filled primitives, use polygon offset to push the filled primitives back. (It can't be used to pull the point and line primitives forward.)<br />
<br />
Because polygon offset alters the correct Z value calculated during rasterization, the resulting Z value, which is stored in the depth buffer, will contain this offset and can adversely affect the resulting image. In many circumstances, undesirable "bleed-through" effects can result. Indeed, polygon offset may cause some primitives to pass the depth test entirely when they normally would not, or vice versa. When models intersect, polygon offset can cause an inaccurate rendering of the intersection point.<br />
<br />
[[Category:Polygon Offset]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=FAQ/Color&diff=1536FAQ/Color2006-04-30T16:07:44Z<p>Marco: change layout</p>
<hr />
<div>; [[Reverse Color in Textur]] : My texture map colors reverse blue and red, yellow and cyan, etc. What's happening?<br />
; [[Render Color index in RGB window]] : How do I render a color index into an RGB window or vice versa? <br />
; [[Missing Colors]] : The colors are almost entirely missing when I render in Microsoft Windows. What's happening? <br />
; [[Specify An Exact Color]] : How do I specify an exact color for a primitive?<br />
; [[Unique Color For Every Primitive]] : How do I render each primitive in a unique color? <br />
<br />
[[Category:General OpenGL]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Unique_Color_For_Every_Primitive&diff=1535Unique Color For Every Primitive2006-04-30T16:07:14Z<p>Marco: </p>
<hr />
<div>You need to know the depth of each component in your color buffer. The previous question contains the code to obtain these values. The depth tells you the number of unique color values you can render. For example, if you use the code from the previous question, which retrieves the color depth in redBits, greenBits, and blueBits, the number of unique colors available is 2^(redBits+greenBits+blueBits).<br />
<br />
If this number is greater than the number of primitives you want to render, there is no problem. You need to use glColor3ui() (or glColor3us(), etc) to specify each color, and store the desired color in the most significant bits of each parameter. You can code a loop to render each primitive in a unique color with the following:<br />
<br />
<pre> /*<br />
Given: numPrims is the number of primitives to render.<br />
Given: void renderPrimitive(unsigned long) renders the primitive with the given index.<br />
Given: GLuint makeMask(GLint) returns a bit mask with the given number of bits set.<br />
*/<br />
<br />
GLuint redMask = makeMask(redBits) << (greenBits + blueBits);<br />
GLuint greenMask = makeMask(greenBits) << blueBits;<br />
GLuint blueMask = makeMask(blueBits);<br />
int redShift = 32 - (redBits+greenBits+blueBits);<br />
int greenShift = 32 - (greenBits+blueBits);<br />
int blueShift = 32 - blueBits;<br />
unsigned long indx;<br />
<br />
/* Note: the parentheses are required; << binds tighter than & in C. */<br />
for (indx=0; indx<numPrims; indx++) {<br />
glColor3ui ((indx & redMask) << redShift,<br />
(indx & greenMask) << greenShift,<br />
(indx & blueMask) << blueShift);<br />
renderPrimitive (indx);<br />
}</pre><br />
<br />
Also, make sure you disable any state that could alter the final color. See the question above for a code snippet to accomplish this.<br />
<br />
If you're using this for picking instead of the usual Selection feature, any color subsequently read back from the color buffer can easily be converted to the index value of the primitive rendered in that color.<br />
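For the picking case, the read-back color has to be converted back to a primitive index. Below is a minimal plain-C sketch of that conversion (assumptions: makeMask() is the hypothetical helper named in the snippet above, the index was packed red-high as intended, and the color was read back with glReadPixels() using GL_RGB and GL_UNSIGNED_BYTE, so each channel's significant bits occupy the top of a byte):<br />

```c
#include <assert.h>

/* Hypothetical helper assumed by the FAQ snippet: returns a mask with
   the lowest `bits` bits set, e.g. makeMask(5) == 0x1F. */
static unsigned int makeMask(int bits)
{
    return (bits >= 32) ? 0xFFFFFFFFu : ((1u << bits) - 1u);
}

/* Recover the primitive index from an 8-bit-per-channel read-back.
   Each channel keeps only its top redBits/greenBits/blueBits bits,
   which are then reassembled in the red-high order used for packing. */
static unsigned long indexFromColor(const unsigned char rgb[3],
                                    int redBits, int greenBits, int blueBits)
{
    unsigned long r = rgb[0] >> (8 - redBits);
    unsigned long g = rgb[1] >> (8 - greenBits);
    unsigned long b = rgb[2] >> (8 - blueBits);
    return (r << (greenBits + blueBits)) | (g << blueBits) | b;
}
```

This only holds as long as the state that could alter the final color is disabled, as described above.<br />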
<br />
[[Category:Color]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Specify_An_Exact_Color&diff=1534Specify An Exact Color2006-04-30T16:05:22Z<p>Marco: </p>
<hr />
<div>First, you'll need to know the depth of the color buffer you are rendering to. For an RGB color buffer, you can obtain these values with the following code:<br />
<br />
<pre> GLint redBits, greenBits, blueBits;<br />
<br />
glGetIntegerv (GL_RED_BITS, &redBits);<br />
glGetIntegerv (GL_GREEN_BITS, &greenBits);<br />
glGetIntegerv (GL_BLUE_BITS, &blueBits);</pre><br />
<br />
If the depth value for each component is at least as large as your required color precision, you can specify an exact color for your primitives. Store the color you want in the most significant bits of three unsigned integers and pass them to glColor3ui().<br />
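If your colors start out as 8 bits per channel (an assumption; other widths need a different replication), one way to fill the most significant bits that glColor3ui() expects is to replicate the byte across the 32-bit word, which maps 0x00 to zero and 0xFF exactly to full intensity:<br />

```c
#include <assert.h>

/* Expand an 8-bit channel value into a 32-bit glColor3ui() parameter
   by replicating the byte into every byte position; this fills the
   most significant bits and maps 0xFF exactly to 0xFFFFFFFF. */
static unsigned int channelToColor3ui(unsigned char c8)
{
    unsigned int c = c8;
    return (c << 24) | (c << 16) | (c << 8) | c;
}
```

You would then call glColor3ui(channelToColor3ui(r), channelToColor3ui(g), channelToColor3ui(b)).<br />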
<br />
If your color buffer isn't deep enough to accurately represent the color you desire, you'll need a fallback strategy. Trimming off the least significant bits of each color component is an acceptable alternative. Again, use glColor3ui() (or glColor3us(), etc.) to specify the color with your values stored in the most significant bits of each parameter.<br />
<br />
In either event, you'll need to ensure that any state that could affect the final color has been disabled. The following code will accomplish this:<br />
<br />
<pre> glDisable (GL_BLEND);<br />
glDisable (GL_DITHER);<br />
glDisable (GL_FOG);<br />
glDisable (GL_LIGHTING);<br />
glDisable (GL_TEXTURE_1D);<br />
glDisable (GL_TEXTURE_2D);<br />
glDisable (GL_TEXTURE_3D);<br />
glShadeModel (GL_FLAT);</pre><br />
<br />
[[Category:Color]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Missing_Colors&diff=1533Missing Colors2006-04-30T16:03:39Z<p>Marco: </p>
<hr />
<div>The most probable cause is that the Windows display is set to 256 colors. To fix this, increase the color depth: right-click the desktop, select Properties, choose the Settings tab, and set the Color Palette to a higher number of colors.<br />
<br />
[[Category:Color]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Category:Color&diff=1532Category:Color2006-04-30T16:01:49Z<p>Marco: </p>
<hr />
<div>This is a subcategory of [[:Category:General OpenGL]].</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Render_Color_index_in_RGB_window&diff=1531Render Color index in RGB window2006-04-30T15:59:53Z<p>Marco: </p>
<hr />
<div>There isn't a way to do this. However, you might consider opening an RGB window with a color index overlay plane, if it works in your application.<br />
<br />
If you have an array of color indices that you want to use as a texture map, you might want to consider using GL_EXT_paletted_texture, which lets an application specify a color index texture map with a color palette.<br />
<br />
[[Category:Color]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Reverse_Color_in_Texture&diff=1530Reverse Color in Texture2006-04-30T15:57:47Z<p>Marco: </p>
<hr />
<div>Your texture image has the reverse byte ordering of what OpenGL is expecting. One way to handle this is to swap bytes within your code before passing the data to OpenGL.<br />
<br />
Under OpenGL 1.2, you may specify GL_BGR or GL_BGRA as the "format" parameter to glDrawPixels(), glGetTexImage(), glReadPixels(), glTexImage1D(), glTexImage2D(), and glTexImage3D(). In previous versions of OpenGL, this functionality might be available in the form of the EXT_bgra extension (using GL_BGR_EXT and GL_BGRA_EXT as the "format" parameter).<br />
<br />
[[Category:Color]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=FAQ/Color&diff=1529FAQ/Color2006-04-30T15:51:05Z<p>Marco: add Category</p>
<hr />
<div>===== My texture map colors reverse blue and red, yellow and cyan, etc. What's happening? =====<br />
<br />
Your texture image has the reverse byte ordering of what OpenGL is expecting. One way to handle this is to swap bytes within your code before passing the data to OpenGL.<br />
<br />
Under OpenGL 1.2, you may specify GL_BGR or GL_BGRA as the "format" parameter to glDrawPixels(), glGetTexImage(), glReadPixels(), glTexImage1D(), glTexImage2D(), and glTexImage3D(). In previous versions of OpenGL, this functionality might be available in the form of the EXT_bgra extension (using GL_BGR_EXT and GL_BGRA_EXT as the "format" parameter).<br />
<br />
===== How do I render a color index into an RGB window or vice versa? =====<br />
<br />
There isn't a way to do this. However, you might consider opening an RGB window with a color index overlay plane, if it works in your application.<br />
<br />
If you have an array of color indices that you want to use as a texture map, you might want to consider using GL_EXT_paletted_texture, which lets an application specify a color index texture map with a color palette.<br />
<br />
===== The colors are almost entirely missing when I render in Microsoft Windows. What's happening? =====<br />
<br />
The most probable cause is that the Windows display is set to 256 colors. To fix this, increase the color depth: right-click the desktop, select Properties, choose the Settings tab, and set the Color Palette to a higher number of colors.<br />
<br />
===== How do I specify an exact color for a primitive? =====<br />
<br />
First, you'll need to know the depth of the color buffer you are rendering to. For an RGB color buffer, you can obtain these values with the following code:<br />
<br />
<pre> GLint redBits, greenBits, blueBits;<br />
<br />
glGetIntegerv (GL_RED_BITS, &redBits);<br />
glGetIntegerv (GL_GREEN_BITS, &greenBits);<br />
glGetIntegerv (GL_BLUE_BITS, &blueBits);</pre><br />
<br />
If the depth value for each component is at least as large as your required color precision, you can specify an exact color for your primitives. Store the color you want in the most significant bits of three unsigned integers and pass them to glColor3ui().<br />
<br />
If your color buffer isn't deep enough to accurately represent the color you desire, you'll need a fallback strategy. Trimming off the least significant bits of each color component is an acceptable alternative. Again, use glColor3ui() (or glColor3us(), etc.) to specify the color with your values stored in the most significant bits of each parameter.<br />
<br />
In either event, you'll need to ensure that any state that could affect the final color has been disabled. The following code will accomplish this:<br />
<br />
<pre> glDisable (GL_BLEND);<br />
glDisable (GL_DITHER);<br />
glDisable (GL_FOG);<br />
glDisable (GL_LIGHTING);<br />
glDisable (GL_TEXTURE_1D);<br />
glDisable (GL_TEXTURE_2D);<br />
glDisable (GL_TEXTURE_3D);<br />
glShadeModel (GL_FLAT);</pre><br />
<br />
===== How do I render each primitive in a unique color? =====<br />
<br />
You need to know the depth of each component in your color buffer. The previous question contains the code to obtain these values. The depth tells you the number of unique color values you can render. For example, if you use the code from the previous question, which retrieves the color depth in redBits, greenBits, and blueBits, the number of unique colors available is 2^(redBits+greenBits+blueBits).<br />
<br />
If this number is at least as large as the number of primitives you want to render, there is no problem. Use glColor3ui() (or glColor3us(), etc.) to specify each color, storing the desired color in the most significant bits of each parameter. You can code a loop to render each primitive in a unique color with the following:<br />
<br />
<pre> /*<br />
Given: numPrims is the number of primitives to render.<br />
Given: void renderPrimitive(unsigned long) renders the primitive with the given index.<br />
Given: GLuint makeMask(GLint) returns a bit mask with the given number of bits set.<br />
*/<br />
<br />
GLuint redMask = makeMask(redBits) << (greenBits + blueBits);<br />
GLuint greenMask = makeMask(greenBits) << blueBits;<br />
GLuint blueMask = makeMask(blueBits);<br />
int redShift = 32 - (redBits+greenBits+blueBits);<br />
int greenShift = 32 - (greenBits+blueBits);<br />
int blueShift = 32 - blueBits;<br />
unsigned long indx;<br />
<br />
/* Note: the parentheses are required; << binds tighter than & in C. */<br />
for (indx=0; indx<numPrims; indx++) {<br />
glColor3ui ((indx & redMask) << redShift,<br />
(indx & greenMask) << greenShift,<br />
(indx & blueMask) << blueShift);<br />
renderPrimitive (indx);<br />
}</pre><br />
<br />
Also, make sure you disable any state that could alter the final color. See the question above for a code snippet to accomplish this.<br />
<br />
If you're using this for picking instead of the usual Selection feature, any color subsequently read back from the color buffer can easily be converted to the index value of the primitive rendered in that color.<br />
<br />
[[Category:General OpenGL]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=FAQ/Depth_Buffer&diff=1528FAQ/Depth Buffer2006-04-30T15:49:39Z<p>Marco: add Category</p>
<hr />
<div>===== How do I make depth buffering work? =====<br />
<br />
Your application needs to do at least the following to get depth buffering to work:<br />
<br />
# Ask for a depth buffer when you create your window.<br />
# Place a call to glEnable (GL_DEPTH_TEST) in your program's initialization routine, after a context is created and made current.<br />
# Ensure that your zNear and zFar clipping planes are set correctly and in a way that provides adequate depth buffer precision.<br />
# Pass GL_DEPTH_BUFFER_BIT as a parameter to glClear, typically bitwise OR'd with other values such as GL_COLOR_BUFFER_BIT. <br />
<br />
There are a number of OpenGL example programs available on the Web that use depth buffering. If you're having trouble getting depth buffering to work correctly, you might benefit from looking at an example program to see what is done differently. This FAQ contains [http://www.opengl.org/resources/faq/technical/gettingstarted.htm#gett0002 links to several web sites that have example OpenGL code].<br />
<br />
===== Depth buffering doesn't work in my perspective rendering. What's going on? =====<br />
<br />
Make sure the zNear and zFar clipping planes are specified correctly in your calls to glFrustum() or gluPerspective().<br />
<br />
A mistake many programmers make is to specify a zNear clipping plane value of 0.0 or a negative value, which isn't allowed. Both the zNear and zFar clipping planes must be positive (not zero or negative) values that represent distances in front of the eye.<br />
<br />
Specifying a zNear clipping plane value of 0.0 to gluPerspective() won't generate an OpenGL error, but it might cause depth buffering to act as if it's disabled. A negative zNear or zFar clipping plane value would produce undesirable results.<br />
<br />
A zNear or zFar clipping plane value of zero or negative, when passed to glFrustum(), will produce an error that you can retrieve by calling glGetError(). The function will then act as a no-op.<br />
<br />
===== How do I write a previously stored depth image to the depth buffer? =====<br />
<br />
Use the glDrawPixels() command, with the format parameter set to GL_DEPTH_COMPONENT. You may want to mask off the color buffer when you do this, with a call to glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE).<br />
<br />
===== Depth buffering seems to work, but polygons seem to bleed through polygons that are in front of them. What's going on? =====<br />
<br />
You may have configured your zNear and zFar clipping planes in a way that severely limits your depth buffer precision. Generally, this is caused by a zNear clipping plane value that's too close to 0.0. As the zNear clipping plane is set increasingly closer to 0.0, the effective precision of the depth buffer decreases dramatically. Moving the zFar clipping plane further away from the eye always has a negative impact on depth buffer precision, but it's not one as dramatic as moving the zNear clipping plane.<br />
<br />
The OpenGL Reference Manual description for glFrustum() relates depth precision to the zNear and zFar clipping planes by saying that roughly log2(zFar/zNear) bits of precision are lost. Clearly, as zNear approaches zero, this equation approaches infinity.<br />
<br />
While the blue book description is good at pointing out the relationship, it's somewhat inaccurate. As the ratio (zFar/zNear) increases, less precision is available near the back of the depth buffer and more precision is available close to the front of the depth buffer. So primitives are more likely to interact in Z if they are further from the viewer.<br />
<br />
It's possible that you simply don't have enough precision in your depth buffer to render your scene. See the last question in this section for more info.<br />
<br />
It's also possible that you are drawing coplanar primitives. Round-off errors or differences in rasterization typically create "Z fighting" for coplanar primitives. Here are some [http://www.opengl.org/resources/faq/technical/polygonoffset.htm options to assist you when rendering coplanar primitives].<br />
<br />
===== Why is my depth buffer precision so poor? =====<br />
<br />
The depth buffer precision in eye coordinates is strongly affected by the ratio of zFar to zNear, the zFar clipping plane, and how far an object is from the zNear clipping plane.<br />
<br />
You need to do whatever you can to push the zNear clipping plane out and pull the zFar plane in as much as possible.<br />
<br />
To be more specific, consider the transformation of depth from eye coordinates<br />
<br />
x<sub>e</sub>, y<sub>e</sub>, z<sub>e</sub>, w<sub>e</sub><br />
<br />
to window coordinates<br />
<br />
x<sub>w</sub>, y<sub>w</sub>, z<sub>w</sub><br />
<br />
with a perspective projection matrix specified by<br />
<br />
glFrustum(l, r, b, t, n, f);<br />
<br />
and assume the default viewport transform. The clip coordinates z<sub>c</sub> and w<sub>c</sub> are<br />
<br />
z<sub>c</sub> = -z<sub>e</sub>* (f+n)/(f-n) - w<sub>e</sub>* 2*f*n/(f-n)<br />
w<sub>c</sub> = -z<sub>e</sub><br />
<br />
(Why the negations? OpenGL presents a right-handed coordinate system to the programmer before projection and a left-handed one after projection.)<br />
<br />
The NDC depth coordinate is then:<br />
<br />
z<sub>ndc</sub> =&nbsp;z<sub>c</sub> / w<sub>c</sub> = [ -z<sub>e</sub> * (f+n)/(f-n) - w<sub>e</sub> * 2*f*n/(f-n) ] / -z<sub>e</sub><br />
= (f+n)/(f-n) + (w<sub>e</sub> / z<sub>e</sub>) * 2*f*n/(f-n)<br />
<br />
The viewport transformation scales and offsets by the depth range (assume it to be [0, 1]) and then scales by s = 2<sup>n</sup> - 1, where n is the bit depth of the depth buffer:<br />
<br />
z<sub>w</sub> = s * [ (w<sub>e</sub> / z<sub>e</sub>) * f*n/(f-n) + 0.5 * (f+n)/(f-n) + 0.5 ]<br />
<br />
Let's rearrange this equation to express z<sub>e</sub> / w<sub>e</sub> as a function of z<sub>w</sub><br />
<br />
z<sub>e</sub> / w<sub>e</sub> = f*n/(f-n) / ((z<sub>w</sub> / s) - 0.5 * (f+n)/(f-n) - 0.5)<br />
= f * n / ((z<sub>w</sub> / s) * (f-n) - 0.5 * (f+n) - 0.5 * (f-n))<br />
= f * n / ((z<sub>w</sub> / s) * (f-n) - f) [*]<br />
<br />
Now let's look at two points, the zNear clipping plane and the zFar clipping plane:<br />
<br />
z<sub>w</sub> = 0&nbsp;=&gt; z<sub>e</sub> / w<sub>e</sub> = f * n / (-f) = -n<br />
z<sub>w</sub> = s =&gt; z<sub>e</sub> / w<sub>e</sub> = f * n / ((f-n) - f) = -f<br />
<br />
In a fixed-point depth buffer, z<sub>w</sub> is quantized to integers. The next representable depth buffer values in from the clip planes are 1 and s-1:<br />
<br />
z<sub>w</sub> = 1 =&gt; z<sub>e</sub> / w<sub>e</sub> = f * n / ((1/s) * (f-n) - f)<br />
z<sub>w</sub> = s-1 =&gt; z<sub>e</sub> / w<sub>e</sub> = f * n / (((s-1)/s) * (f-n) - f)<br />
<br />
Now let's plug in some numbers, for example, n = 0.01, f = 1000 and s = 65535 (i.e., a 16-bit depth buffer)<br />
<br />
z<sub>w</sub> = 1 =&gt; z<sub>e</sub> / w<sub>e</sub> = -0.01000015<br />
z<sub>w</sub> = s-1 =&gt; z<sub>e</sub> / w<sub>e</sub> = -395.90054<br />
<br />
Think about this last line. Everything at eye coordinate depths from -395.9 to -1000 has to map into either 65534 or 65535 in the z buffer. Almost two thirds of the distance between the zNear and zFar clipping planes will have one of two z-buffer values!<br />
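Equation [*] is easy to check numerically. A plain-C sketch using the same n, f, and s as above:<br />

```c
#include <assert.h>
#include <math.h>

/* Eye-space depth recovered from a fixed-point window depth zw,
   per equation [*]: ze/we = f*n / ((zw/s)*(f-n) - f). */
static double eyeDepth(double zw, double n, double f, double s)
{
    return f * n / ((zw / s) * (f - n) - f);
}
```

With n = 0.01, f = 1000, and s = 65535 this reproduces the endpoints -n and -f, and a value near -395.9 at z<sub>w</sub> = s-1.<br />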
<br />
To further analyze the z-buffer resolution, let's take the derivative of [*] with respect to z<sub>w</sub><br />
<br />
d (z<sub>e</sub> / w<sub>e</sub>) / d z<sub>w</sub> = - f * n * (f-n) * (1/s) / ((z<sub>w</sub> / s) * (f-n) - f)<sup>2</sup><br />
<br />
Now evaluate it at z<sub>w</sub> = s<br />
<br />
d (z<sub>e</sub> / w<sub>e</sub>) / d z<sub>w</sub> = - f * (f-n) * (1/s) / n<br />
= - f * (f/n-1) / s [**]<br />
<br />
If you want your depth buffer to be useful near the zFar clipping plane, you need to keep this value smaller than the size of your objects in eye space (for most practical uses, world space).<br />
<br />
===== How do I turn off the zNear clipping plane? =====<br />
<br />
See [http://www.opengl.org/resources/faq/technical/clipping.htm#0050 this question] in the Clipping section.<br />
<br />
===== Why is there more precision at the front of the depth buffer? =====<br />
<br />
After the projection matrix transforms eye coordinates into clip coordinates, the XYZ vertex values are divided by their clip-coordinate W value, which results in normalized device coordinates. This step is known as the perspective divide. The clip-coordinate W value represents the distance from the eye. As the distance from the eye increases, 1/W approaches 0. Therefore, X/W and Y/W also approach zero, causing the rendered primitives to occupy less screen space and appear smaller. This is how computers simulate a perspective view.<br />
<br />
As in reality, motion toward or away from the eye has a less profound effect on objects that are already in the distance. For example, if you move six inches closer to the computer screen in front of your face, its apparent size should increase quite dramatically. On the other hand, if the computer screen were already 20 feet away from you, moving six inches closer would have little noticeable impact on its apparent size. The perspective divide takes this into account.<br />
<br />
As part of the perspective divide, Z is also divided by W with the same results. For objects that are already close to the back of the view volume, a change in distance of one coordinate unit has less impact on Z/W than if the object is near the front of the view volume. To put it another way, an object coordinate Z unit occupies a larger slice of NDC-depth space close to the front of the view volume than it does near the back of the view volume.<br />
<br />
In summary, the perspective divide, by its nature, causes more Z precision close to the front of the view volume than near the back.<br />
<br />
A previous question in this section contains related information.<br />
<br />
===== There is no way that a standard-sized depth buffer will have enough precision for my astronomically large scene. What are my options? =====<br />
<br />
The typical approach is to use a multipass technique. The application might divide the geometry database into regions that don't interfere with each other in Z. The geometry in each region is then rendered, starting at the furthest region, with a clear of the depth buffer before each region is rendered. This way the precision of the entire depth buffer is made available to each region.<br />
<br />
[[Category:General OpenGL]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=General_OpenGL&diff=1527General OpenGL2006-04-30T15:48:34Z<p>Marco: </p>
<hr />
<div>This section explains the basics of the OpenGL API and answers some of the most frequently asked questions about it.<br />
<br />
; [[Viewing and Transformations]] : Answers about Transformations.<br />
; [[Clipping, Culling, and Visibility Testing]]<br />
; [[Color]]<br />
; [[Depth Buffer]]<br />
; [[Texture Mapping]]<br />
; [[Drawing Lines over Polygons]] : Using polygon offset.<br />
; [[Rasterization and Operations on the Framebuffer]]<br />
; [[Transparency and Translucency]]<br />
; [[Display Lists and Vertex Arrays]]<br />
; [[Fonts]]<br />
; [[Lights and Shadows]]<br />
; [[Curves and Surfaces]]<br />
; [[Selection mechanism]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Drawing_Lines_over_Polygons&diff=1526Drawing Lines over Polygons2006-04-30T15:48:07Z<p>Marco: </p>
<hr />
<div>===== What are the basics for using polygon offset? =====<br />
<br />
It's difficult to render coplanar primitives in OpenGL for two reasons:<br />
<br />
* Given two overlapping coplanar primitives with different vertices, floating point round-off errors from the two polygons can generate different depth values for overlapping pixels. With depth test enabled, some of the second polygon's pixels will pass the depth test, while some will fail.<br />
* For coplanar lines and polygons, vastly different depth values for common pixels can result. This is because depth values from polygon rasterization derive from the polygon's plane equation, while depth values from line rasterization derive from linear interpolation.<br />
<br />
Setting the depth function to GL_LEQUAL or GL_EQUAL won't resolve the problem. The visual result is referred to as stitching, bleeding, or Z fighting.<br />
<br />
Polygon offset was an extension to OpenGL 1.0, and is now incorporated into OpenGL 1.1. It allows an application to define a depth offset, which can apply to filled primitives, and under OpenGL 1.1, it can be separately enabled or disabled depending on whether the primitives are rendered in fill, line, or point mode. Thus, an application can render coplanar primitives by first rendering one primitive, then by applying an offset and rendering the second primitive.<br />
<br />
While polygon offset can alter the depth value of filled primitives in point and line mode, under no circumstances will polygon offset affect the depth values of GL_POINTS, GL_LINES, GL_LINE_STRIP, or GL_LINE_LOOP primitives. If you are trying to render point or line primitives over filled primitives, use polygon offset to push the filled primitives back. (It can't be used to pull the point and line primitives forward.)<br />
<br />
Because polygon offset alters the correct Z value calculated during rasterization, the resulting Z value, which is stored in the depth buffer, will contain this offset and can adversely affect the resulting image. In many circumstances, undesirable "bleed-through" effects can result. Indeed, polygon offset may cause some primitives to pass the depth test when they normally would not, or vice versa. When models intersect, polygon offset can cause an inaccurate rendering of the intersection point.<br />
<br />
===== What are the two parameters in a glPolygonOffset() call and what do they mean? =====<br />
<br />
Polygon offset allows the application to specify a depth offset with two parameters, factor and units. factor scales the maximum Z slope, with respect to X or Y of the polygon, and units scales the minimum resolvable depth buffer value. The results are summed to produce the depth offset. This offset is applied in screen space, typically with positive Z pointing into the screen.<br />
<br />
The factor parameter is required to ensure correct results for filled primitives that are nearly edge-on to the viewer. In this case, the difference between Z values for the same pixel generated by two coplanar primitives can be as great as the maximum Z slope in X or Y. This Z slope will be large for nearly edge-on primitives, and almost non-existent for face-on primitives. The factor parameter lets you add this type of variable difference into the resulting depth offset.<br />
<br />
A typical use might be to set factor and units to 1.0 to offset primitives into positive Z (into the screen) and enable polygon offset for fill mode. Two passes are then made, once with the model's solid geometry and once again with the line geometry. Nearly edge-on filled polygons are pushed substantially away from the eyepoint, to minimize interference with the line geometry, while nearly planar polygons are drawn at least one depth buffer unit behind the line geometry.<br />
<br />
===== What's the difference between the OpenGL 1.0 polygon offset extension and OpenGL 1.1 (and later) polygon offset interfaces? =====<br />
<br />
The 1.0 polygon offset extension didn't let you apply the offset to filled primitives in line or point mode. Only filled primitives in fill mode could be offset.<br />
<br />
In the 1.0 extension, a bias parameter was added to the normalized (0.0 - 1.0) depth value, in place of the 1.1 units parameter. Typical applications might obtain a good offset by specifying a bias of 0.001.<br />
<br />
See the [http://www.opengl.org/resources/faq/technical/pgonoff.c GLUT example], which renders two cylinders, one using the 1.0 polygon offset extension and the other using the 1.1 polygon offset interface.<br />
<br />
===== Why doesn't polygon offset work when I draw line primitives over filled primitives? =====<br />
<br />
Polygon offset, as its name implies, only works with polygonal primitives. It affects only the filled primitives: GL_TRIANGLES, GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN, GL_QUADS, GL_QUAD_STRIP, and GL_POLYGON. Polygon offset will work when you render them with glPolygonMode set to GL_FILL, GL_LINE, or GL_POINT.<br />
<br />
Polygon offset doesn't affect non-polygonal primitives. The GL_POINTS, GL_LINES, GL_LINE_STRIP, and GL_LINE_LOOP primitives can't be offset with glPolygonOffset().<br />
<br />
===== What other options do I have for drawing coplanar primitives when I don't want to use polygon offset? =====<br />
<br />
You can simulate the effects of polygon offset by tinkering with glDepthRange(). For example, you might code the following:<br />
<br />
<pre> glDepthRange (0.1, 1.0);<br />
/* Draw underlying geometry */<br />
glDepthRange (0.0, 0.9);<br />
/* Draw overlying geometry */</pre><br />
<br />
This code provides a fixed offset in Z, but doesn't account for the polygon slope. It's roughly equivalent to using glPolygonOffset with a factor parameter of 0.0.<br />
<br />
You can render coplanar primitives with the Stencil buffer in many creative ways. The OpenGL Programming Guide outlines one well-known method. The algorithm for drawing a polygon and its outline is as follows:<br />
<br />
# Draw the outline into the color, depth, and stencil buffers.<br />
# Draw the filled primitive into the color buffer and depth buffer, but only where the stencil buffer is clear.<br />
# Mask off the color and depth buffers, and render the outline to clear the stencil buffer.<br />
<br />
On some SGI OpenGL platforms, an application can use the SGIX_reference_plane extension. With this extension, the user specifies a plane equation in object coordinates corresponding to a set of coplanar primitives. You can enable or disable the plane. When the plane is enabled, all fragment Z values will derive from the specified plane equation. Thus, for any given fragment XY location, the depth value is guaranteed to be identical regardless of which primitive rendered it.<br />
<br />
[[Category:General OpenGL]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Drawing_Lines_over_Polygons&diff=1524Drawing Lines over Polygons2006-04-30T15:47:18Z<p>Marco: Drawing Lines over Polygons and Using Polygon Offset moved to Drawing Lines over Polygons</p>
<hr />
<div>===== What are the basics for using polygon offset? =====<br />
<br />
It's difficult to render coplanar primitives in OpenGL for two reasons:<br />
<br />
* Given two overlapping coplanar primitives with different vertices, floating point round-off errors from the two polygons can generate different depth values for overlapping pixels. With depth test enabled, some of the second polygon's pixels will pass the depth test, while some will fail.<br />
* For coplanar lines and polygons, vastly different depth values for common pixels can result. This is because depth values from polygon rasterization derive from the polygon's plane equation, while depth values from line rasterization derive from linear interpolation.<br />
<br />
Setting the depth function to GL_LEQUAL or GL_EQUAL won't resolve the problem. The visual result is referred to as stitching, bleeding, or Z fighting.<br />
<br />
Polygon offset was an extension to OpenGL 1.0, and is now incorporated into OpenGL 1.1. It allows an application to define a depth offset, which can apply to filled primitives, and under OpenGL 1.1, it can be separately enabled or disabled depending on whether the primitives are rendered in fill, line, or point mode. Thus, an application can render coplanar primitives by first rendering one primitive, then by applying an offset and rendering the second primitive.<br />
<br />
While polygon offset can alter the depth value of filled primitives in point and line mode, under no circumstances will polygon offset affect the depth values of GL_POINTS, GL_LINES, GL_LINE_STRIP, or GL_LINE_LOOP primitives. If you are trying to render point or line primitives over filled primitives, use polygon offset to push the filled primitives back. (It can't be used to pull the point and line primitives forward.)<br />
<br />
Because polygon offset alters the correct Z value calculated during rasterization, the resulting Z value, which is stored in the depth buffer, will contain this offset and can adversely affect the resulting image. In many circumstances, undesirable "bleed-through" effects can result. Indeed, polygon offset may cause some primitives to pass the depth test when they normally would not, or vice versa. When models intersect, polygon offset can cause an inaccurate rendering of the intersection point.<br />
<br />
===== What are the two parameters in a glPolygonOffset() call and what do they mean? =====<br />
<br />
Polygon offset allows the application to specify a depth offset with two parameters, factor and units. factor scales the maximum Z slope, with respect to X or Y of the polygon, and units scales the minimum resolvable depth buffer value. The results are summed to produce the depth offset. This offset is applied in screen space, typically with positive Z pointing into the screen.<br />
<br />
The factor parameter is required to ensure correct results for filled primitives that are nearly edge-on to the viewer. In this case, the difference between Z values for the same pixel generated by two coplanar primitives can be as great as the maximum Z slope in X or Y. This Z slope will be large for nearly edge-on primitives, and almost non-existent for face-on primitives. The factor parameter lets you add this type of variable difference into the resulting depth offset.<br />
<br />
A typical use might be to set factor and units to 1.0 to offset primitives into positive Z (into the screen) and enable polygon offset for fill mode. Two passes are then made, once with the model's solid geometry and once again with the line geometry. Nearly edge-on filled polygons are pushed substantially away from the eyepoint, to minimize interference with the line geometry, while nearly planar polygons are drawn at least one depth buffer unit behind the line geometry.<br />
<br />
===== What's the difference between the OpenGL 1.0 polygon offset extension and OpenGL 1.1 (and later) polygon offset interfaces? =====<br />
<br />
The 1.0 polygon offset extension didn't let you apply the offset to filled primitives in line or point mode. Only filled primitives in fill mode could be offset.<br />
<br />
In the 1.0 extension, a bias parameter was added to the normalized (0.0 - 1.0) depth value, in place of the 1.1 units parameter. Typical applications might obtain a good offset by specifying a bias of 0.001.<br />
<br />
See the [http://www.opengl.org/resources/faq/technical/pgonoff.c GLUT example], which renders two cylinders, one using the 1.0 polygon offset extension and the other using the 1.1 polygon offset interface.<br />
<br />
===== Why doesn't polygon offset work when I draw line primitives over filled primitives? =====<br />
<br />
Polygon offset, as its name implies, only works with polygonal primitives. It affects only the filled primitives: GL_TRIANGLES, GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN, GL_QUADS, GL_QUAD_STRIP, and GL_POLYGON. Polygon offset will work when you render them with glPolygonMode set to GL_FILL, GL_LINE, or GL_POINT.<br />
<br />
Polygon offset doesn't affect non-polygonal primitives. The GL_POINTS, GL_LINES, GL_LINE_STRIP, and GL_LINE_LOOP primitives can't be offset with glPolygonOffset().<br />
<br />
===== What other options do I have for drawing coplanar primitives when I don't want to use polygon offset? =====<br />
<br />
You can simulate the effects of polygon offset by tinkering with glDepthRange(). For example, you might code the following:<br />
<br />
<pre> glDepthRange (0.1, 1.0);<br />
/* Draw underlying geometry */<br />
glDepthRange (0.0, 0.9);<br />
/* Draw overlying geometry */</pre><br />
<br />
This code provides a fixed offset in Z, but doesn't account for the polygon slope. It's roughly equivalent to using glPolygonOffset with a factor parameter of 0.0.<br />
<br />
You can render coplanar primitives with the stencil buffer in many creative ways. The OpenGL Programming Guide outlines one well-known method. The algorithm for drawing a polygon and its outline is as follows:<br />
<br />
# Draw the outline into the color, depth, and stencil buffers.<br />
# Draw the filled primitive into the color buffer and depth buffer, but only where the stencil buffer is clear.<br />
# Mask off the color and depth buffers, and render the outline to clear the stencil buffer.<br />
<br />
On some SGI OpenGL platforms, an application can use the SGIX_reference_plane extension. With this extension, the user specifies a plane equation in object coordinates corresponding to a set of coplanar primitives. You can enable or disable the plane. When the plane is enabled, all fragment Z values will derive from the specified plane equation. Thus, for any given fragment XY location, the depth value is guaranteed to be identical regardless of which primitive rendered it.</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Drawing_Lines_over_Polygons_and_Using_Polygon_Offset&diff=1525Drawing Lines over Polygons and Using Polygon Offset2006-04-30T15:47:18Z<p>Marco: Drawing Lines over Polygons and Using Polygon Offset moved to Drawing Lines over Polygons</p>
<hr />
<div>#redirect [[Drawing Lines over Polygons]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=FAQ/Depth_Buffer&diff=1522FAQ/Depth Buffer2006-04-30T15:46:39Z<p>Marco: The Depth Buffer moved to Depth Buffer</p>
<hr />
<div>===== How do I make depth buffering work? =====<br />
<br />
Your application needs to do at least the following to get depth buffering to work:<br />
<br />
# Ask for a depth buffer when you create your window.<br />
# Place a call to glEnable (GL_DEPTH_TEST) in your program's initialization routine, after a context is created and made current.<br />
# Ensure that your zNear and zFar clipping planes are set correctly and in a way that provides adequate depth buffer precision.<br />
# Pass GL_DEPTH_BUFFER_BIT as a parameter to glClear, typically bitwise OR'd with other values such as GL_COLOR_BUFFER_BIT. <br />
<br />
There are a number of OpenGL example programs available on the Web that use depth buffering. If you're having trouble getting depth buffering to work correctly, you might benefit from looking at an example program to see what is done differently. This FAQ contains [http://www.opengl.org/resources/faq/technical/gettingstarted.htm#gett0002 links to several web sites that have example OpenGL code].<br />
<br />
===== Depth buffering doesn't work in my perspective rendering. What's going on? =====<br />
<br />
Make sure the zNear and zFar clipping planes are specified correctly in your calls to glFrustum() or gluPerspective().<br />
<br />
A mistake many programmers make is to specify a zNear clipping plane value of 0.0 or a negative value, neither of which is allowed. Both the zNear and zFar clipping planes must be positive (not zero or negative) values that represent distances in front of the eye.<br />
<br />
Specifying a zNear clipping plane value of 0.0 to gluPerspective() won't generate an OpenGL error, but it might cause depth buffering to act as if it's disabled. A negative zNear or zFar clipping plane value would produce undesirable results.<br />
<br />
A zero or negative zNear or zFar clipping plane value, when passed to glFrustum(), generates a GL_INVALID_VALUE error that you can retrieve by calling glGetError(). The function then acts as a no-op.<br />
<br />
===== How do I write a previously stored depth image to the depth buffer? =====<br />
<br />
Use the glDrawPixels() command with the format parameter set to GL_DEPTH_COMPONENT. You may want to mask off the color buffer when you do this, with a call to glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE).<br />
<br />
===== Depth buffering seems to work, but polygons seem to bleed through polygons that are in front of them. What's going on? =====<br />
<br />
You may have configured your zNear and zFar clipping planes in a way that severely limits your depth buffer precision. Generally, this is caused by a zNear clipping plane value that's too close to 0.0. As the zNear clipping plane is set increasingly closer to 0.0, the effective precision of the depth buffer decreases dramatically. Moving the zFar clipping plane further away from the eye always has a negative impact on depth buffer precision, but the effect is not as dramatic as moving the zNear clipping plane closer.<br />
<br />
The OpenGL Reference Manual description for glFrustum() relates depth precision to the zNear and zFar clipping planes by saying that roughly log<sub>2</sub>(zFar/zNear) bits of precision are lost. Clearly, as zNear approaches zero, this expression approaches infinity.<br />
<br />
While the Reference Manual ("blue book") description is good at pointing out the relationship, it's somewhat inaccurate. As the ratio (zFar/zNear) increases, less precision is available near the back of the depth buffer and more precision is available close to the front of the depth buffer. So primitives are more likely to interact in Z if they are further from the viewer.<br />
<br />
It's possible that you simply don't have enough precision in your depth buffer to render your scene. See the last question in this section for more info.<br />
<br />
It's also possible that you are drawing coplanar primitives. Round-off errors or differences in rasterization typically create "Z fighting" for coplanar primitives. Here are some [http://www.opengl.org/resources/faq/technical/polygonoffset.htm options to assist you when rendering coplanar primitives].<br />
<br />
===== Why is my depth buffer precision so poor? =====<br />
<br />
The depth buffer precision in eye coordinates is strongly affected by the ratio of zFar to zNear, the zFar clipping plane, and how far an object is from the zNear clipping plane.<br />
<br />
You need to do whatever you can to push the zNear clipping plane out and pull the zFar plane in as much as possible.<br />
<br />
To be more specific, consider the transformation of depth from eye coordinates<br />
<br />
x<sub>e</sub>, y<sub>e</sub>, z<sub>e</sub>, w<sub>e</sub><br />
<br />
to window coordinates<br />
<br />
x<sub>w</sub>, y<sub>w</sub>, z<sub>w</sub><br />
<br />
with a perspective projection matrix specified by<br />
<br />
glFrustum(l, r, b, t, n, f);<br />
<br />
and assume the default viewport transform. The clip coordinates z<sub>c</sub> and w<sub>c</sub> are<br />
<br />
z<sub>c</sub> = -z<sub>e</sub>* (f+n)/(f-n) - w<sub>e</sub>* 2*f*n/(f-n)<br />
w<sub>c</sub> = -z<sub>e</sub><br />
<br />
Why the negations? OpenGL presents the programmer with a right-handed coordinate system before projection and a left-handed coordinate system after projection.<br />
<br />
The normalized device coordinate is then:<br />
<br />
z<sub>ndc</sub> =&nbsp;z<sub>c</sub> / w<sub>c</sub> = [ -z<sub>e</sub> * (f+n)/(f-n) - w<sub>e</sub> * 2*f*n/(f-n) ] / -z<sub>e</sub><br />
= (f+n)/(f-n) + (w<sub>e</sub> / z<sub>e</sub>) * 2*f*n/(f-n)<br />
<br />
The viewport transformation scales and offsets by the depth range (assume it to be [0, 1]) and then scales by s = 2<sup>b</sup> - 1, where b is the bit depth of the depth buffer:<br />
<br />
z<sub>w</sub> = s * [ (w<sub>e</sub> / z<sub>e</sub>) * f*n/(f-n) + 0.5 * (f+n)/(f-n) + 0.5 ]<br />
<br />
Let's rearrange this equation to express z<sub>e</sub> / w<sub>e</sub> as a function of z<sub>w</sub>:<br />
<br />
z<sub>e</sub> / w<sub>e</sub> = f*n/(f-n) / ((z<sub>w</sub> / s) - 0.5 * (f+n)/(f-n) - 0.5)<br />
= f * n / ((z<sub>w</sub> / s) * (f-n) - 0.5 * (f+n) - 0.5 * (f-n))<br />
= f * n / ((z<sub>w</sub> / s) * (f-n) - f) [*]<br />
<br />
Now let's look at two points, the zNear clipping plane and the zFar clipping plane:<br />
<br />
z<sub>w</sub> = 0&nbsp;=&gt; z<sub>e</sub> / w<sub>e</sub> = f * n / (-f) = -n<br />
z<sub>w</sub> = s =&gt; z<sub>e</sub> / w<sub>e</sub> = f * n / ((f-n) - f) = -f<br />
<br />
In a fixed-point depth buffer, z<sub>w</sub> is quantized to integers. The next representable depth buffer values in from the clip planes are 1 and s-1:<br />
<br />
z<sub>w</sub> = 1 =&gt; z<sub>e</sub> / w<sub>e</sub> = f * n / ((1/s) * (f-n) - f)<br />
z<sub>w</sub> = s-1 =&gt; z<sub>e</sub> / w<sub>e</sub> = f * n / (((s-1)/s) * (f-n) - f)<br />
<br />
Now let's plug in some numbers, for example, n = 0.01, f = 1000 and s = 65535 (i.e., a 16-bit depth buffer)<br />
<br />
z<sub>w</sub> = 1 =&gt; z<sub>e</sub> / w<sub>e</sub> = -0.01000015<br />
z<sub>w</sub> = s-1 =&gt; z<sub>e</sub> / w<sub>e</sub> = -395.90054<br />
<br />
Think about this last line. Everything at eye coordinate depths from -395.9 to -1000 has to map into either 65534 or 65535 in the z buffer. Almost two thirds of the distance between the zNear and zFar clipping planes will have one of two z-buffer values!<br />
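<br />
These numbers can be reproduced directly from equation [*]. A minimal sketch (the function name is an assumption):<br />

```c
#include <assert.h>
#include <math.h>

/* Eye-space depth (z_e / w_e) corresponding to a fixed-point depth
 * buffer value zw, from equation [*]:
 *   z_e / w_e = f * n / ((zw / s) * (f - n) - f)   */
double eye_depth(double zw, double n, double f, double s)
{
    return f * n / ((zw / s) * (f - n) - f);
}
```

With n = 0.01, f = 1000 and s = 65535, eye_depth(1, ...) is about -0.01000015 and eye_depth(65534, ...) is about -395.9, matching the values above.<br />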
<br />
To further analyze the z-buffer resolution, let's take the derivative of [*] with respect to z<sub>w</sub>:<br />
<br />
d (z<sub>e</sub> / w<sub>e</sub>) / d z<sub>w</sub> = - f * n * (f-n) * (1/s) / ((z<sub>w</sub> / s) * (f-n) - f)<sup>2</sup><br />
<br />
Now evaluate it at z<sub>w</sub> = s:<br />
<br />
d (z<sub>e</sub> / w<sub>e</sub>) / d z<sub>w</sub> = - f * (f-n) * (1/s) / n<br />
= - f * (f/n-1) / s [**]<br />
<br />
If you want your depth buffer to be useful near the zFar clipping plane, you need to keep this value to less than the size of your objects in eye space (for most practical uses, world space).<br />
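<br />
Expression [**] gives a quick way to check whether a proposed zNear/zFar pair is usable. A sketch (the function name is an assumption):<br />

```c
#include <assert.h>
#include <math.h>

/* Eye-space size of one depth buffer step at the zFar clipping plane,
 * the magnitude of expression [**]:  f * (f/n - 1) / s   */
double far_plane_resolution(double n, double f, double s)
{
    return f * (f / n - 1.0) / s;
}
```

For n = 0.01, f = 1000 and a 16-bit depth buffer (s = 65535), the rate of change at the far plane is about 1526 eye-space units per depth buffer step, larger than the entire zNear-to-zFar distance, a sign that the far end of the depth buffer is essentially useless for this plane configuration.<br />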
<br />
===== How do I turn off the zNear clipping plane? =====<br />
<br />
See [http://www.opengl.org/resources/faq/technical/clipping.htm#0050 this question] in the Clipping section.<br />
<br />
===== Why is there more precision at the front of the depth buffer? =====<br />
<br />
After the projection matrix transforms eye coordinates into clip coordinates, the XYZ vertex values are divided by their clip coordinate W value, which results in normalized device coordinates. This step is known as the perspective divide. The clip coordinate W value represents the distance from the eye. As the distance from the eye increases, 1/W approaches 0. Therefore, X/W and Y/W also approach zero, causing the rendered primitives to occupy less screen space and appear smaller. This is how computers simulate a perspective view.<br />
<br />
As in reality, motion toward or away from the eye has a less profound effect on objects that are already in the distance. For example, if you move six inches closer to the computer screen in front of your face, its apparent size increases quite dramatically. On the other hand, if the computer screen were already 20 feet away from you, moving six inches closer would have little noticeable impact on its apparent size. The perspective divide takes this into account.<br />
<br />
As part of the perspective divide, Z is also divided by W with the same results. For objects that are already close to the back of the view volume, a change in distance of one coordinate unit has less impact on Z/W than if the object is near the front of the view volume. To put it another way, an object coordinate Z unit occupies a larger slice of NDC-depth space close to the front of the view volume than it does near the back of the view volume.<br />
<br />
In summary, the perspective divide, by its nature, causes more Z precision close to the front of the view volume than near the back.<br />
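<br />
The nonlinearity is easy to demonstrate numerically. Using the z<sub>ndc</sub> formula derived earlier (with w<sub>e</sub> = 1; the function name is an assumption), half of the eye-space range between the clip planes maps into the top two percent of NDC depth:<br />

```c
#include <assert.h>
#include <math.h>

/* NDC depth for an eye-space depth ze (ze < 0, w_e = 1):
 *   z_ndc = (f+n)/(f-n) + (1/ze) * 2*f*n/(f-n)   */
double ndc_depth(double ze, double n, double f)
{
    return (f + n) / (f - n) + (1.0 / ze) * 2.0 * f * n / (f - n);
}
```

With n = 1 and f = 100, ndc_depth(-1) is -1 and ndc_depth(-100) is 1, as expected; but the midpoint of the eye-space range, ndc_depth(-50.5), is already about 0.98.<br />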
<br />
A previous question in this section contains related information.<br />
<br />
===== There is no way that a standard-sized depth buffer will have enough precision for my astronomically large scene. What are my options? =====<br />
<br />
The typical approach is to use a multipass technique. The application might divide the geometry database into regions that don't interfere with each other in Z. The geometry in each region is then rendered, starting at the furthest region, with a clear of the depth buffer before each region is rendered. This way the precision of the entire depth buffer is made available to each region.</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=General_OpenGL&diff=1521General OpenGL2006-04-30T15:42:39Z<p>Marco: </p>
<hr />
<div>This section explains the basics of the OpenGL API and answers some of the most frequently asked questions about it.<br />
<br />
* [[Viewing and Transforms]]<br />
* [[Clipping, Culling, and Visibility Testing]]<br />
* [[Color]]<br />
* [[The Depth Buffer]]<br />
* [[Texture Mapping]]<br />
* [[Drawing Lines over Polygons and Using Polygon Offset]]<br />
* [[Rasterization and Operations on the Framebuffer]]<br />
* [[Transparency, Translucency, and Using Blending]]<br />
* [[Display Lists and Vertex Arrays]]<br />
* [[Using Fonts]]<br />
* [[Lights and Shadows]]<br />
* [[Curves, Surfaces, and Using Evaluators]]<br />
* [[Picking and Using Selection]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=General_OpenGL&diff=1520General OpenGL2006-04-30T15:42:15Z<p>Marco: clear layout</p>
<hr />
<div>This section explains the basics of the OpenGL API and answers some of the most frequently asked questions about it.<br />
<br />
* [[Using Viewing and Camera Transforms, and gluLookAt()]]<br />
* [[Transformations]]<br />
* [[Clipping, Culling, and Visibility Testing]]<br />
* [[Color]]<br />
* [[The Depth Buffer]]<br />
* [[Texture Mapping]]<br />
* [[Drawing Lines over Polygons and Using Polygon Offset]]<br />
* [[Rasterization and Operations on the Framebuffer]]<br />
* [[Transparency, Translucency, and Using Blending]]<br />
* [[Display Lists and Vertex Arrays]]<br />
* [[Using Fonts]]<br />
* [[Lights and Shadows]]<br />
* [[Curves, Surfaces, and Using Evaluators]]<br />
* [[Picking and Using Selection]]</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Drawing_Lines_over_Polygons&diff=1518Drawing Lines over Polygons2006-04-30T15:41:20Z<p>Marco: General OpenGL: Drawing Lines over Polygons and Using Polygon Offset moved to Drawing Lines over Polygons and Using Polygon Offset</p>
<hr />
<div>===== What are the basics for using polygon offset? =====<br />
<br />
It's difficult to render coplanar primitives in OpenGL for two reasons:<br />
<br />
* Given two overlapping coplanar primitives with different vertices, floating point round-off errors from the two polygons can generate different depth values for overlapping pixels. With depth test enabled, some of the second polygon's pixels will pass the depth test, while some will fail.<br />
* For coplanar lines and polygons, vastly different depth values for common pixels can result. This is because depth values from polygon rasterization derive from the polygon's plane equation, while depth values from line rasterization derive from linear interpolation.<br />
<br />
Setting the depth function to GL_LEQUAL or GL_EQUAL won't resolve the problem. The visual result is referred to as stitching, bleeding, or Z fighting.<br />
<br />
Polygon offset was an extension to OpenGL 1.0 and has been part of core OpenGL since 1.1. It allows an application to define a depth offset that applies to filled primitives; under OpenGL 1.1 it can be enabled or disabled separately for primitives rendered in fill, line, and point mode. Thus, an application can render coplanar primitives by first rendering one primitive, then applying an offset and rendering the second primitive.<br />
<br />
While polygon offset can alter the depth value of filled primitives in point and line mode, under no circumstances will polygon offset affect the depth values of GL_POINTS, GL_LINES, GL_LINE_STRIP, or GL_LINE_LOOP primitives. If you are trying to render point or line primitives over filled primitives, use polygon offset to push the filled primitives back. (It can't be used to pull the point and line primitives forward.)<br />
<br />
Because polygon offset alters the Z value calculated during rasterization, the offset value is what gets stored in the depth buffer, and it can adversely affect the resulting image. In many circumstances, undesirable "bleed-through" effects can result. Indeed, polygon offset may cause some primitives to pass the depth test when they normally would not, or vice versa. When models intersect, polygon offset can cause an inaccurate rendering of the intersection.<br />
<br />
===== What are the two parameters in a glPolygonOffset() call and what do they mean? =====<br />
<br />
Polygon offset allows the application to specify a depth offset with two parameters, factor and units. The factor parameter scales the maximum Z slope of the polygon (with respect to X or Y), and the units parameter scales the minimum resolvable depth buffer difference. The two results are summed to produce the depth offset. This offset is applied in screen space, typically with positive Z pointing into the screen.<br />
<br />
The factor parameter is required to ensure correct results for filled primitives that are nearly edge-on to the viewer. In this case, the difference between Z values for the same pixel generated by two coplanar primitives can be as great as the maximum Z slope in X or Y. This Z slope will be large for nearly edge-on primitives, and almost non-existent for face-on primitives. The factor parameter lets you add this type of variable difference into the resulting depth offset.<br />
<br />
A typical use might be to set factor and units to 1.0 to offset primitives into positive Z (into the screen) and enable polygon offset for fill mode. Two passes are then made, once with the model's solid geometry and once again with the line geometry. Nearly edge-on filled polygons are pushed substantially away from the eyepoint, to minimize interference with the line geometry, while nearly face-on polygons are drawn at least one depth buffer unit behind the line geometry.<br />
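<br />
In C-style pseudocode, the two passes described above might look like the following sketch. The GL calls are standard OpenGL 1.1; drawModel() is a hypothetical routine that issues the model's polygons:<br />

```
/* Pass 1: filled geometry, pushed back by polygon offset */
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1.0f, 1.0f);
drawModel();                      /* hypothetical draw routine */
glDisable(GL_POLYGON_OFFSET_FILL);

/* Pass 2: the same geometry as lines, at unmodified depth */
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
drawModel();
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
```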
<br />
===== What's the difference between the OpenGL 1.0 polygon offset extension and OpenGL 1.1 (and later) polygon offset interfaces? =====<br />
<br />
The 1.0 polygon offset extension didn't let you apply the offset to filled primitives in line or point mode. Only filled primitives in fill mode could be offset.<br />
<br />
In the 1.0 extension, a bias parameter was added to the normalized (0.0 - 1.0) depth value, in place of the 1.1 units parameter. Typical applications might obtain a good offset by specifying a bias of 0.001.<br />
<br />
See the [http://www.opengl.org/resources/faq/technical/pgonoff.c GLUT example], which renders two cylinders, one using the 1.0 polygon offset extension and the other using the 1.1 polygon offset interface.<br />
<br />
===== Why doesn't polygon offset work when I draw line primitives over filled primitives? =====<br />
<br />
Polygon offset, as its name implies, only works with polygonal primitives. It affects only the filled primitives: GL_TRIANGLES, GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN, GL_QUADS, GL_QUAD_STRIP, and GL_POLYGON. Polygon offset will work when you render them with glPolygonMode set to GL_FILL, GL_LINE, or GL_POINT.<br />
<br />
Polygon offset doesn't affect non-polygonal primitives. The GL_POINTS, GL_LINES, GL_LINE_STRIP, and GL_LINE_LOOP primitives can't be offset with glPolygonOffset().<br />
<br />
===== What other options do I have for drawing coplanar primitives when I don't want to use polygon offset? =====<br />
<br />
You can simulate the effects of polygon offset by tinkering with glDepthRange(). For example, you might code the following:<br />
<br />
<pre> glDepthRange (0.1, 1.0);<br />
/* Draw underlying geometry */<br />
glDepthRange (0.0, 0.9);<br />
/* Draw overlying geometry */</pre><br />
<br />
This code provides a fixed offset in Z, but doesn't account for the polygon slope. It's roughly equivalent to using glPolygonOffset with a factor parameter of 0.0.<br />
<br />
You can render coplanar primitives with the stencil buffer in many creative ways. The OpenGL Programming Guide outlines one well-known method. The algorithm for drawing a polygon and its outline is as follows:<br />
<br />
# Draw the outline into the color, depth, and stencil buffers.<br />
# Draw the filled primitive into the color buffer and depth buffer, but only where the stencil buffer is clear.<br />
# Mask off the color and depth buffers, and render the outline to clear the stencil buffer.<br />
<br />
On some SGI OpenGL platforms, an application can use the SGIX_reference_plane extension. With this extension, the user specifies a plane equation in object coordinates corresponding to a set of coplanar primitives. You can enable or disable the plane. When the plane is enabled, all fragment Z values will derive from the specified plane equation. Thus, for any given fragment XY location, the depth value is guaranteed to be identical regardless of which primitive rendered it.</div>Marcohttps://www.khronos.org/opengl/wiki_opengl/index.php?title=General_OpenGL:_Drawing_Lines_over_Polygons_and_Using_Polygon_Offset&diff=1519General OpenGL: Drawing Lines over Polygons and Using Polygon Offset2006-04-30T15:41:20Z<p>Marco: General OpenGL: Drawing Lines over Polygons and Using Polygon Offset moved to Drawing Lines over Polygons and Using Polygon Offset</p>
<hr />
<div>#redirect [[Drawing Lines over Polygons and Using Polygon Offset]]</div>Marco