OpenGL Wiki - User contributions [en] (feed generated 2022-01-24T14:45:05Z, MediaWiki 1.35.5)

Shading languages (edited by Tom, 2005-11-14T23:54:56Z)
<hr />
<div>==== [[Shading languages: vendor-specific assembly-level]] ====<br />
<br />
This section discusses the various vendor-specific shading languages.<br />
<br />
==== [[Shading languages: ARB assembly-level]] ====<br />
<br />
This section discusses ARB_fragment_program and ARB_vertex_program.<br />
<br />
==== [[Shading languages: GLSL]] ====<br />
<br />
This section discusses the OpenGL Shading Language, or GLSL.<br />
<br />
==== [[Shading languages: Cg]] ====<br />
<br />
This section discusses NVidia's Cg language.<br />
<br />
==== [[Shading languages: Which shading language should I use?]] ====<br />
<br />
This section looks at each shading language's pros and cons, to help you decide which one is right for your project.</div>

Platform Specific (edited by Tom, 2005-11-14T23:42:25Z)
<hr />
<div>This section describes the various OS-specific issues that OpenGL developers may run into. The section is further split up per platform:<br />
<br />
* [[Platform specifics: Windows]]<br />
* [[Platform specifics: Linux]]<br />
* [[Platform specifics: MacOS]]</div>

FAQ/Depth Buffer (edited by Tom, 2005-11-14T23:39:01Z)
<hr />
<div>===== How do I make depth buffering work? =====<br />
<br />
Your application needs to do at least the following to get depth buffering to work:<br />
<br />
# Ask for a depth buffer when you create your window.<br />
# Place a call to glEnable (GL_DEPTH_TEST) in your program's initialization routine, after a context is created and made current.<br />
# Ensure that your zNear and zFar clipping planes are set correctly and in a way that provides adequate depth buffer precision.<br />
# Pass GL_DEPTH_BUFFER_BIT as a parameter to glClear, typically bitwise OR'd with other values such as GL_COLOR_BUFFER_BIT. <br />
<br />
There are a number of OpenGL example programs available on the Web that use depth buffering. If you're having trouble getting depth buffering to work correctly, you might benefit from looking at an example program to see what is done differently. This FAQ contains [http://www.opengl.org/resources/faq/technical/gettingstarted.htm#gett0002 links to several web sites that have example OpenGL code].<br />
<br />
===== Depth buffering doesn't work in my perspective rendering. What's going on? =====<br />
<br />
Make sure the zNear and zFar clipping planes are specified correctly in your calls to glFrustum() or gluPerspective().<br />
<br />
A mistake many programmers make is to specify a zNear clipping plane value of 0.0 or a negative value, neither of which is allowed. Both the zNear and zFar clipping planes must be positive (not zero or negative) values that represent distances in front of the eye.<br />
<br />
Specifying a zNear clipping plane value of 0.0 to gluPerspective() won't generate an OpenGL error, but it might cause depth buffering to act as if it's disabled. A negative zNear or zFar clipping plane value would produce undesirable results.<br />
<br />
A zNear or zFar clipping plane value of zero or a negative value, when passed to glFrustum(), generates a GL_INVALID_VALUE error that you can retrieve by calling glGetError(), and the call then acts as a no-op.<br />
<br />
===== How do I write a previously stored depth image to the depth buffer? =====<br />
<br />
Use the glDrawPixels() command, with the format parameter set to GL_DEPTH_COMPONENT. You may want to mask off the color buffer when you do this, with a call to glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); .<br />
<br />
===== Depth buffering seems to work, but polygons seem to bleed through polygons that are in front of them. What's going on? =====<br />
<br />
You may have configured your zNear and zFar clipping planes in a way that severely limits your depth buffer precision. Generally, this is caused by a zNear clipping plane value that's too close to 0.0. As the zNear clipping plane is set increasingly closer to 0.0, the effective precision of the depth buffer decreases dramatically. Moving the zFar clipping plane further away from the eye also hurts depth buffer precision, but not as dramatically as moving the zNear clipping plane closer to the eye.<br />
<br />
The OpenGL Reference Manual description for glFrustum() relates depth precision to the zNear and zFar clipping planes by saying that roughly log2(zFar/zNear) bits of precision are lost. Clearly, as zNear approaches zero, this equation approaches infinity.<br />
<br />
While the blue book description is good at pointing out the relationship, it's somewhat inaccurate. As the ratio (zFar/zNear) increases, less precision is available near the back of the depth buffer and more precision is available close to the front of the depth buffer. So primitives are more likely to interact in Z if they are further from the viewer.<br />
<br />
It's possible that you simply don't have enough precision in your depth buffer to render your scene. See the last question in this section for more info.<br />
<br />
It's also possible that you are drawing coplanar primitives. Round-off errors or differences in rasterization typically create "Z fighting" for coplanar primitives. Here are some [http://www.opengl.org/resources/faq/technical/polygonoffset.htm options to assist you when rendering coplanar primitives].<br />
<br />
===== Why is my depth buffer precision so poor? =====<br />
<br />
The depth buffer precision in eye coordinates is strongly affected by the ratio of zFar to zNear, the zFar clipping plane, and how far an object is from the zNear clipping plane.<br />
<br />
You need to do whatever you can to push the zNear clipping plane out and pull the zFar plane in as much as possible.<br />
<br />
To be more specific, consider the transformation of depth from eye coordinates<br />
<br />
x<sub>e</sub>, y<sub>e</sub>, z<sub>e</sub>, w<sub>e</sub><br />
<br />
to window coordinates<br />
<br />
x<sub>w</sub>, y<sub>w</sub>, z<sub>w</sub><br />
<br />
with a perspective projection matrix specified by<br />
<br />
glFrustum(l, r, b, t, n, f);<br />
<br />
and assume the default viewport transform. The clip coordinates z<sub>c</sub> and w<sub>c</sub> are<br />
<br />
z<sub>c</sub> = -z<sub>e</sub>* (f+n)/(f-n) - w<sub>e</sub>* 2*f*n/(f-n)<br />
w<sub>c</sub> = -z<sub>e</sub><br />
<br />
(Why the negations? OpenGL presents a right-handed coordinate system to the programmer before projection and a left-handed one after projection.)<br />
<br />
The NDC coordinate is then:<br />
<br />
z<sub>ndc</sub> =&nbsp;z<sub>c</sub> / w<sub>c</sub> = [ -z<sub>e</sub> * (f+n)/(f-n) - w<sub>e</sub> * 2*f*n/(f-n) ] / -z<sub>e</sub><br />
= (f+n)/(f-n) + (w<sub>e</sub> / z<sub>e</sub>) * 2*f*n/(f-n)<br />
<br />
The viewport transformation scales and offsets by the depth range (assume it to be [0, 1]) and then scales by s = 2<sup>b</sup> - 1, where b is the bit depth of the depth buffer:<br />
<br />
z<sub>w</sub> = s * [ (w<sub>e</sub> / z<sub>e</sub>) * f*n/(f-n) + 0.5 * (f+n)/(f-n) + 0.5 ]<br />
<br />
Let's rearrange this equation to express z<sub>e</sub> / w<sub>e</sub> as a function of z<sub>w</sub>:<br />
<br />
z<sub>e</sub> / w<sub>e</sub> = f*n/(f-n) / ((z<sub>w</sub> / s) - 0.5 * (f+n)/(f-n) - 0.5)<br />
= f * n / ((z<sub>w</sub> / s) * (f-n) - 0.5 * (f+n) - 0.5 * (f-n))<br />
= f * n / ((z<sub>w</sub> / s) * (f-n) - f) [*]<br />
<br />
Now let's look at two points, the zNear clipping plane and the zFar clipping plane:<br />
<br />
z<sub>w</sub> = 0&nbsp;=&gt; z<sub>e</sub> / w<sub>e</sub> = f * n / (-f) = -n<br />
z<sub>w</sub> = s =&gt; z<sub>e</sub> / w<sub>e</sub> = f * n / ((f-n) - f) = -f<br />
<br />
In a fixed-point depth buffer, z<sub>w</sub> is quantized to integers. The next representable z-buffer depths away from the clip planes are 1 and s-1:<br />
<br />
z<sub>w</sub> = 1 =&gt; z<sub>e</sub> / w<sub>e</sub> = f * n / ((1/s) * (f-n) - f)<br />
z<sub>w</sub> = s-1 =&gt; z<sub>e</sub> / w<sub>e</sub> = f * n / (((s-1)/s) * (f-n) - f)<br />
<br />
Now let's plug in some numbers, for example, n = 0.01, f = 1000 and s = 65535 (i.e., a 16-bit depth buffer)<br />
<br />
z<sub>w</sub> = 1 =&gt; z<sub>e</sub> / w<sub>e</sub> = -0.01000015<br />
z<sub>w</sub> = s-1 =&gt; z<sub>e</sub> / w<sub>e</sub> = -395.90054<br />
<br />
Think about this last line. Everything at eye coordinate depths from -395.9 to -1000 has to map into either 65534 or 65535 in the z buffer. Almost two thirds of the distance between the zNear and zFar clipping planes will have one of two z-buffer values!<br />
<br />
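These numbers can be reproduced by evaluating [*] directly. A minimal sketch (plain C, no GL calls; the helper name is illustrative):<br />
<br />
```c
/* Invert the depth mapping [*]: recover eye-space depth (ze/we)
   from a depth-buffer value zw, for glFrustum() planes n and f and
   an s-step fixed-point depth buffer. */
double eye_depth(double zw, double n, double f, double s)
{
    return f * n / ((zw / s) * (f - n) - f);
}
```
<br />
With n = 0.01, f = 1000 and s = 65535, eye_depth(1, ...) is about -0.01000015 and eye_depth(s-1, ...) is about -395.9, matching the table above.<br />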
To further analyze the z-buffer resolution, let's take the derivative of [*] with respect to z<sub>w</sub>:<br />
<br />
d (z<sub>e</sub> / w<sub>e</sub>) / d z<sub>w</sub> = - f * n * (f-n) * (1/s) / ((z<sub>w</sub> / s) * (f-n) - f)<sup>2</sup><br />
<br />
Now evaluate it at z<sub>w</sub> = s:<br />
<br />
d (z<sub>e</sub> / w<sub>e</sub>) / d z<sub>w</sub> = - f * (f-n) * (1/s) / n<br />
= - f * (f/n-1) / s [**]<br />
<br />
If you want your depth buffer to be useful near the zFar clipping plane, you need to keep this value to less than the size of your objects in eye space (for most practical uses, world space).<br />
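Plugging the earlier numbers into [**] shows how stringent that requirement is. A small stand-alone calculation (the helper name is illustrative):<br />
<br />
```c
/* Depth-buffer resolution [**] at the zFar plane: the change in
   eye-space depth covered by one depth-buffer step at zw = s. */
double far_plane_resolution(double n, double f, double s)
{
    return -f * (f / n - 1.0) / s;
}
```
<br />
For n = 0.01, f = 1000 and s = 65535, the local slope at the far plane has a magnitude of roughly 1526 eye-space units per depth-buffer step, larger than the entire 1000-unit view depth, which is consistent with distant geometry collapsing into just a couple of depth values.<br />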
<br />
===== How do I turn off the zNear clipping plane? =====<br />
<br />
See [http://www.opengl.org/resources/faq/technical/clipping.htm#0050 this question] in the Clipping section.<br />
<br />
===== Why is there more precision at the front of the depth buffer? =====<br />
<br />
After the projection matrix transforms eye coordinates into clip coordinates, the X, Y, and Z vertex values are divided by their clip coordinate W value, which results in normalized device coordinates. This step is known as the perspective divide. The clip coordinate W value represents the distance from the eye. As the distance from the eye increases, 1/W approaches 0. Therefore, X/W and Y/W also approach zero, causing the rendered primitives to occupy less screen space and appear smaller. This is how computers simulate a perspective view.<br />
<br />
As in reality, motion toward or away from the eye has a less profound effect for objects that are already in the distance. For example, if you move six inches closer to the computer screen in front of your face, its apparent size should increase quite dramatically. On the other hand, if the computer screen were already 20 feet away from you, moving six inches closer would have little noticeable impact on its apparent size. The perspective divide takes this into account.<br />
<br />
As part of the perspective divide, Z is also divided by W with the same results. For objects that are already close to the back of the view volume, a change in distance of one coordinate unit has less impact on Z/W than if the object is near the front of the view volume. To put it another way, an object coordinate Z unit occupies a larger slice of NDC-depth space close to the front of the view volume than it does near the back of the view volume.<br />
<br />
In summary, the perspective divide, by its nature, causes more Z precision close to the front of the view volume than near the back.<br />
<br />
A previous question in this section contains related information.<br />
<br />
===== There is no way that a standard-sized depth buffer will have enough precision for my astronomically large scene. What are my options? =====<br />
<br />
The typical approach is to use a multipass technique. The application might divide the geometry database into regions that don't interfere with each other in Z. The geometry in each region is then rendered, starting at the furthest region, with a clear of the depth buffer before each region is rendered. This way the precision of the entire depth buffer is made available to each region.</div>

FAQ/Clipping, Culling, and Visibility Testing (edited by Tom, 2005-11-14T23:19:04Z)
<hr />
<div>===== How do I tell if a vertex has been clipped or not? =====<br />
<br />
You can use the OpenGL Feedback feature to determine if a vertex will be clipped or not. After you're in Feedback mode, simply send the vertex in question as a GL_POINTS primitive. Then switch back to GL_RENDER mode and check the size of the Feedback buffer. A size of zero indicates a clipped vertex.<br />
<br />
Typically, OpenGL implementations don't provide a fast feedback mechanism. It might be faster to perform the clip test manually. To do so, construct six plane equations corresponding to the clip-coordinate view volume and transform them into object space by the current ModelView matrix. A point is clipped if it violates any of the six plane equations.<br />
<br />
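Equivalently, you can transform the point into clip coordinates yourself and apply OpenGL's clip rule (-w<sub>c</sub> &le; x<sub>c</sub>, y<sub>c</sub>, z<sub>c</sub> &le; w<sub>c</sub>) directly. A minimal sketch, assuming you have the combined Projection * ModelView matrix on hand (the helper name is illustrative):<br />
<br />
```c
/* Returns 1 if the object-space point (x, y, z) survives view-volume
   clipping under the combined Projection * ModelView matrix m
   (column-major, as returned by glGetFloatv), 0 if it is clipped.
   OpenGL keeps a clip-space point when -wc <= xc, yc, zc <= wc. */
int point_inside_view_volume(const float m[16], float x, float y, float z)
{
    float c[4];
    int i;
    for (i = 0; i < 4; ++i)  /* c = m * (x, y, z, 1) */
        c[i] = m[i] * x + m[4 + i] * y + m[8 + i] * z + m[12 + i];
    return c[0] >= -c[3] && c[0] <= c[3] &&
           c[1] >= -c[3] && c[1] <= c[3] &&
           c[2] >= -c[3] && c[2] <= c[3];
}
```
<br />
With the identity matrix, for example, the view volume is the unit cube [-1, 1] in all three axes, so (0.5, -0.5, 0) is inside and (2, 0, 0) is clipped.<br />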
Here's a [http://www.opengl.org/resources/faq/technical/viewcull.c GLUT example] that shows how to calculate the object-space view-volume planes and clip test bounding boxes against them.<br />
<br />
Here is a tutorial titled [http://www.markmorley.com/opengl/frustumculling.html Frustum Culling in OpenGL].<br />
<br />
===== How do I perform occlusion or visibility testing? =====<br />
<br />
OpenGL provides no direct support for determining whether a given primitive will be visible in a scene for a given viewpoint. At worst, an application will need to perform these tests manually. The previous question contains information on how to do this.<br />
<br />
The code example from question 10.010 was combined with Nate Robins' excellent viewing tutorial to produce this [http://lynx.inertiagames.com/~michael/OPENGLTUTORS.zip view culling example code].<br />
<br />
Higher-level APIs, such as Fahrenheit Large Model, may provide this feature.<br />
<br />
HP OpenGL platforms support an Occlusion Culling extension. To use this extension, enable the occlusion test, render some bounding geometry, and call glGetBooleanv() to obtain the visibility status of the geometry.<br />
<br />
===== How do I render to a nonrectangular viewport? =====<br />
<br />
OpenGL's stencil buffer can be used to mask the area outside of a non-rectangular viewport. With stencil enabled and stencil test appropriately set, rendering can then occur in the unmasked area. Typically an application will write the stencil mask once, and then render repeated frames into the unmasked area.<br />
<br />
As with the depth buffer, an application must ask for a stencil buffer when the window and context are created.<br />
<br />
An application will perform such a rendering as follows:<br />
<br />
<pre> /* Enable stencil test and leave it enabled throughout */<br />
glEnable (GL_STENCIL_TEST);<br />
<br />
/* Prepare to write a single bit into the stencil buffer in the area outside the viewport */<br />
glStencilFunc (GL_ALWAYS, 0x1, 0x1);<br />
<br />
/* Render a set of geometry corresponding to the area outside the viewport here */<br />
<br />
/* The stencil buffer now has a single bit painted in the area outside the viewport */<br />
<br />
/* Prepare to render the scene in the viewport */<br />
glStencilFunc (GL_EQUAL, 0x0, 0x1);<br />
<br />
/* Render the scene inside the viewport here */<br />
<br />
/* ...render the scene again as needed for animation purposes */</pre><br />
<br />
After a single bit is painted in the area outside the viewport, an application may render geometry to either the area inside or outside the viewport. To render to the inside area, use glStencilFunc(GL_EQUAL,0x0,0x1), as the code above shows. To render to the area outside the viewport, use glStencilFunc(GL_EQUAL,0x1,0x1).<br />
<br />
You can obtain similar results using only the depth test. After rendering a 3D scene to a rectangular viewport, an app can clear the depth buffer and render the nonrectangular frame.<br />
<br />
===== When an OpenGL primitive moves placing one vertex outside the window, suddenly the color or texture mapping is incorrect. What's going on? =====<br />
<br />
There are two potential causes for this.<br />
<br />
When a primitive lies partially outside the window, it often crosses the view volume boundary. OpenGL must clip any primitive that crosses the view volume boundary. To clip a primitive, OpenGL must interpolate the color values, so they're correct at the new clip vertex. This interpolation is perspective correct. However, when a primitive is rasterized, the color values are often generated using linear interpolation in window space, which isn't perspective correct. The difference in generated color values means that for any given barycentric coordinate location on a filled primitive, the color values may be different depending on whether the primitive is clipped. If the color values generated during rasterization were perspective correct, this problem wouldn't exist.<br />
<br />
For some OpenGL implementations, texture coordinates generated during rasterization aren't perspective correct. However, you can usually make them perspective correct by calling glHint(GL_PERSPECTIVE_CORRECTION_HINT,GL_NICEST);. Colors generated at the rasterization stage aren't perspective correct in almost every OpenGL implementation, and can't be made so. For this reason, you're more likely to encounter this problem with colors than texture coordinates.<br />
<br />
A second reason the color or texture mapping might be incorrect for a clipped primitive is because the color values or texture coordinates are nonplanar. Color values are nonplanar when the three color components at each vertex don't lie in a plane in 3D color space. 2D texture coordinates are always planar. However, in this context, the term nonplanar is used for texture coordinates that look up a texel area that isn't congruent in shape to the primitive being textured.<br />
<br />
Nonplanar colors or texture coordinates aren't a problem for triangular primitives, but the problem may occur with GL_QUADS, GL_QUAD_STRIP and GL_POLYGON primitives. When using nonplanar color values or texture coordinates, there isn't a correct way to generate new values associated with clipped vertices. Even perspective-correct interpolation can create differences between clipped and nonclipped primitives. The solution to this problem is to not use nonplanar color values and texture coordinates.<br />
<br />
===== I know my geometry is inside the view volume. How can I turn off OpenGL's view-volume clipping to maximize performance? =====<br />
<br />
Standard OpenGL doesn't provide a mechanism to disable the view-volume clipping test; thus, it will occur for every primitive you send.<br />
<br />
Some implementations of OpenGL support the GL_EXT_clip_volume_hint extension. If the extension is available, a call to glHint(GL_CLIP_VOLUME_CLIPPING_HINT_EXT,GL_FASTEST) will inform OpenGL that the geometry is entirely within the view volume and that view-volume clipping is unnecessary. Normal clipping can be resumed by setting this hint to GL_DONT_CARE. When clipping is disabled with this hint, results are undefined if geometry actually falls outside the view volume.<br />
<br />
===== When I move the viewpoint close to an object, it starts to disappear. How can I disable OpenGL's zNear clipping plane? =====<br />
<br />
You can't. If you think about it, it makes sense: What if the viewpoint is in the middle of a scene? Certainly some geometry is behind the viewer and needs to be clipped. Rendering it will produce undesirable results.<br />
<br />
For correct perspective and depth buffer calculations to occur, setting the zNear clipping plane to 0.0 is also not an option. The zNear clipping plane must be set at a positive (nonzero) distance in front of the eye.<br />
<br />
To avoid the clipping artifacts that can otherwise occur, an application must track the viewpoint location within the scene, and ensure it doesn't get too close to any geometry. You can usually do this with a simple form of collision detection. This FAQ contains more [http://www.opengl.org/resources/faq/technical/miscellaneous.htm#misc0110 information on collision detection] with OpenGL.<br />
<br />
If you're certain that your geometry doesn't intersect any of the view-volume planes, you might be able to use an extension to disable clipping. See the previous question for more information.<br />
<br />
===== How do I draw glBitmap() or glDrawPixels() primitives that have an initial glRasterPos() outside the window's left or bottom edge? =====<br />
<br />
When the raster position is set outside the window, it's often outside the view volume and subsequently marked as invalid. Rendering the glBitmap and glDrawPixels primitives won't occur with an invalid raster position. Because glBitmap/glDrawPixels produce pixels up and to the right of the raster position, it appears impossible to render this type of primitive clipped by the left and/or bottom edges of the window.<br />
<br />
However, here's an often-used trick: Set the raster position to a valid value inside the view volume. Then make the following call:<br />
<br />
<pre> glBitmap (0, 0, 0, 0, xMove, yMove, NULL);</pre><br />
<br />
This tells OpenGL to render a no-op bitmap, but move the current raster position by (xMove,yMove). Your application will supply (xMove,yMove) values that place the raster position outside the view volume. Follow this call with the glBitmap() or glDrawPixels() to do the rendering you desire.<br />
<br />
===== Why doesn't glClear() work for areas outside the scissor rectangle? =====<br />
<br />
The OpenGL Specification states that glClear() only clears the scissor rectangle when the scissor test is enabled. If you want to clear the entire window, use the code:<br />
<br />
<pre> glDisable (GL_SCISSOR_TEST);<br />
glClear (...);<br />
glEnable (GL_SCISSOR_TEST);</pre><br />
<br />
===== How does face culling work? Why doesn't it use the surface normal? =====<br />
<br />
OpenGL face culling calculates the signed area of the filled primitive in window coordinate space. The signed area is positive when the window coordinates are in a counter-clockwise order and negative when clockwise. An app can use glFrontFace() to specify the ordering, counter-clockwise or clockwise, to be interpreted as a front-facing or back-facing primitive. An application can specify culling either front or back faces by calling glCullFace(). Finally, face culling must be enabled with a call to glEnable(GL_CULL_FACE); .<br />
<br />
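The signed-area computation itself is a simple cross product of two triangle edges in window space. A minimal sketch of the idea (not the implementation's actual code; the helper name is illustrative):<br />
<br />
```c
/* Twice the signed area of a triangle in window coordinates.
   Positive when the vertices wind counter-clockwise (OpenGL's
   default front face), negative when they wind clockwise. */
float signed_area2(float x0, float y0, float x1, float y1,
                   float x2, float y2)
{
    return (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0);
}
```
<br />
With glFrontFace(GL_CCW) and glCullFace(GL_BACK), a primitive whose signed area comes out negative would be discarded.<br />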
OpenGL uses your primitive's window space projection to determine face culling for two reasons. To create interesting lighting effects, it's often desirable to specify normals that aren't orthogonal to the surface being approximated. If these normals were used for face culling, it might cause some primitives to be culled erroneously. Also, a dot-product culling scheme could require a matrix inversion, which isn't always possible (i.e., in the case where the matrix is singular), whereas the signed area in device coordinate space is always defined.<br />
<br />
However, some OpenGL implementations support the GL_EXT_cull_vertex extension. If this extension is present, an application may specify a homogeneous eye position in object space. Vertices are flagged as culled, based on the dot product of the current normal with a vector from the vertex to the eye. If all vertices of a primitive are culled, the primitive isn't rendered. In many circumstances, using this extension results in faster rendering, because it culls faces at an earlier stage of the rendering pipeline.</div>

General OpenGL: Transformations (edited by Tom, 2005-11-14T23:13:03Z)
<hr />
<div>===== I can't get transformations to work. Where can I learn more about matrices? =====<br />
<br />
A thorough explanation of basic matrix math and linear algebra is beyond the scope of this FAQ. These concepts are taught in high school math classes in the United States.<br />
<br />
If you understand the basics, but just get confused (a common problem even for the experienced!), read through Steve Baker's [http://web2.airmail.net/sjbaker1/matrices_can_be_your_friends.html review of matrix concepts] and his [http://web2.airmail.net/sjbaker1/eulers_are_evil.html article on Euler angles].<br />
<br />
Delphi code for performing basic vector, matrix, and quaternion operations can be found [http://www.lischke-online.de/Graphics.html here].<br />
<br />
===== Are OpenGL matrices column-major or row-major? =====<br />
<br />
For programming purposes, OpenGL matrices are 16-value arrays with base vectors laid out contiguously in memory. The translation components occupy the 13th, 14th, and 15th elements of the 16-element matrix.<br />
<br />
Column-major versus row-major is purely a notational convention. Note that post-multiplying with column-major matrices produces the same result as pre-multiplying with row-major matrices. The OpenGL Specification and the OpenGL Reference Manual both use column-major notation. You can use any notation, as long as it's clearly stated.<br />
<br />
Sadly, the use of column-major format in the spec and blue book has resulted in endless confusion in the OpenGL programming community. Column-major notation suggests that matrices are not laid out in memory as a programmer would expect.<br />
<br />
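The memory layout itself is unambiguous, whatever notation you prefer. A minimal sketch of building a translation matrix the way glLoadMatrixf() and glGetFloatv() expect it (the helper name is illustrative):<br />
<br />
```c
/* Build an OpenGL-style translation matrix in a float[16]: base
   vectors contiguous in memory, translation components in
   elements 12, 13, and 14 (the 13th, 14th, and 15th values). */
void translation_matrix(float m[16], float tx, float ty, float tz)
{
    int i;
    for (i = 0; i < 16; ++i)
        m[i] = (i % 5 == 0) ? 1.0f : 0.0f;  /* start from identity */
    m[12] = tx;
    m[13] = ty;
    m[14] = tz;
}
```
<br />
In column-major notation this is a translation column on the right; in row-major notation it is a translation row on the bottom. The sixteen floats in memory are identical either way.<br />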
A summary of Usenet postings on the subject can be found [http://research.microsoft.com/~hollasch/cgindex/math/matrix/column-vec.html here].<br />
<br />
===== What are OpenGL coordinate units? =====<br />
<br />
The short answer: Anything you want them to be.<br />
<br />
Depending on the contents of your geometry database, it may be convenient for your application to treat one OpenGL coordinate unit as being equal to one millimeter or one parsec or anything in between (or larger or smaller).<br />
<br />
OpenGL also lets you specify your geometry with coordinates of differing values. For example, you may find it convenient to model an airplane's controls in centimeters, its fuselage in meters, and a world to fly around in kilometers. OpenGL's ModelView matrix can then scale these different coordinate systems into the same eye coordinate space.<br />
<br />
It's the application's responsibility to ensure that the Projection and ModelView matrices are constructed to provide an image that keeps the viewer at an appropriate distance, with an appropriate field of view, and keeps the zNear and zFar clipping planes at an appropriate range. An application that displays molecules in micron scale, for example, would probably not want to place the viewer at a distance of 10 feet with a 60 degree field of view.<br />
<br />
===== How are coordinates transformed? What are the different coordinate spaces? =====<br />
<br />
Object Coordinates are transformed by the ModelView matrix to produce Eye Coordinates.<br />
<br />
Eye Coordinates are transformed by the Projection matrix to produce Clip Coordinates.<br />
<br />
Clip Coordinate X, Y, and Z are divided by Clip Coordinate W to produce Normalized Device Coordinates.<br />
<br />
Normalized Device Coordinates are scaled and translated by the viewport parameters to produce Window Coordinates.<br />
<br />
Object coordinates are the raw coordinates you submit to OpenGL with a call to glVertex*() or glVertexPointer(). They represent the coordinates of your object or other geometry you want to render.<br />
<br />
Many programmers use a World Coordinate system. Objects are often modeled in one coordinate system, then scaled, translated, and rotated into the world you're constructing. World Coordinates result from transforming Object Coordinates by the modelling transforms stored in the ModelView matrix. However, OpenGL has no concept of World Coordinates. World Coordinates are purely an application construct.<br />
<br />
Eye Coordinates result from transforming Object Coordinates by the ModelView matrix. The ModelView matrix contains both modelling and viewing transformations that place the viewer at the origin with the view direction aligned with the negative Z axis.<br />
<br />
Clip Coordinates result from transforming Eye Coordinates by the Projection matrix. Clip Coordinate space ranges from -Wc to Wc in all three axes, where Wc is the Clip Coordinate W value. OpenGL clips all coordinates outside this range.<br />
<br />
Perspective division performed on the Clip Coordinates produces Normalized Device Coordinates, ranging from -1 to 1 in all three axes.<br />
<br />
Window Coordinates result from scaling and translating Normalized Device Coordinates by the viewport. The parameters to glViewport() and glDepthRange() control this transformation. With the viewport, you can map the Normalized Device Coordinate cube to any location in your window and depth buffer.<br />
<br />
For more information, see the [http://www.opengl.org/resources/faq/technical/gettingstarted.htm#gett0002 OpenGL Specification], Figure 2.6.<br />
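The last two stages of the pipeline above involve no matrices at all and are easy to write out. A minimal sketch, assuming the default glDepthRange() of [0, 1] (the helper name is illustrative):<br />
<br />
```c
/* Transform clip coordinates to window coordinates: perspective
   divide, then the viewport scale and offset.  (vx, vy, vw, vh)
   are the glViewport() parameters; the depth range is assumed to
   be the default [0, 1]. */
void clip_to_window(const float clip[4], int vx, int vy, int vw, int vh,
                    float win[3])
{
    float xn = clip[0] / clip[3];  /* normalized device coordinates */
    float yn = clip[1] / clip[3];
    float zn = clip[2] / clip[3];
    win[0] = vx + (xn + 1.0f) * 0.5f * vw;  /* viewport transform */
    win[1] = vy + (yn + 1.0f) * 0.5f * vh;
    win[2] = (zn + 1.0f) * 0.5f;            /* default glDepthRange */
}
```
<br />
For a 640x480 viewport at the origin, the clip-space origin (0, 0, 0, 1) lands at the window center (320, 240) with a depth of 0.5.<br />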
<br />
===== How do I transform only one object in my scene or give each object its own transform? =====<br />
<br />
OpenGL provides matrix stacks specifically for this purpose. In this case, use the ModelView matrix stack.<br />
<br />
A typical OpenGL application first sets the matrix mode with a call to glMatrixMode(GL_MODELVIEW) and loads a viewing transform, perhaps with a call to gluLookAt(). More information is available on [http://www.opengl.org/resources/faq/technical/viewing.htm gluLookAt()].<br />
<br />
Then the code renders each object in the scene with its own transformation by wrapping the rendering with calls to glPushMatrix() and glPopMatrix(). For example:<br />
<br />
<pre> glPushMatrix();<br />
glRotatef(90., 1., 0., 0.);<br />
gluCylinder(quad,1,1,2,36,12);<br />
glPopMatrix();</pre><br />
<br />
The above code renders a cylinder rotated 90 degrees around the X-axis. The ModelView matrix is restored to its previous value after the glPopMatrix() call. Similar call sequences can render subsequent objects in the scene.<br />
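Conceptually, the ModelView stack behaves like an ordinary array-backed stack of 4x4 matrices: glPushMatrix() duplicates the top entry and glPopMatrix() discards it. A minimal sketch of that behavior, with hypothetical names (the real stack lives inside the GL implementation):<br />
<br />
```c
#include <string.h>

#define STACK_DEPTH 32

/* A toy model of OpenGL's ModelView matrix stack. */
typedef struct {
    float m[STACK_DEPTH][16];
    int top;
} MatrixStack;

void stack_push(MatrixStack *s)  /* like glPushMatrix() */
{
    memcpy(s->m[s->top + 1], s->m[s->top], sizeof(float) * 16);
    s->top++;
}

void stack_pop(MatrixStack *s)   /* like glPopMatrix() */
{
    s->top--;
}
```
<br />
Transforms applied after a push only modify the copy on top, which is why the pop restores the pre-push ModelView matrix exactly.<br />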
<br />
===== How do I draw 2D controls over my 3D rendering? =====<br />
<br />
The basic strategy is to set up a 2D projection for drawing controls. You can do this either on top of your 3D rendering or in overlay planes. If you do so on top of a 3D rendering, you'll need to redraw the controls at the end of every frame (immediately before swapping buffers). If you draw into the overlay planes, you only need to redraw the controls if you're updating them.<br />
<br />
To set up a 2D projection, you need to change the Projection matrix. Normally, it's convenient to set up the projection so one world coordinate unit is equal to one screen pixel, as follows:<br />
<br />
<pre> glMatrixMode (GL_PROJECTION);<br />
glLoadIdentity ();<br />
gluOrtho2D (0, windowWidth, 0, windowHeight);</pre><br />
<br />
gluOrtho2D() sets up a Z range of -1 to 1, so use the glVertex2*() functions (which set Z to 0) to ensure your geometry isn't clipped by the zNear or zFar clipping planes.<br />
<br />
Normally, the ModelView matrix is set to the identity when drawing 2D controls, though you may find it convenient to do otherwise (for example, you can draw repeated controls with interleaved translation matrices).<br />
<br />
If exact pixelization is required, you might want to put a small translation in the ModelView matrix, as shown below:<br />
<br />
<pre> glMatrixMode (GL_MODELVIEW);<br />
glLoadIdentity ();<br />
glTranslatef (0.375, 0.375, 0.);</pre><br />
<br />
If you're drawing on top of a 3D-depth buffered image, you'll need to somehow disable depth testing while drawing your 2D geometry. You can do this by calling glDisable(GL_DEPTH_TEST) or glDepthFunc (GL_ALWAYS). Depending on your application, you might also simply clear the depth buffer before starting the 2D rendering. Finally, drawing all 2D geometry with a minimum Z coordinate is also a solution.<br />
<br />
After the 2D projection is established as above, you can render normal OpenGL primitives to the screen, specifying their coordinates with XY pixel addresses (using OpenGL-centric screen coordinates, with (0,0) in the lower left).<br />
<br />
===== How do I bypass OpenGL matrix transformations and send 2D coordinates directly for rasterization? =====<br />
<br />
There isn't a mode switch to disable OpenGL matrix transformations. However, if you set either or both matrices to the identity with a glLoadIdentity() call, typical OpenGL implementations are intelligent enough to know that an identity transformation is a no-op and will act accordingly.<br />
<br />
More detailed information on using OpenGL as a rasterization-only API is in the [http://www.3dgamedev.com/resources/openglfaq.txt OpenGL Game Developer’s FAQ].<br />
<br />
===== What are the pros and cons of using absolute versus relative coordinates? =====<br />
<br />
Some OpenGL applications may need to render the same object in multiple locations in a single scene. OpenGL lets you do this two ways:<br />
<br />
# Use “absolute coordinates”. Maintain multiple copies of each object, each with its own unique set of vertices. You don't need to change the ModelView matrix to render the object at the desired location.<br />
# Use “relative coordinates”. Keep only one copy of the object, and render it multiple times by pushing the ModelView matrix stack, setting the desired transform, sending the geometry, and popping the stack. Repeat these steps for each object.<br />
<br />
In general, frequent changes to state, such as to the ModelView matrix, can negatively impact your application’s performance. OpenGL can process your geometry faster if you don't wrap each individual primitive in a lot of changes to the ModelView matrix.<br />
<br />
However, sometimes you need to weigh this against the memory cost of replicating geometry. Say you define a doorknob with a fine tessellation of 200 or 300 triangles, and you're modeling a house with 50 doors, all of which have the same doorknob. It's probably preferable to use a single doorknob display list with multiple unique transform matrices, rather than keep 10,000-15,000 triangles of absolute-coordinate geometry in memory.<br />
<br />
As with many computing issues, it's a trade-off between processing time and memory that you'll need to make on a case-by-case basis.<br />
<br />
===== How can I draw more than one view of the same scene? =====<br />
<br />
You can draw two views into the same window by using the glViewport() call. Set glViewport() to the area that you want the first view, set your scene’s view, and render. Then set glViewport() to the area for the second view, again set your scene’s view, and render.<br />
<br />
You need to be aware that some operations don't pay attention to the glViewport, such as SwapBuffers and glClear(). SwapBuffers always swaps the entire window. However, you can restrict glClear() to a rectangular region by enabling the scissor test with glEnable(GL_SCISSOR_TEST) and specifying the rectangle with glScissor().<br />
<br />
Your application might only allow different views in separate windows. If so, you need to perform a MakeCurrent operation between the two renderings. If the two windows share a context, you need to change the scene’s view as described above. This might not be necessary if your application uses separate contexts for each window.<br />
<br />
===== How do I transform my objects around a fixed coordinate system rather than the object's local coordinate system? =====<br />
<br />
If you rotate an object around its Y-axis, you'll find that the X- and Z-axes rotate with the object. A subsequent rotation around one of these axes rotates around the newly transformed axis and not the original axis. It's often desirable to perform transformations in a fixed coordinate system rather than the object’s local coordinate system.<br />
<br />
The [http://www.3dgamedev.com/resources/openglfaq.txt OpenGL Game Developer’s FAQ] contains information on using quaternions to store rotations, which may be useful in solving this problem.<br />
<br />
The root cause of the problem is that OpenGL matrix operations postmultiply onto the matrix stack, causing transformations to occur in object space. To effect screen space transformations, you need to premultiply instead. OpenGL doesn't provide a mode switch for the order of matrix multiplication, so you need to premultiply by hand. An application might implement this by retrieving the current matrix after each frame: multiply the new transformations for the next frame onto an identity matrix, then multiply the accumulated current transformations (from the last frame) onto that result using glMultMatrix().<br />
<br />
You need to be aware that retrieving the ModelView matrix once per frame might have a detrimental impact on your application’s performance. However, you need to benchmark this operation, because the performance will vary from one implementation to the next.<br />
<br />
===== What are the pros and cons of using glFrustum() versus gluPerspective()? Why would I want to use one over the other? =====<br />
<br />
glFrustum() and gluPerspective() both produce perspective projection matrices that you can use to transform from eye coordinate space to clip coordinate space. The primary difference between the two is that glFrustum() is more general and allows off-axis projections, while gluPerspective() only produces symmetrical (on-axis) projections. Indeed, you can use glFrustum() to implement gluPerspective(). However, aside from the layering of function calls that is a natural part of the GLU interface, there is no performance advantage to using matrices generated by glFrustum() over gluPerspective().<br />
<br />
Since glFrustum() is more general than gluPerspective(), you can use it in cases when gluPerspective() can't be used. Some examples include [http://www.opengl.org/resources/faq/technical/lights.htm#ligh0130 projection shadows], tiled renderings, and stereo views.<br />
<br />
Tiled rendering uses multiple off-axis projections to render different sections of a scene. The results are assembled into one large image array to produce the final image. This is often necessary when the desired dimensions of the final rendering exceed the OpenGL implementation's maximum viewport size.<br />
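<br />
The subdivision can be sketched as follows (an illustrative helper, not from the FAQ): each tile's glFrustum() parameters are a linear slice of the full frustum's left/right/bottom/top extents:<br />

```c
#include <assert.h>

typedef struct { double left, right, bottom, top; } Frustum;

/* Return the off-axis glFrustum() extents for tile (col, row) of an
   nCols x nRows grid covering the full frustum; row 0 is the bottom row. */
static Frustum tileFrustum (Frustum full, int nCols, int nRows, int col, int row)
{
    double w = (full.right - full.left) / nCols;
    double h = (full.top - full.bottom) / nRows;
    Frustum t;
    t.left   = full.left   + w * col;
    t.right  = t.left + w;
    t.bottom = full.bottom + h * row;
    t.top    = t.bottom + h;
    return t;
}
```

Each tile is then rendered with glFrustum(t.left, t.right, t.bottom, t.top, zNear, zFar) and the resulting images are assembled.<br />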
<br />
In a stereo view, two renderings of the same scene are done with the view location slightly shifted. Since the view axis is right between the “eyes”, each view must use a slightly off-axis projection to either side to achieve correct visual results.<br />
<br />
===== How can I make a call to glFrustum() that matches my call to gluPerspective()? =====<br />
<br />
The field of view (fov) of your glFrustum() call is:<br />
<br />
fov*0.5 = arctan ((top-bottom)*0.5 / near)<br />
<br />
Since bottom == -top for the symmetrical projection that gluPerspective() produces, then:<br />
<br />
top = tan(fov*0.5) * near<br />
bottom = -top<br />
<br />
Note: fov must be in radians for the above formulae to work with the C math library. If you have computed your fov in degrees (as in the call to gluPerspective()), then calculate top as follows:<br />
<br />
top = tan(fov*3.14159/360.0) * near<br />
<br />
The left and right parameters are simply functions of the top, bottom, and aspect:<br />
<br />
left = aspect * bottom<br />
right = aspect * top<br />
<br />
The OpenGL Reference Manual ([http://www.opengl.org/resources/faq/technical/gettingstarted.htm#gett0004 where do I get this?]) shows the matrices produced by both functions.<br />
<br />
===== How do I draw a full-screen quad? =====<br />
<br />
This question usually means, "How do I draw a quad that fills the entire OpenGL viewport?" There are many ways to do this.<br />
<br />
The most straightforward method is to set the desired color, set both the Projection and ModelView matrices to the identity, and call glRectf() or draw an equivalent GL_QUADS primitive. Your rectangle or quad's Z value should be in the range of -1.0 to 1.0, with -1.0 mapping to the zNear clipping plane, and 1.0 to the zFar clipping plane.<br />
<br />
As an example, here's how to draw a full-screen quad at the zNear clipping plane:<br />
<br />
<pre> glMatrixMode (GL_MODELVIEW);<br />
glPushMatrix ();<br />
glLoadIdentity ();<br />
glMatrixMode (GL_PROJECTION);<br />
glPushMatrix ();<br />
glLoadIdentity ();<br />
glBegin (GL_QUADS);<br />
glVertex3i (-1, -1, -1);<br />
glVertex3i (1, -1, -1);<br />
glVertex3i (1, 1, -1);<br />
glVertex3i (-1, 1, -1);<br />
glEnd ();<br />
glPopMatrix ();<br />
glMatrixMode (GL_MODELVIEW);<br />
glPopMatrix ();</pre><br />
<br />
Your application might want the quad to have a maximum Z value, in which case 1 should be used for the Z value instead of -1.<br />
<br />
When painting a full-screen quad, it might be useful to mask off some buffers so that only specified buffers are touched. For example, you might mask off the color buffer and set the depth function to GL_ALWAYS, so only the depth buffer is painted. Also, you can set masks to allow the stencil buffer to be set or any combination of buffers.<br />
<br />
===== How can I find the screen coordinates for a given object-space coordinate? =====<br />
<br />
You can use the GLU library gluProject() utility routine if you only need the screen coordinates for a few vertices. For a large number of coordinates, it can be more efficient to use the Feedback mechanism.<br />
<br />
To use gluProject(), you'll need to provide the ModelView matrix, projection matrix, viewport, and input object space coordinates. Screen space coordinates are returned for X, Y, and Z, with Z being normalized (0 <= Z <= 1).<br />
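<br />
For reference, the arithmetic gluProject() performs can be sketched as below, assuming column-major matrices in the layout glGetDoublev() returns; this simplified version omits the failure case gluProject() reports when W is zero:<br />

```c
#include <assert.h>

/* Column-major 4x4 matrix times vec4, as OpenGL stores matrices. */
static void xform (const double *m, const double *v, double *r)
{
    for (int i = 0; i < 4; i++)
        r[i] = m[i]*v[0] + m[4+i]*v[1] + m[8+i]*v[2] + m[12+i]*v[3];
}

/* Sketch of gluProject(): object coordinates -> window coordinates. */
static void project (double ox, double oy, double oz,
                     const double *modelview, const double *projection,
                     const int *viewport, double *wx, double *wy, double *wz)
{
    double in[4] = { ox, oy, oz, 1.0 }, eye[4], clip[4];
    xform (modelview, in, eye);       /* object -> eye coordinates */
    xform (projection, eye, clip);    /* eye -> clip coordinates */
    clip[0] /= clip[3];  clip[1] /= clip[3];  clip[2] /= clip[3];  /* NDC */
    *wx = viewport[0] + (clip[0] + 1.0) * 0.5 * viewport[2];
    *wy = viewport[1] + (clip[1] + 1.0) * 0.5 * viewport[3];
    *wz = (clip[2] + 1.0) * 0.5;      /* normalized: 0 <= wz <= 1 */
}
```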
<br />
===== How can I find the object-space coordinates for a pixel on the screen? =====<br />
<br />
The GLU library provides the gluUnProject() function for this purpose.<br />
<br />
You'll need to read the depth buffer to obtain the input screen coordinate Z value at the X,Y location of interest. This can be coded as follows:<br />
<br />
<pre> GLfloat z;<br />
 glReadPixels (x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &z);</pre><br />
<br />
Note that x and y are OpenGL-centric with (0,0) in the lower-left corner.<br />
<br />
You'll need to provide the screen space X, Y, and Z values as input to gluUnProject() with the ModelView matrix, Projection matrix, and viewport that were current at the time the specific pixel of interest was rendered.<br />
<br />
===== How do I find the coordinates of a vertex transformed only by the ModelView matrix? =====<br />
<br />
It's often useful to obtain the eye coordinate space value of a vertex (i.e., the object space vertex transformed by the ModelView matrix). You can obtain this by retrieving the current ModelView matrix and performing simple vector / matrix multiplication.<br />
<br />
===== How do I calculate the object-space distance from the viewer to a given point? =====<br />
<br />
Transform the point into eye-coordinate space by multiplying it by the ModelView matrix. Then simply calculate its distance from the origin. (If this doesn't work, you may have incorrectly placed the view transform on the Projection matrix stack.)<br />
<br />
To retrieve the current Modelview matrix:<br />
<br />
<pre> GLfloat m[16];<br />
glGetFloatv (GL_MODELVIEW_MATRIX, m);</pre><br />
<br />
As with any OpenGL call, you must have a context current with a window or drawable in order for glGet*() function calls to work.<br />
<br />
===== How do I keep my aspect ratio correct after a window resize? =====<br />
<br />
It depends on how you are setting your projection matrix. In any case, you'll need to know the new dimensions (width and height) of your window. How to obtain these depends on which platform you're using. In GLUT, for example, the dimensions are passed as parameters to the reshape function callback.<br />
<br />
The following assumes you're maintaining a viewport that's the same size as your window. If you are not, substitute viewportWidth and viewportHeight for windowWidth and windowHeight.<br />
<br />
If you're using gluPerspective() to set your Projection matrix, the second parameter controls the aspect ratio. When your program catches a window resize, you'll need to change your Projection matrix as follows:<br />
<br />
<pre> glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
gluPerspective(fov, (float)windowWidth/(float)windowHeight, zNear, zFar);</pre><br />
<br />
If you're using glFrustum(), the aspect ratio is the ratio of the view volume's width to its height. To preserve it, the horizontal extents must scale with the window's aspect ratio, as in the following window resize code:<br />
<br />
<pre> float aspect = (float)windowWidth/(float)windowHeight;<br />
 float halfHeight = (top-bottom)*0.5f;<br />
<br />
 glMatrixMode(GL_PROJECTION);<br />
 glLoadIdentity();<br />
 /* cx is the eye space center of the zNear plane in X */<br />
 glFrustum(cx-halfHeight*aspect, cx+halfHeight*aspect, bottom, top, zNear, zFar);</pre><br />
<br />
glOrtho() and gluOrtho2D() are similar to glFrustum().<br />
<br />
===== Can I make OpenGL use a left-handed coordinate space? =====<br />
<br />
OpenGL doesn't have a mode switch to change from right- to left-handed coordinates. However, you can easily obtain a left-handed coordinate system by multiplying a negative Z scale onto the ModelView matrix. For example:<br />
<br />
<pre> glMatrixMode (GL_MODELVIEW);<br />
glLoadIdentity ();<br />
glScalef (1., 1., -1.);<br />
/* multiply view transforms as usual... */<br />
/* multiply model transforms as usual... */</pre><br />
<br />
===== How can I transform an object so that it points at or follows another object or point in my scene? =====<br />
<br />
You need to construct a matrix that transforms from your object's local coordinate system into a coordinate system that faces in the desired direction. See this [http://www.opengl.org/resources/faq/technical/lookat.cpp example code] to see how this type of matrix is created.<br />
<br />
If you merely want to render an object so that it always faces the viewer, you might consider simply rendering it in eye-coordinate space with the ModelView matrix set to the identity.<br />
<br />
===== How can I transform an object with a given yaw, pitch, and roll? =====<br />
<br />
The upper left 3x3 portion of a transformation matrix is composed of the new X, Y, and Z axes of the post-transformation coordinate space.<br />
<br />
If the new transform is a roll, compute new local Y and X axes by rotating them "roll" degrees around the local Z axis. Do similar calculations if the transform is a pitch or yaw. Then simply construct your transformation matrix by inserting the new local X, Y, and Z axes into the upper left 3x3 portion of an identity matrix. This matrix can be passed as a parameter to glMultMatrix().<br />
<br />
Further rotations should be computed around the new local axes. This will inevitably require rotation about an arbitrary axis, which can be confusing to inexperienced 3D programmers. This is a [http://www.opengl.org/resources/faq/technical/transformations.htm#tran0001 basic concept in linear algebra].<br />
<br />
Many programmers apply all three transformations -- yaw, pitch, and roll -- as successive glRotate() calls about the X, Y, and Z axes. This approach is order-dependent and suffers from gimbal lock: at certain orientations two of the rotation axes align, and a degree of rotational freedom is lost.<br />
<br />
===== How do I render a mirror? =====<br />
<br />
Render your scene twice, once as it is reflected in the mirror, then once from the normal (non-reflected) view. [http://www.opengl.org/resources/faq/technical/mirror.c Example code] demonstrates this technique.<br />
<br />
For axis-aligned mirrors, such as a mirror on the YZ plane, the reflected scene can be rendered with a simple scale and translate. Scale by -1.0 in the axis corresponding to the mirror's normal, and translate by twice the mirror's distance from the origin. Rendering the scene with these transforms in place will yield the scene reflected in the mirror. Use the matrix stack to restore the view transform to its previous value.<br />
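<br />
For example, for a mirror in the plane x = d (a YZ-plane mirror offset along X), the scale-and-translate above reflects any point to (2d - x, y, z). A minimal sketch:<br />

```c
#include <assert.h>

typedef struct { float x, y, z; } Vec3;

/* Reflect a point across the plane x = d: scale by -1 along the
   mirror's normal (X), then translate by twice the mirror's
   distance from the origin. */
static Vec3 reflectAcrossX (Vec3 p, float d)
{
    Vec3 r = { 2.f * d - p.x, p.y, p.z };
    return r;
}
```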
<br />
Next, clear the depth buffer with a call to glClear(GL_DEPTH_BUFFER_BIT). Then render the mirror. For a perfectly reflecting mirror, render into the depth buffer only. Real mirrors are not perfect reflectors, as they absorb some light. To create this effect, use blending to render a black mirror with an alpha of 0.05. glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA) is a good blending function for this purpose.<br />
<br />
Finally, render the non-reflected scene. Since the entire reflected scene exists in the color buffer, and not just the portion of the reflected scene in the mirror, you will need to touch all pixels to overwrite areas of the reflected scene that should not be visible.<br />
<br />
===== How can I do my own perspective scaling? =====<br />
<br />
OpenGL multiplies your coordinates by the ModelView matrix, then by the Projection matrix to get clip coordinates. It then performs the perspective divide to obtain normalized device coordinates. It's the perspective division step that creates a perspective rendering, with geometry in the distance appearing smaller than the geometry in the foreground. The perspective division stage is accomplished by dividing your XYZ clipping coordinate values by the clipping coordinate W value, such as:<br />
<br />
Xndc = Xcc/Wcc<br />
Yndc = Ycc/Wcc<br />
Zndc = Zcc/Wcc<br />
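<br />
The divide can be written directly. A minimal sketch, with clip coordinates in and normalized device coordinates out:<br />

```c
#include <assert.h>

/* Perspective divide: clip coordinates (x, y, z, w) to
   normalized device coordinates (x/w, y/w, z/w). */
static void perspectiveDivide (const float clip[4], float ndc[3])
{
    ndc[0] = clip[0] / clip[3];
    ndc[1] = clip[1] / clip[3];
    ndc[2] = clip[2] / clip[3];
}
```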
<br />
To do your own perspective scaling, you need to obtain the clipping coordinate W value. The feedback buffer provides homogeneous coordinates with XYZ in device coordinates and W in clip coordinates. You might also call glGetFloatv (GL_CURRENT_RASTER_POSITION, …); the returned W value is again in clipping coordinates, while XYZ are in device coordinates.</div>Tomhttps://www.khronos.org/opengl/wiki_opengl/index.php?title=OpenGL_Extension&diff=1294OpenGL Extension2005-11-14T22:56:45Z<p>Tom: Added some examples to each category</p>
<hr />
<div>=== Introduction to the extension mechanism ===<br />
<br />
=== Vertex submission extensions ===<br />
<br />
* [[GL_ARB_vertex_buffer_object]]<br />
* [[GL_NV_vertex_array_range]]<br />
* [[GL_EXT_compiled_vertex_array]]<br />
<br />
=== Texturing related extensions ===<br />
<br />
* [[GL_ARB_texture_env_combine]]<br />
* [[GL_ARB_texture_compression]]<br />
<br />
=== Programmability extensions ===<br />
<br />
* [[GL_ARB_vertex_program]]<br />
* [[GL_ARB_fragment_program]]<br />
<br />
=== Framebuffer related extensions ===<br />
<br />
* [[GL_ARB_draw_buffers]]<br />
* [[GL_EXT_framebuffer_object]]</div>Tomhttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Main_Page&diff=1293Main Page2005-11-09T14:15:18Z<p>Tom: </p>
<hr />
<div>=== About this Wiki ===<br />
<br />
This Wiki is an attempt to collect answers to frequently asked questions on the OpenGL.org forums. The hope is that by using a Wiki rather than a classic FAQ page, the information contained here will be kept relevant and up to date.<br />
<br />
=== [[Getting started]] ===<br />
<br />
Discusses the things you need to know before you can get started with OpenGL. This includes how to set up OpenGL runtime libraries on your system, as well as information on setting up your development environment.<br />
<br />
=== [[General OpenGL]] ===<br />
<br />
Explains the basics of the OpenGL API and answers the most frequently asked questions about it.<br />
<br />
=== [[OpenGL extensions]] ===<br />
<br />
Introduces OpenGL's extension mechanism, and elaborates on the many extensions that are available.<br />
<br />
=== [[Shading languages]] ===<br />
<br />
Discusses the shading languages available for programmable vertex and fragment processing in OpenGL.<br />
<br />
=== [[Performance]] ===<br />
<br />
Offers various performance guidelines for OpenGL applications.<br />
<br />
=== [[Math and algorithms]] ===<br />
<br />
Offers API-agnostic discussion of 3D application design, rendering techniques, 3D maths, and other topics related to computer graphics.<br />
<br />
=== [[Platform specifics]] ===<br />
<br />
Focuses on OS-dependent issues that OpenGL applications may bump into.<br />
<br />
=== [[Hardware specifics]] ===<br />
<br />
Discusses the peculiarities of the different video cards and drivers that are out there.<br />
<br />
=== [[Related toolkits and APIs]] ===<br />
<br />
Provides an overview of various OpenGL toolkits (GLU, Glut, extension loading libraries, ...), higher-level APIs and other utility libraries.<br />
<br />
=== [[History of OpenGL]] ===<br />
<br />
TBD<br />
<br />
=== [[Glossary]] ===</div>Tomhttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Drawing_Lines_over_Polygons&diff=1291Drawing Lines over Polygons2005-11-08T19:40:38Z<p>Tom: </p>
<hr />
<div>===== What are the basics for using polygon offset? =====<br />
<br />
It's difficult to render coplanar primitives in OpenGL for two reasons:<br />
<br />
* Given two overlapping coplanar primitives with different vertices, floating point round-off errors from the two polygons can generate different depth values for overlapping pixels. With depth test enabled, some of the second polygon's pixels will pass the depth test, while some will fail.<br />
* For coplanar lines and polygons, vastly different depth values for common pixels can result. This is because depth values from polygon rasterization derive from the polygon's plane equation, while depth values from line rasterization derive from linear interpolation.<br />
<br />
Setting the depth function to GL_LEQUAL or GL_EQUAL won't resolve the problem. The visual result is referred to as stitching, bleeding, or Z fighting.<br />
<br />
Polygon offset was an extension to OpenGL 1.0, and is now incorporated into OpenGL 1.1. It allows an application to define a depth offset, which can apply to filled primitives, and under OpenGL 1.1, it can be separately enabled or disabled depending on whether the primitives are rendered in fill, line, or point mode. Thus, an application can render coplanar primitives by first rendering one primitive, then by applying an offset and rendering the second primitive.<br />
<br />
While polygon offset can alter the depth value of filled primitives in point and line mode, under no circumstances will polygon offset affect the depth values of GL_POINTS, GL_LINES, GL_LINE_STRIP, or GL_LINE_LOOP primitives. If you are trying to render point or line primitives over filled primitives, use polygon offset to push the filled primitives back. (It can't be used to pull the point and line primitives forward.)<br />
<br />
Because polygon offset alters the correct Z value calculated during rasterization, the resulting Z value stored in the depth buffer contains this offset and can adversely affect the resulting image. In many circumstances, undesirable "bleed-through" effects can result. Indeed, polygon offset may cause some primitives to pass the depth test entirely when they normally would not, or vice versa. When models intersect, polygon offset can cause an inaccurate rendering of the intersection point.<br />
<br />
===== What are the two parameters in a glPolygonOffset() call and what do they mean? =====<br />
<br />
Polygon offset allows the application to specify a depth offset with two parameters, factor and units. factor scales the maximum Z slope, with respect to X or Y of the polygon, and units scales the minimum resolvable depth buffer value. The results are summed to produce the depth offset. This offset is applied in screen space, typically with positive Z pointing into the screen.<br />
<br />
The factor parameter is required to ensure correct results for filled primitives that are nearly edge-on to the viewer. In this case, the difference between Z values for the same pixel generated by two coplanar primitives can be as great as the maximum Z slope in X or Y. This Z slope will be large for nearly edge-on primitives, and almost non-existent for face-on primitives. The factor parameter lets you add this type of variable difference into the resulting depth offset.<br />
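<br />
The resulting offset is m * factor + r * units, where m is the polygon's maximum depth slope and r is the implementation's minimum resolvable depth difference. A sketch of the sum (here m and r are plain inputs; in a real implementation r is an implementation constant):<br />

```c
#include <assert.h>
#include <math.h>

/* Depth offset applied by glPolygonOffset(factor, units):
   m = maximum depth slope of the polygon,
   r = smallest resolvable depth buffer difference. */
static float polygonOffsetDepth (float factor, float units, float m, float r)
{
    return m * factor + r * units;
}
```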
<br />
A typical use might be to set factor and units to 1.0 to offset primitives into positive Z (into the screen) and enable polygon offset for fill mode. Two passes are then made, once with the model's solid geometry and once again with the line geometry. Nearly edge-on filled polygons are pushed substantially away from the eyepoint, to minimize interference with the line geometry, while nearly face-on polygons are drawn at least one depth buffer unit behind the line geometry.<br />
<br />
===== What's the difference between the OpenGL 1.0 polygon offset extension and OpenGL 1.1 (and later) polygon offset interfaces? =====<br />
<br />
The 1.0 polygon offset extension didn't let you apply the offset to filled primitives in line or point mode. Only filled primitives in fill mode could be offset.<br />
<br />
In the 1.0 extension, a bias parameter was added to the normalized (0.0 - 1.0) depth value, in place of the 1.1 units parameter. Typical applications might obtain a good offset by specifying a bias of 0.001.<br />
<br />
See the [http://www.opengl.org/resources/faq/technical/pgonoff.c GLUT example], which renders two cylinders, one using the 1.0 polygon offset extension and the other using the 1.1 polygon offset interface.<br />
<br />
===== Why doesn't polygon offset work when I draw line primitives over filled primitives? =====<br />
<br />
Polygon offset, as its name implies, only works with polygonal primitives. It affects only the filled primitives: GL_TRIANGLES, GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN, GL_QUADS, GL_QUAD_STRIP, and GL_POLYGON. Polygon offset will work when you render them with glPolygonMode set to GL_FILL, GL_LINE, or GL_POINT.<br />
<br />
Polygon offset doesn't affect non-polygonal primitives. The GL_POINTS, GL_LINES, GL_LINE_STRIP, and GL_LINE_LOOP primitives can't be offset with glPolygonOffset().<br />
<br />
===== What other options do I have for drawing coplanar primitives when I don't want to use polygon offset? =====<br />
<br />
You can simulate the effects of polygon offset by tinkering with glDepthRange(). For example, you might code the following:<br />
<br />
<pre> glDepthRange (0.1, 1.0);<br />
/* Draw underlying geometry */<br />
glDepthRange (0.0, 0.9);<br />
/* Draw overlying geometry */</pre><br />
<br />
This code provides a fixed offset in Z, but doesn't account for the polygon slope. It's roughly equivalent to using glPolygonOffset with a factor parameter of 0.0.<br />
<br />
You can render coplanar primitives with the Stencil buffer in many creative ways. The OpenGL Programming Guide outlines one well-known method. The algorithm for drawing a polygon and its outline is as follows:<br />
<br />
# Draw the outline into the color, depth, and stencil buffers.<br />
# Draw the filled primitive into the color buffer and depth buffer, but only where the stencil buffer is clear.<br />
# Mask off the color and depth buffers, and render the outline to clear the stencil buffer.<br />
<br />
On some SGI OpenGL platforms, an application can use the SGIX_reference_plane extension. With this extension, the user specifies a plane equation in object coordinates corresponding to a set of coplanar primitives. You can enable or disable the plane. When the plane is enabled, all fragment Z values will derive from the specified plane equation. Thus, for any given fragment XY location, the depth value is guaranteed to be identical regardless of which primitive rendered it.</div>Tomhttps://www.khronos.org/opengl/wiki_opengl/index.php?title=FAQ/Color&diff=1290FAQ/Color2005-11-08T19:36:01Z<p>Tom: </p>
<hr />
<div>===== My texture map colors reverse blue and red, yellow and cyan, etc. What's happening? =====<br />
<br />
Your texture image has the reverse byte ordering from what OpenGL is expecting. One way to handle this is to swap the red and blue bytes within your code before passing the data to OpenGL.<br />
<br />
Under OpenGL 1.2, you may specify GL_BGR or GL_BGRA as the "format" parameter to glDrawPixels(), glGetTexImage(), glReadPixels(), glTexImage1D(), glTexImage2D(), and glTexImage3D(). In previous versions of OpenGL, this functionality might be available in the form of the EXT_bgra extension (using GL_BGR_EXT and GL_BGRA_EXT as the "format" parameter).<br />
<br />
===== How do I render a color index into an RGB window or vice versa? =====<br />
<br />
There isn't a way to do this. However, you might consider opening an RGB window with a color index overlay plane, if it works in your application.<br />
<br />
If you have an array of color indices that you want to use as a texture map, you might want to consider using GL_EXT_paletted_texture, which lets an application specify a color index texture map with a color palette.<br />
<br />
===== The colors are almost entirely missing when I render in Microsoft Windows. What's happening? =====<br />
<br />
The most probable cause is that the Windows display is set to 256 colors. To change it, you can increase the color depth by clicking the right mouse button on the desktop, then select Properties, the Settings tab, and change the number of colors in the Color Palette to a higher number.<br />
<br />
===== How do I specify an exact color for a primitive? =====<br />
<br />
First, you'll need to know the depth of the color buffer you are rendering to. For an RGB color buffer, you can obtain these values with the following code:<br />
<br />
<pre> GLint redBits, greenBits, blueBits;<br />
<br />
glGetIntegerv (GL_RED_BITS, &redBits);<br />
glGetIntegerv (GL_GREEN_BITS, &greenBits);<br />
glGetIntegerv (GL_BLUE_BITS, &blueBits);</pre><br />
<br />
If the depth value for each component is at least as large as your required color precision, you can specify an exact color for your primitives. Specify the color you want to use into the most significant bits of three unsigned integers and use glColor3ui() to specify the color.<br />
<br />
If your color buffer isn't deep enough to accurately represent the color you desire, you'll need a fallback strategy. Trimming off the least significant bits of each color component is an acceptable alternative. Again, use glColor3ui() (or glColor3us(), etc.) to specify the color with your values stored in the most significant bits of each parameter.<br />
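As a concrete sketch, the packing step might look like the following (assuming 32-bit unsigned parameters; packComponent is a hypothetical helper, not an OpenGL call):<br />

```c
#include <assert.h>

/* Hypothetical helper: place an n-bit component value into the most
 * significant bits of a 32-bit unsigned integer, which is what
 * glColor3ui() expects for each of its parameters. */
static unsigned int packComponent(unsigned int value, int bits)
{
    return value << (32 - bits);
}
```

With 8-bit component values you would then call glColor3ui(packComponent(r, 8), packComponent(g, 8), packComponent(b, 8)).<br />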
<br />
In either event, you'll need to ensure that any state that could affect the final color has been disabled. The following code will accomplish this:<br />
<br />
<pre> glDisable (GL_BLEND);<br />
glDisable (GL_DITHER);<br />
glDisable (GL_FOG);<br />
glDisable (GL_LIGHTING);<br />
glDisable (GL_TEXTURE_1D);<br />
glDisable (GL_TEXTURE_2D);<br />
glDisable (GL_TEXTURE_3D);<br />
glShadeModel (GL_FLAT);</pre><br />
<br />
===== How do I render each primitive in a unique color? =====<br />
<br />
You need to know the depth of each component in your color buffer. The previous question contains the code to obtain these values. The depth tells you the number of unique color values you can render. For example, if you use the code from the previous question, which retrieves the color depth in redBits, greenBits, and blueBits, the number of unique colors available is 2^(redBits+greenBits+blueBits).<br />
<br />
If this number is greater than the number of primitives you want to render, there is no problem. Use glColor3ui() (or glColor3us(), etc.) to specify each color, storing the desired color in the most significant bits of each parameter. You can code a loop to render each primitive in a unique color with the following:<br />
<br />
<pre> /*<br />
Given: numPrims is the number of primitives to render.<br />
Given: void renderPrimitive(unsigned long) renders the primitive specified by the given index.<br />
Given: GLuint makeMask(GLint) returns a bit mask with the specified number of bits set.<br />
*/<br />
<br />
GLuint redMask = makeMask(redBits) << (greenBits + blueBits);<br />
GLuint greenMask = makeMask(greenBits) << blueBits;<br />
GLuint blueMask = makeMask(blueBits);<br />
int redShift = 32 - (redBits+greenBits+blueBits);<br />
int greenShift = 32 - (greenBits+blueBits);<br />
int blueShift = 32 - blueBits;<br />
unsigned long indx;<br />
<br />
for (indx=0; indx<numPrims; indx++) {<br />
glColor3ui ((indx & redMask) << redShift,<br />
(indx & greenMask) << greenShift,<br />
(indx & blueMask) << blueShift);<br />
renderPrimitive (indx);<br />
}</pre><br />
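The snippet assumes a makeMask() routine. One possible implementation, together with a round-trip check of the encoding, is sketched below. The framebuffer is assumed to have 8 bits per component, and encodeIndex()/decodeColor() are illustrative helpers standing in for the glColor3ui() call and the framebuffer readback:<br />

```c
#include <assert.h>

/* One way to implement the makeMask() routine assumed above: a mask
 * with the 'bits' least significant bits set. */
static unsigned int makeMask(int bits)
{
    return (bits >= 32) ? 0xFFFFFFFFu : ((1u << bits) - 1u);
}

/* Compute the three values the loop passes to glColor3ui() for a given
 * primitive index (assuming redBits = greenBits = blueBits = 8). */
static void encodeIndex(unsigned int indx, unsigned int rgb[3])
{
    const int redBits = 8, greenBits = 8, blueBits = 8;
    unsigned int redMask   = makeMask(redBits) << (greenBits + blueBits);
    unsigned int greenMask = makeMask(greenBits) << blueBits;
    unsigned int blueMask  = makeMask(blueBits);
    int redShift   = 32 - (redBits + greenBits + blueBits);
    int greenShift = 32 - (greenBits + blueBits);
    int blueShift  = 32 - blueBits;

    rgb[0] = (indx & redMask)   << redShift;
    rgb[1] = (indx & greenMask) << greenShift;
    rgb[2] = (indx & blueMask)  << blueShift;
}

/* Recover the primitive index from the three encoded component words. */
static unsigned int decodeColor(const unsigned int rgb[3])
{
    const int redBits = 8, greenBits = 8, blueBits = 8;
    int redShift   = 32 - (redBits + greenBits + blueBits);
    int greenShift = 32 - (greenBits + blueBits);
    int blueShift  = 32 - blueBits;

    return (rgb[0] >> redShift) | (rgb[1] >> greenShift) | (rgb[2] >> blueShift);
}
```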
<br />
Also, make sure you disable any state that could alter the final color. See the question above for a code snippet to accomplish this.<br />
<br />
If you're using this for picking instead of the usual Selection feature, any color subsequently read back from the color buffer can easily be converted back to the indx value of the primitive rendered in that color.</div>Tomhttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Viewing_and_Transformations&diff=1289Viewing and Transformations2005-11-08T19:31:53Z<p>Tom: restored hyperlinks</p>
<hr />
<div>===== How does the camera work in OpenGL? =====<br />
<br />
As far as OpenGL is concerned, there is no camera. More specifically, the camera is always located at the eye space coordinate (0., 0., 0.). To give the appearance of moving the camera, your OpenGL application must move the scene with the inverse of the camera transformation.<br />
<br />
===== How can I move my eye, or camera, in my scene? =====<br />
<br />
OpenGL doesn't provide an interface to do this using a camera model. However, the GLU library provides the gluLookAt() function, which takes an eye position, a position to look at, and an up vector, all in object space coordinates. This function computes the inverse camera transform according to its parameters and multiplies it onto the current matrix stack.<br />
<br />
===== Where should my camera go, the ModelView or Projection matrix? =====<br />
<br />
The GL_PROJECTION matrix should contain only the projection transformation, which transforms eye space coordinates into clip coordinates.<br />
<br />
The GL_MODELVIEW matrix, as its name implies, should contain modeling and viewing transformations, which transform object space coordinates into eye space coordinates. Remember to place the camera transformations on the GL_MODELVIEW matrix and never on the GL_PROJECTION matrix.<br />
<br />
Think of the projection matrix as describing the attributes of your camera, such as field of view, focal length, fish eye lens, etc. Think of the ModelView matrix as where you stand with the camera and the direction you point it.<br />
<br />
The [http://www.3dgamedev.com/resources/openglfaq.txt game dev FAQ] has good information on these two matrices.<br />
<br />
Read Steve Baker's article on [http://web2.airmail.net/sjbaker1/projection_abuse.html projection abuse]. This article is highly recommended and well-written. It's helped several new OpenGL programmers.<br />
<br />
===== How do I implement a zoom operation? =====<br />
<br />
A simple method for zooming is to use a uniform scale on the ModelView matrix. However, this often results in clipping by the zNear and zFar clipping planes if the model is scaled too large.<br />
<br />
A better method is to restrict the width and height of the view volume in the Projection matrix.<br />
<br />
For example, your program might maintain a floating-point zoom factor based on user input. When set to a value of 1.0, no zooming takes place. Values smaller than 1.0 restrict the field of view and magnify the image (zoom in), while values larger than 1.0 widen the field of view (zoom out). Code to create this effect might look like:<br />
<br />
<pre> static float zoomFactor; /* Global, if you want. Modified by user input. Initially 1.0 */<br />
<br />
/* A routine for setting the projection matrix. May be called from a resize<br />
event handler in a typical application. Takes integer width and height <br />
dimensions of the drawing area. Creates a projection matrix with correct<br />
aspect ratio and zoom factor. */<br />
void setProjectionMatrix (int width, int height)<br />
{<br />
glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
gluPerspective (50.0*zoomFactor, (float)width/(float)height, zNear, zFar);<br />
/* ...Where 'zNear' and 'zFar' are up to you to fill in. */<br />
}</pre><br />
<br />
Instead of gluPerspective(), your application might use glFrustum(). This gets tricky, because the left, right, bottom, and top parameters, along with the zNear plane distance, also affect the field of view. Assuming you desire to keep a constant zNear plane distance (a reasonable assumption), glFrustum() code might look like this:<br />
<br />
<pre> glFrustum(left*zoomFactor, right*zoomFactor,<br />
bottom*zoomFactor, top*zoomFactor,<br />
zNear, zFar);</pre><br />
<br />
glOrtho() is similar.<br />
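For the glFrustum() and glOrtho() cases, the effect of zoomFactor can be captured in a small helper (zoomBounds is an illustrative name, not a GL routine); note that a factor below 1.0 narrows the view volume and therefore zooms in:<br />

```c
#include <assert.h>

/* Scale the horizontal and vertical bounds of a view volume by
 * zoomFactor, leaving zNear and zFar unchanged.  The scaled bounds are
 * what you would then pass to glFrustum() or glOrtho(). */
static void zoomBounds(double zoomFactor,
                       double *left, double *right,
                       double *bottom, double *top)
{
    *left   *= zoomFactor;
    *right  *= zoomFactor;
    *bottom *= zoomFactor;
    *top    *= zoomFactor;
}
```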
<br />
===== Given the current ModelView matrix, how can I determine the object-space location of the camera? =====<br />
<br />
The "camera" or viewpoint is at (0., 0., 0.) in eye space. When you turn this into a vector [0 0 0 1] and multiply it by the inverse of the ModelView matrix, the resulting vector is the object-space location of the camera.<br />
<br />
OpenGL doesn't let you inquire (through a glGet* routine) the inverse of the ModelView matrix. You'll need to compute the inverse with your own code.<br />
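A sketch of that computation, under the assumption that the ModelView matrix is a rigid transformation (rotation plus translation, no scaling or shearing): for M = [R | t], the inverse maps the eye-space origin (0, 0, 0, 1) to -R^T * t, so a full matrix inverse isn't needed.<br />

```c
#include <assert.h>

/* Recover the object-space camera position from a rigid ModelView
 * matrix 'm' stored in OpenGL column-major order, as filled in by
 * glGetDoublev(GL_MODELVIEW_MATRIX, m).  Computes -R^T * t. */
static void cameraPosition(const double m[16], double pos[3])
{
    /* Columns of R are m[0..2], m[4..6], m[8..10]; t is m[12..14].
     * Each component of R^T * t is a column of R dotted with t. */
    pos[0] = -(m[0] * m[12] + m[1] * m[13] + m[2]  * m[14]);
    pos[1] = -(m[4] * m[12] + m[5] * m[13] + m[6]  * m[14]);
    pos[2] = -(m[8] * m[12] + m[9] * m[13] + m[10] * m[14]);
}
```

A ModelView matrix with scale or shear requires a general 4x4 inverse instead.<br />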
<br />
===== How do I make the camera "orbit" around a point in my scene? =====<br />
<br />
You can simulate an orbit by translating/rotating the scene/object and leaving your camera in the same place. For example, to orbit an object placed somewhere on the Y axis, while continuously looking at the origin, you might do this:<br />
<br />
<pre> gluLookAt(camera[0], camera[1], camera[2], /* look from camera XYZ */<br />
0, 0, 0, /* look at the origin */<br />
0, 1, 0); /* positive Y up vector */<br />
glRotatef(orbitDegrees, 0.f, 1.f, 0.f);/* orbit the Y axis */<br />
/* ...where orbitDegrees is derived from mouse motion */<br />
<br />
glCallList(SCENE); /* draw the scene */</pre><br />
<br />
If you insist on physically orbiting the camera position, you'll need to transform the current camera position vector before using it in your viewing transformations. <br />
<br />
In either event, I recommend you investigate gluLookAt() (if you aren't using this routine already).<br />
<br />
===== How can I automatically calculate a view that displays my entire model? (I know the bounding sphere and up vector.) =====<br />
<br />
The following is from a posting by Dave Shreiner on setting up a basic viewing system:<br />
<br />
First, compute a bounding sphere for all objects in your scene. This should provide you with two bits of information: the center of the sphere (let ( c.x, c.y, c.z ) be that point) and its diameter (call it "diam").<br />
<br />
Next, choose a value for the zNear clipping plane. General guidelines are to choose something larger than, but close to 1.0. So, let's say you set<br />
<br />
<pre> zNear = 1.0;<br />
zFar = zNear + diam;</pre><br />
<br />
Structure your matrix calls in this order (for an Orthographic projection):<br />
<br />
<pre> GLdouble left = c.x - diam;<br />
GLdouble right = c.x + diam;<br />
GLdouble bottom = c.y - diam;<br />
GLdouble top = c.y + diam;<br />
<br />
glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
glOrtho(left, right, bottom, top, zNear, zFar);<br />
glMatrixMode(GL_MODELVIEW);<br />
glLoadIdentity();</pre><br />
<br />
This approach should center your objects in the middle of the window and stretch them to fit (i.e., it assumes a window with an aspect ratio of 1.0). If your window isn't square, compute left, right, bottom, and top, as above, and put in the following logic before the call to glOrtho():<br />
<br />
<pre> GLdouble aspect = (GLdouble) windowWidth / windowHeight;<br />
<br />
if ( aspect < 1.0 ) { // window taller than wide<br />
bottom /= aspect;<br />
top /= aspect;<br />
} else {<br />
left *= aspect;<br />
right *= aspect;<br />
}</pre><br />
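The steps above can be combined into one helper that computes the glOrtho() parameters from the bounding sphere and the window's aspect ratio (the struct and function names here are illustrative, not part of OpenGL):<br />

```c
#include <assert.h>

/* Parameters for a glOrtho() call that frames a bounding sphere with
 * center (cx, cy) and diameter 'diam', corrected for aspect ratio. */
typedef struct {
    double left, right, bottom, top, zNear, zFar;
} OrthoBounds;

static OrthoBounds computeOrthoBounds(double cx, double cy,
                                      double diam, double aspect)
{
    OrthoBounds b;
    b.left   = cx - diam;
    b.right  = cx + diam;
    b.bottom = cy - diam;
    b.top    = cy + diam;
    b.zNear  = 1.0;
    b.zFar   = b.zNear + diam;
    if (aspect < 1.0) {       /* window taller than wide */
        b.bottom /= aspect;
        b.top    /= aspect;
    } else {                  /* window wider than tall (or square) */
        b.left  *= aspect;
        b.right *= aspect;
    }
    return b;
}
```

The six fields map directly onto glOrtho(b.left, b.right, b.bottom, b.top, b.zNear, b.zFar).<br />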
<br />
The above code should position the objects in your scene appropriately. If you intend to manipulate the scene (e.g., rotate it), you'll need to add a viewing transform as well.<br />
<br />
A typical viewing transform will go on the ModelView matrix and might look like this:<br />
<br />
<pre> gluLookAt(0., 0., 2.*diam, c.x, c.y, c.z, 0.0, 1.0, 0.0);</pre><br />
<br />
===== Why doesn't gluLookAt work? =====<br />
<br />
This is usually caused by incorrect transformations.<br />
<br />
Assuming you are using gluPerspective() on the Projection matrix stack with zNear and zFar as the third and fourth parameters, you need to set gluLookAt on the ModelView matrix stack, and pass parameters so your geometry falls between zNear and zFar.<br />
<br />
It's usually best to experiment with a simple piece of code when you're trying to understand viewing transformations. Let's say you are trying to look at a unit sphere centered on the origin. You'll want to set up your transformations as follows:<br />
<br />
<pre> glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
gluPerspective(50.0, 1.0, 3.0, 7.0);<br />
glMatrixMode(GL_MODELVIEW);<br />
glLoadIdentity();<br />
gluLookAt(0.0, 0.0, 5.0,<br />
0.0, 0.0, 0.0,<br />
0.0, 1.0, 0.0);</pre><br />
<br />
It's important to note how the Projection and ModelView transforms work together.<br />
<br />
In this example, the Projection transform sets up a 50.0-degree field of view, with an aspect ratio of 1.0. The zNear clipping plane is 3.0 units in front of the eye, and the zFar clipping plane is 7.0 units in front of the eye. This leaves a Z volume distance of 4.0 units, ample room for a unit sphere.<br />
<br />
The ModelView transform sets the eye position at (0.0, 0.0, 5.0), and the look-at point is the origin in the center of our unit sphere. Note that the eye position is 5.0 units away from the look at point. This is important, because a distance of 5.0 units in front of the eye is in the middle of the Z volume that the Projection transform defines. If the gluLookAt() call had placed the eye at (0.0, 0.0, 1.0), it would produce a distance of 1.0 to the origin. This isn't long enough to include the sphere in the view volume, and it would be clipped by the zNear clipping plane.<br />
<br />
Similarly, if you place the eye at (0.0, 0.0, 10.0), the distance of 10.0 to the look at point will result in the unit sphere being 10.0 units away from the eye and far behind the zFar clipping plane placed at 7.0 units.<br />
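The reasoning in the last two paragraphs amounts to a simple interval test: the sphere is fully inside the depth range only when eyeDist - radius >= zNear and eyeDist + radius <= zFar. A sketch:<br />

```c
#include <assert.h>

/* Returns nonzero when a sphere of the given radius, centered
 * 'eyeDist' units in front of the eye, lies entirely between the
 * zNear and zFar clipping planes. */
static int sphereInDepthRange(double eyeDist, double radius,
                              double zNear, double zFar)
{
    return (eyeDist - radius >= zNear) && (eyeDist + radius <= zFar);
}
```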
<br />
If this has confused you, read up on transformations in the OpenGL red book or OpenGL Specification. After you understand object coordinate space, eye coordinate space, and clip coordinate space, the above should become clear. Also, experiment with small test programs. If you're having trouble getting the correct transforms in your main application project, it can be educational to write a small piece of code that tries to reproduce the problem with simpler geometry.<br />
<br />
===== How do I get a specified point (XYZ) to appear at the center of the scene? =====<br />
<br />
gluLookAt() is the easiest way to do this. Simply set the X, Y, and Z values of your point as the fourth, fifth, and sixth parameters to gluLookAt().<br />
<br />
===== I put my gluLookAt() call on my Projection matrix and now fog, lighting, and texture mapping don't work correctly. What happened? =====<br />
<br />
See the earlier question, "Where should my camera go, the ModelView or Projection matrix?", for an explanation of this problem.<br />
<br />
===== How can I create a stereo view? =====<br />
<br />
Paul Bourke has assembled information on stereo OpenGL viewing.<br />
* [http://www.swin.edu.au/astronomy/pbourke/opengl/stereogl/ 3D Stereo Rendering Using OpenGL]<br />
* [http://www.swin.edu.au/astronomy/pbourke/stereographics/stereorender/ Calculating Stereo Pairs]<br />
* [http://www.swin.edu.au/astronomy/pbourke/opengl/redblue/ Creating Anaglyphs using OpenGL]</div>
<hr />
<div>===== How does the camera work in OpenGL? =====<br />
<br />
As far as OpenGL is concerned, there is no camera. More specifically, the camera is always located at the eye space coordinate (0., 0., 0.). To give the appearance of moving the camera, your OpenGL application must move the scene with the inverse of the camera transformation.<br />
<br />
===== How can I move my eye, or camera, in my scene? =====<br />
<br />
OpenGL doesn't provide an interface to do this using a camera model. However, the GLU library provides the gluLookAt() function, which takes an eye position, a position to look at, and an up vector, all in object space coordinates. This function computes the inverse camera transform according to its parameters and multiplies it onto the current matrix stack.<br />
<br />
===== Where should my camera go, the ModelView or Projection matrix? =====<br />
<br />
The GL_PROJECTION matrix should contain only the projection transformation calls it needs to transform eye space coordinates into clip coordinates.<br />
<br />
The GL_MODELVIEW matrix, as its name implies, should contain modeling and viewing transformations, which transform object space coordinates into eye space coordinates. Remember to place the camera transformations on the GL_MODELVIEW matrix and never on the GL_PROJECTION matrix.<br />
<br />
Think of the projection matrix as describing the attributes of your camera, such as field of view, focal length, fish eye lens, etc. Think of the ModelView matrix as where you stand with the camera and the direction you point it.<br />
<br />
The game dev FAQ has good information on these two matrices.<br />
<br />
Read Steve Baker's article on projection abuse. This article is highly recommended and well-written. It's helped several new OpenGL programmers.<br />
<br />
===== How do I implement a zoom operation? =====<br />
<br />
A simple method for zooming is to use a uniform scale on the ModelView matrix. However, this often results in clipping by the zNear and zFar clipping planes if the model is scaled too large.<br />
<br />
A better method is to restrict the width and height of the view volume in the Projection matrix.<br />
<br />
For example, your program might maintain a zoom factor based on user input, which is a floating-point number. When set to a value of 1.0, no zooming takes place. Larger values result in greater zooming or a more restricted field of view, while smaller values cause the opposite to occur. Code to create this effect might look like:<br />
<br />
<pre> static float zoomFactor; /* Global, if you want. Modified by user input. Initially 1.0 */<br />
<br />
/* A routine for setting the projection matrix. May be called from a resize<br />
event handler in a typical application. Takes integer width and height <br />
dimensions of the drawing area. Creates a projection matrix with correct<br />
aspect ratio and zoom factor. */<br />
void setProjectionMatrix (int width, int height)<br />
{<br />
glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
gluPerspective (50.0*zoomFactor, (float)width/(float)height, zNear, zFar);<br />
/* ...Where 'zNear' and 'zFar' are up to you to fill in. */<br />
}</pre><br />
<br />
Instead of gluPerspective(), your application might use glFrustum(). This gets tricky, because the left, right, bottom, and top parameters, along with the zNear plane distance, also affect the field of view. Assuming you desire to keep a constant zNear plane distance (a reasonable assumption), glFrustum() code might look like this:<br />
<br />
<pre> glFrustum(left*zoomFactor, right*zoomFactor,<br />
bottom*zoomFactor, top*zoomFactor,<br />
zNear, zFar);</pre><br />
<br />
glOrtho() is similar.<br />
<br />
===== Given the current ModelView matrix, how can I determine the object-space location of the camera? =====<br />
<br />
The "camera" or viewpoint is at (0., 0., 0.) in eye space. When you turn this into a vector [0 0 0 1] and multiply it by the inverse of the ModelView matrix, the resulting vector is the object-space location of the camera.<br />
<br />
OpenGL doesn't let you inquire (through a glGet* routine) the inverse of the ModelView matrix. You'll need to compute the inverse with your own code.<br />
<br />
===== How do I make the camera "orbit" around a point in my scene? =====<br />
<br />
You can simulate an orbit by translating/rotating the scene/object and leaving your camera in the same place. For example, to orbit an object placed somewhere on the Y axis, while continuously looking at the origin, you might do this:<br />
<br />
<pre> gluLookAt(camera[0], camera[1], camera[2], /* look from camera XYZ */<br />
0, 0, 0, /* look at the origin */<br />
0, 1, 0); /* positive Y up vector */<br />
glRotatef(orbitDegrees, 0.f, 1.f, 0.f);/* orbit the Y axis */<br />
/* ...where orbitDegrees is derived from mouse motion */<br />
<br />
glCallList(SCENE); /* draw the scene */</pre><br />
<br />
If you insist on physically orbiting the camera position, you'll need to transform the current camera position vector before using it in your viewing transformations. <br />
<br />
In either event, I recommend you investigate gluLookAt() (if you aren't using this routine already).<br />
<br />
===== How can I automatically calculate a view that displays my entire model? (I know the bounding sphere and up vector.) =====<br />
<br />
The following is from a posting by Dave Shreiner on setting up a basic viewing system:<br />
<br />
First, compute a bounding sphere for all objects in your scene. This should provide you with two bits of information: the center of the sphere (let ( c.x, c.y, c.z ) be that point) and its diameter (call it "diam").<br />
<br />
Next, choose a value for the zNear clipping plane. General guidelines are to choose something larger than, but close to 1.0. So, let's say you set<br />
<br />
<pre> zNear = 1.0;<br />
zFar = zNear + diam;</pre><br />
<br />
Structure your matrix calls in this order (for an Orthographic projection):<br />
<br />
<pre> GLdouble left = c.x - diam;<br />
GLdouble right = c.x + diam;<br />
GLdouble bottom c.y - diam;<br />
GLdouble top = c.y + diam;<br />
<br />
glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
glOrtho(left, right, bottom, top, zNear, zFar);<br />
glMatrixMode(GL_MODELVIEW);<br />
glLoadIdentity();</pre><br />
<br />
This approach should center your objects in the middle of the window and stretch them to fit (i.e., its assuming that you're using a window with aspect ratio = 1.0). If your window isn't square, compute left, right, bottom, and top, as above, and put in the following logic before the call to glOrtho():<br />
<br />
<pre> GLdouble aspect = (GLdouble) windowWidth / windowHeight;<br />
<br />
if ( aspect < 1.0 ) { // window taller than wide<br />
bottom /= aspect;<br />
top /= aspect;<br />
} else {<br />
left *= aspect;<br />
right *= aspect;<br />
}</pre><br />
<br />
The above code should position the objects in your scene appropriately. If you intend to manipulate (i.e. rotate, etc.), you need to add a viewing transform to it.<br />
<br />
A typical viewing transform will go on the ModelView matrix and might look like this:<br />
<br />
<pre> gluLookAt(0., 0., 2.*diam, c.x, c.y, c.z, 0.0, 1.0, 0.0);</pre><br />
<br />
===== Why doesn't gluLookAt work? =====<br />
<br />
This is usually caused by incorrect transformations.<br />
<br />
Assuming you are using gluPerspective() on the Projection matrix stack with zNear and zFar as the third and fourth parameters, you need to set gluLookAt on the ModelView matrix stack, and pass parameters so your geometry falls between zNear and zFar.<br />
<br />
It's usually best to experiment with a simple piece of code when you're trying to understand viewing transformations. Let's say you are trying to look at a unit sphere centered on the origin. You'll want to set up your transformations as follows:<br />
<br />
<pre> glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
gluPerspective(50.0, 1.0, 3.0, 7.0);<br />
glMatrixMode(GL_MODELVIEW);<br />
glLoadIdentity();<br />
gluLookAt(0.0, 0.0, 5.0,<br />
0.0, 0.0, 0.0,<br />
0.0, 1.0, 0.0);</pre><br />
<br />
It's important to note how the Projection and ModelView transforms work together.<br />
<br />
In this example, the Projection transform sets up a 50.0-degree field of view, with an aspect ratio of 1.0. The zNear clipping plane is 3.0 units in front of the eye, and the zFar clipping plane is 7.0 units in front of the eye. This leaves a Z volume distance of 4.0 units, ample room for a unit sphere.<br />
<br />
The ModelView transform sets the eye position at (0.0, 0.0, 5.0), and the look-at point is the origin in the center of our unit sphere. Note that the eye position is 5.0 units away from the look at point. This is important, because a distance of 5.0 units in front of the eye is in the middle of the Z volume that the Projection transform defines. If the gluLookAt() call had placed the eye at (0.0, 0.0, 1.0), it would produce a distance of 1.0 to the origin. This isn't long enough to include the sphere in the view volume, and it would be clipped by the zNear clipping plane.<br />
<br />
Similarly, if you place the eye at (0.0, 0.0, 10.0), the distance of 10.0 to the look at point will result in the unit sphere being 10.0 units away from the eye and far behind the zFar clipping plane placed at 7.0 units.<br />
<br />
If this has confused you, read up on transformations in the OpenGL red book or OpenGL Specification. After you understand object coordinate space, eye coordinate space, and clip coordinate space, the above should become clear. Also, experiment with small test programs. If you're having trouble getting the correct transforms in your main application project, it can be educational to write a small piece of code that tries to reproduce the problem with simpler geometry.<br />
<br />
===== How do I get a specified point (XYZ) to appear at the center of the scene? =====<br />
<br />
gluLookAt() is the easiest way to do this. Simply set the X, Y, and Z values of your point as the fourth, fifth, and sixth parameters to gluLookAt().<br />
<br />
===== I put my gluLookAt() call on my Projection matrix and now fog, lighting, and texture mapping don't work correctly. What happened? =====<br />
<br />
Look at question 3 for an explanation of this problem.<br />
<br />
===== How can I create a stereo view? =====<br />
<br />
Paul Bourke has assembled information on stereo OpenGL viewing.<br />
* 3D Stereo Rendering Using OpenGL<br />
* Calculating Stereo Pairs<br />
* Creating Anaglyphs using OpenGL</div>Tomhttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Viewing_and_Transformations&diff=1287Viewing and Transformations2005-11-08T19:26:31Z<p>Tom: </p>
<hr />
<div>===== How does the camera work in OpenGL? =====<br />
<br />
As far as OpenGL is concerned, there is no camera. More specifically, the camera is always located at the eye space coordinate (0., 0., 0.). To give the appearance of moving the camera, your OpenGL application must move the scene with the inverse of the camera transformation.<br />
<br />
===== How can I move my eye, or camera, in my scene? =====<br />
<br />
OpenGL doesn't provide an interface to do this using a camera model. However, the GLU library provides the gluLookAt() function, which takes an eye position, a position to look at, and an up vector, all in object space coordinates. This function computes the inverse camera transform according to its parameters and multiplies it onto the current matrix stack.<br />
<br />
===== Where should my camera go, the ModelView or Projection matrix? =====<br />
<br />
The GL_PROJECTION matrix should contain only the projection transformation calls it needs to transform eye space coordinates into clip coordinates.<br />
<br />
The GL_MODELVIEW matrix, as its name implies, should contain modeling and viewing transformations, which transform object space coordinates into eye space coordinates. Remember to place the camera transformations on the GL_MODELVIEW matrix and never on the GL_PROJECTION matrix.<br />
<br />
Think of the projection matrix as describing the attributes of your camera, such as field of view, focal length, fish eye lens, etc. Think of the ModelView matrix as where you stand with the camera and the direction you point it.<br />
<br />
The game dev FAQ has good information on these two matrices.<br />
<br />
Read Steve Baker's article on projection abuse. This article is highly recommended and well-written. It's helped several new OpenGL programmers.<br />
<br />
===== How do I implement a zoom operation? =====<br />
<br />
A simple method for zooming is to use a uniform scale on the ModelView matrix. However, this often results in clipping by the zNear and zFar clipping planes if the model is scaled too large.<br />
<br />
A better method is to restrict the width and height of the view volume in the Projection matrix.<br />
<br />
For example, your program might maintain a zoom factor based on user input, which is a floating-point number. When set to a value of 1.0, no zooming takes place. Larger values result in greater zooming or a more restricted field of view, while smaller values cause the opposite to occur. Code to create this effect might look like:<br />
<br />
static float zoomFactor; /* Global, if you want. Modified by user input. Initially 1.0 */<br />
<br />
/* A routine for setting the projection matrix. May be called from a resize<br />
event handler in a typical application. Takes integer width and height <br />
dimensions of the drawing area. Creates a projection matrix with correct<br />
aspect ratio and zoom factor. */<br />
void setProjectionMatrix (int width, int height)<br />
{<br />
glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
gluPerspective (50.0*zoomFactor, (float)width/(float)height, zNear, zFar);<br />
/* ...Where 'zNear' and 'zFar' are up to you to fill in. */<br />
}<br />
<br />
Instead of gluPerspective(), your application might use glFrustum(). This gets tricky, because the left, right, bottom, and top parameters, along with the zNear plane distance, also affect the field of view. Assuming you desire to keep a constant zNear plane distance (a reasonable assumption), glFrustum() code might look like this:<br />
<br />
glFrustum(left*zoomFactor, right*zoomFactor,<br />
bottom*zoomFactor, top*zoomFactor,<br />
zNear, zFar);<br />
<br />
glOrtho() is similar.<br />
<br />
===== Given the current ModelView matrix, how can I determine the object-space location of the camera? =====<br />
<br />
The "camera" or viewpoint is at (0., 0., 0.) in eye space. When you turn this into a vector [0 0 0 1] and multiply it by the inverse of the ModelView matrix, the resulting vector is the object-space location of the camera.<br />
<br />
OpenGL doesn't let you inquire (through a glGet* routine) the inverse of the ModelView matrix. You'll need to compute the inverse with your own code.<br />
<br />
===== How do I make the camera "orbit" around a point in my scene? =====<br />
<br />
You can simulate an orbit by translating/rotating the scene/object and leaving your camera in the same place. For example, to orbit an object placed somewhere on the Y axis, while continuously looking at the origin, you might do this:<br />
<br />
gluLookAt(camera[0], camera[1], camera[2], /* look from camera XYZ */<br />
0, 0, 0, /* look at the origin */<br />
0, 1, 0); /* positive Y up vector */<br />
glRotatef(orbitDegrees, 0.f, 1.f, 0.f);/* orbit the Y axis */<br />
/* ...where orbitDegrees is derived from mouse motion */<br />
<br />
glCallList(SCENE); /* draw the scene */<br />
<br />
If you insist on physically orbiting the camera position, you'll need to transform the current camera position vector before using it in your viewing transformations. <br />
<br />
In either event, I recommend you investigate gluLookAt() (if you aren't using this routine already).<br />
<br />
===== How can I automatically calculate a view that displays my entire model? (I know the bounding sphere and up vector.) =====<br />
<br />
The following is from a posting by Dave Shreiner on setting up a basic viewing system:<br />
<br />
First, compute a bounding sphere for all objects in your scene. This should provide you with two bits of information: the center of the sphere (let ( c.x, c.y, c.z ) be that point) and its diameter (call it "diam").<br />
<br />
Next, choose a value for the zNear clipping plane. General guidelines are to choose something larger than, but close to 1.0. So, let's say you set<br />
<br />
zNear = 1.0;<br />
zFar = zNear + diam;<br />
<br />
Structure your matrix calls in this order (for an Orthographic projection):<br />
<br />
GLdouble left = c.x - diam;<br />
GLdouble right = c.x + diam;<br />
GLdouble bottom c.y - diam;<br />
GLdouble top = c.y + diam;<br />
<br />
glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
glOrtho(left, right, bottom, top, zNear, zFar);<br />
glMatrixMode(GL_MODELVIEW);<br />
glLoadIdentity();<br />
<br />
This approach should center your objects in the middle of the window and stretch them to fit (i.e., its assuming that you're using a window with aspect ratio = 1.0). If your window isn't square, compute left, right, bottom, and top, as above, and put in the following logic before the call to glOrtho():<br />
<br />
GLdouble aspect = (GLdouble) windowWidth / windowHeight;<br />
<br />
if ( aspect < 1.0 ) { // window taller than wide<br />
bottom /= aspect;<br />
top /= aspect;<br />
} else {<br />
left *= aspect;<br />
right *= aspect;<br />
}<br />
<br />
The above code should position the objects in your scene appropriately. If you intend to manipulate (i.e. rotate, etc.), you need to add a viewing transform to it.<br />
<br />
A typical viewing transform will go on the ModelView matrix and might look like this:<br />
<br />
gluLookAt(0., 0., 2.*diam, c.x, c.y, c.z, 0.0, 1.0, 0.0);<br />
<br />
===== Why doesn't gluLookAt work? =====<br />
<br />
This is usually caused by incorrect transformations.<br />
<br />
Assuming you are using gluPerspective() on the Projection matrix stack with zNear and zFar as the third and fourth parameters, you need to set gluLookAt on the ModelView matrix stack, and pass parameters so your geometry falls between zNear and zFar.<br />
<br />
It's usually best to experiment with a simple piece of code when you're trying to understand viewing transformations. Let's say you are trying to look at a unit sphere centered on the origin. You'll want to set up your transformations as follows:<br />
<br />
glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
gluPerspective(50.0, 1.0, 3.0, 7.0);<br />
glMatrixMode(GL_MODELVIEW);<br />
glLoadIdentity();<br />
gluLookAt(0.0, 0.0, 5.0,<br />
          0.0, 0.0, 0.0,<br />
          0.0, 1.0, 0.0);<br />
<br />
It's important to note how the Projection and ModelView transforms work together.<br />
<br />
In this example, the Projection transform sets up a 50.0-degree field of view, with an aspect ratio of 1.0. The zNear clipping plane is 3.0 units in front of the eye, and the zFar clipping plane is 7.0 units in front of the eye. This leaves a Z volume distance of 4.0 units, ample room for a unit sphere.<br />
<br />
The ModelView transform sets the eye position at (0.0, 0.0, 5.0), and the look-at point is the origin in the center of our unit sphere. Note that the eye position is 5.0 units away from the look-at point. This is important, because a distance of 5.0 units in front of the eye is in the middle of the Z volume that the Projection transform defines. If the gluLookAt() call had placed the eye at (0.0, 0.0, 1.0), the distance to the origin would be only 1.0 unit. This isn't enough to include the sphere in the view volume, so it would be clipped by the zNear clipping plane.<br />
<br />
Similarly, if you place the eye at (0.0, 0.0, 10.0), the distance of 10.0 to the look-at point will leave the unit sphere far behind the zFar clipping plane placed at 7.0 units.<br />
<br />
If this has confused you, read up on transformations in the OpenGL red book or OpenGL Specification. After you understand object coordinate space, eye coordinate space, and clip coordinate space, the above should become clear. Also, experiment with small test programs. If you're having trouble getting the correct transforms in your main application project, it can be educational to write a small piece of code that tries to reproduce the problem with simpler geometry.<br />
<br />
===== How do I get a specified point (XYZ) to appear at the center of the scene? =====<br />
<br />
gluLookAt() is the easiest way to do this. Simply set the X, Y, and Z values of your point as the fourth, fifth, and sixth parameters to gluLookAt().<br />
<br />
===== I put my gluLookAt() call on my Projection matrix and now fog, lighting, and texture mapping don't work correctly. What happened? =====<br />
<br />
Look at question 3 for an explanation of this problem.<br />
<br />
===== How can I create a stereo view? =====<br />
<br />
Paul Bourke has assembled information on stereo OpenGL viewing.<br />
* 3D Stereo Rendering Using OpenGL<br />
* Calculating Stereo Pairs<br />
* Creating Anaglyphs using OpenGL</div>Tomhttps://www.khronos.org/opengl/wiki_opengl/index.php?title=General_OpenGL&diff=1286General OpenGL2005-11-08T19:16:20Z<p>Tom: </p>
<hr />
<div>This section explains the basics of the OpenGL API and answers some of the most frequently asked questions about it.<br />
<br />
* [[General OpenGL: Using Viewing and Camera Transforms, and gluLookAt()]]<br />
* [[General OpenGL: Transformations]]<br />
* [[General OpenGL: Clipping, Culling, and Visibility Testing]]<br />
* [[General OpenGL: Color]]<br />
* [[General OpenGL: The Depth Buffer]]<br />
* [[General OpenGL: Texture Mapping]]<br />
* [[General OpenGL: Drawing Lines over Polygons and Using Polygon Offset]]<br />
* [[General OpenGL: Rasterization and Operations on the Framebuffer]]<br />
* [[General OpenGL: Transparency, Translucency, and Using Blending]]<br />
* [[General OpenGL: Display Lists and Vertex Arrays]]<br />
* [[General OpenGL: Using Fonts]]<br />
* [[General OpenGL: Lights and Shadows]]<br />
* [[General OpenGL: Curves, Surfaces, and Using Evaluators]]<br />
* [[General OpenGL: Picking and Using Selection]]</div>Tomhttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Hardware_Specific&diff=1285Hardware Specific2005-11-08T19:07:11Z<p>Tom: </p>
<hr />
<div>This section describes all sorts of hardware- or driver-specific issues that OpenGL developers may run into. The section is further split up per graphics card manufacturer:<br />
<br />
* [[Hardware specifics: 3Dlabs]]<br />
* [[Hardware specifics: ATI]]<br />
* [[Hardware specifics: Intel]]<br />
* [[Hardware specifics: NVidia]]</div>Tomhttps://www.khronos.org/opengl/wiki_opengl/index.php?title=GL_ARB_vertex_buffer_object&diff=1284GL ARB vertex buffer object2005-11-08T16:26:10Z<p>Tom: /* How? */</p>
<hr />
<div>===== What? =====<br />
<br />
This extension provides a mechanism for storing vertex array data in "fast" memory (video or AGP), thereby allowing for significant increases in vertex throughput between the application and the GPU. Similar functionality has long been exposed by the GL_NV_vertex_array_range and GL_ATI_vertex_array_object extensions; GL_ARB_vertex_buffer_object supersedes these with a single vendor-independent mechanism.<br />
<br />
===== How? =====<br />
<br />
Chunks of vertex array data are encapsulated in vertex buffer objects (VBOs). A VBO is an opaque handle. Much like a texture object or display list, you can associate data with a VBO, but the actual storage location of the data is hidden from you. The API for creating a VBO is very similar to the texture object API: a VBO is created using glGenBuffers() and destroyed using glDeleteBuffers(). Before rendering from a VBO, you have to make it active using glBindBuffer().<br />
<br />
Data can be written to a VBO in two ways. The first is through the GL, using the glBufferData() or glBufferSubData() functions. These functions are conceptually similar to glTexImage*() and glTexSubImage*(). The second is to obtain a direct pointer to the VBO's storage and write the data there yourself. This technique is called "mapping a buffer". The application calls glMapBuffer() to obtain a pointer to the VBO, writes its data, and finishes by calling glUnmapBuffer(). When calling glMapBuffer(), you have to specify the access mode you want for the data: GL_READ_ONLY, GL_WRITE_ONLY or GL_READ_WRITE. The meanings of these are self-explanatory, but note that you should not attempt to read from a write-only mapping or vice versa.<br />
<br />
It's important to note that a VBO cannot be rendered from while it is mapped. This restriction exists so that drivers may be free to move the data around as they see fit in order to provide the best performance. This also means that mapping and unmapping the same buffer twice will not necessarily give you back the same pointer. You should never store the pointer to a buffer after you've unmapped that buffer, and you should never pass it to the GL (e.g. to glVertexPointer() or related functions).<br />
<br />
Also note that it's not possible to map a buffer until memory has been allocated for it. To do so, you have to use glBufferData() at least once. If you don't have any data available at buffer creation time, you can pass a null pointer to allocate uninitialized memory. You can then fill this memory later by mapping the buffer or by using glBufferSubData().<br />
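Putting these calls together, creating a VBO and filling it via mapping might look like the following sketch (error checking omitted; myVertexData and dataSize are placeholders for your own data):<br />

```c
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);

/* Allocate storage first; passing a null pointer leaves it uninitialized. */
glBufferData(GL_ARRAY_BUFFER, dataSize, NULL, GL_STATIC_DRAW);

/* Fill the buffer by mapping it. */
void *ptr = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
memcpy(ptr, myVertexData, dataSize);   /* requires <string.h> */
glUnmapBuffer(GL_ARRAY_BUFFER);
```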
<br />
When allocating VBOs, you have to specify your intended usage for them. The first thing you have to specify is what kind of data you will store in the buffer -- this can be either vertex data or element data (i.e. indices). You do this by setting the target parameter of glBufferData() to GL_ARRAY_BUFFER or GL_ELEMENT_ARRAY_BUFFER, respectively. The same target also has to be specified when binding or mapping the buffer. The second thing you have to specify is the access pattern you will use for your buffer.<br />
<br />
The access pattern is specified as a combination of two settings. The first is how often you intend to modify the data. The three possible settings are:<br />
* STATIC: You will specify the data only once, then use it many times without modifying it.<br />
* STREAM: You will modify the data once, then use it once, and repeat this process many times.<br />
* DYNAMIC: You will specify or modify the data repeatedly, and use it repeatedly after each time you do this.<br />
<br />
The second setting indicates what the source and destination of the data will be. There are again three possibilities:<br />
* DRAW: The data is generated by the application and passed to the GL for rendering.<br />
* COPY: The data is generated by the GL, and copied into the VBO to be used for rendering.<br />
* READ: The data is generated by the GL, and read back by the application. It is not used by the GL.<br />
<br />
The usage parameter of glBufferData() is a combination of these two settings, as shown in this table:<br />
<br />
{| border="1" cellpadding="2"<br />
! <br />
! DRAW<br />
! COPY<br />
! READ<br />
|- <br />
! STATIC<br />
| GL_STATIC_DRAW<br />
| GL_STATIC_COPY<br />
| GL_STATIC_READ<br />
|- <br />
! STREAM<br />
| GL_STREAM_DRAW<br />
| GL_STREAM_COPY<br />
| GL_STREAM_READ<br />
|- <br />
! DYNAMIC<br />
| GL_DYNAMIC_DRAW<br />
| GL_DYNAMIC_COPY<br />
| GL_DYNAMIC_READ<br />
|}<br />
<br />
Rendering from a VBO is like rendering from a normal vertex array, and can be done using any of the relevant OpenGL functions such as glDrawArrays(), glDrawElements() or glDrawRangeElements(). There are however two important steps to take. The first is that you have to make your VBO active by calling glBindBuffer(), taking care to bind the buffer to the correct target. The second is that you have to convert your vertex array pointers (glVertexPointer() and related) from direct pointers (which you don't have in the case of a VBO) to pointers that are relative to the start of your VBO. In other words, a pointer to the first byte in the VBO would now become a null pointer.<br />
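As a sketch of those two steps, assuming vertex positions stored at the start of one buffer and indices in another (vertexVbo, indexVbo and indexCount are placeholder names):<br />

```c
/* Step 1: make the buffers active on their respective targets. */
glBindBuffer(GL_ARRAY_BUFFER, vertexVbo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVbo);

/* Step 2: array "pointers" are now byte offsets into the bound buffer.
 * The positions start at byte 0, so pass a null pointer. */
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (void *) 0);

glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, (void *) 0);
```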
<br />
===== Why? =====<br />
<br />
As mentioned in the introduction, GL_ARB_vertex_buffer_object offers a vendor-independent alternative to GL_NV_vertex_array_range and GL_ATI_vertex_array_object. These extensions exist because the standard OpenGL vertex array functionality requires vertex array data to reside in system memory. This makes it hard to obtain good vertex throughput from the application to the GPU. By enabling the application to store data directly in graphics memory (video or AGP), the GPU can get much faster access to it. Display lists can serve the same purpose, but the compilation step may take too much time for applications that deal with dynamic data, and the memory overhead is unknown to the developer.<br />
<br />
===== Examples =====<br />
<br />
===== Tips and tricks =====<br />
<br />
* Mapping a buffer may cause your data to be lost: if glUnmapBuffer() returns GL_FALSE, the buffer's contents have become corrupted (e.g. due to a screen-mode switch) and must be resubmitted.<br />
* Array pointers are only treated as offsets if a buffer object is bound. Binding 0 returns to normal behaviour (i.e. direct pointers).<br />
* Multiple buffers can be mapped simultaneously.<br />
* Not all vertex attributes need to come from the same buffer object; each array pointer can reference a different VBO.<br />
<br />
===== References =====<br />
<br />
* [http://oss.sgi.com/projects/ogl-sample/registry/ARB/vertex_buffer_object.txt GL_ARB_vertex_buffer_object specification]<br />
* [http://developer.nvidia.com/docs/IO/4449/SUPP/GDC2003_OGL_BufferObjects.pdf Slides from a GDC2003 presentation about buffer objects]<br />
* [http://oss.sgi.com/projects/ogl-sample/registry/ATI/vertex_array_object.txt GL_ATI_vertex_array_object specification]<br />
* [http://oss.sgi.com/projects/ogl-sample/registry/NV/vertex_array_range.txt GL_NV_vertex_array_range specification]</div>Tomhttps://www.khronos.org/opengl/wiki_opengl/index.php?title=OpenGL_Extension&diff=1282OpenGL Extension2005-11-08T01:33:48Z<p>Tom: </p>
<hr />
<div>=== Introduction to the extension mechanism ===<br />
<br />
=== Vertex submission extensions ===<br />
<br />
* [[GL_ARB_vertex_buffer_object]]<br />
* [[GL_NV_vertex_array_range]]<br />
* [[GL_EXT_compiled_vertex_array]]<br />
<br />
=== Texturing related extensions ===<br />
<br />
=== Programmability extensions ===<br />
<br />
=== Framebuffer related extensions ===</div>Tomhttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Main_Page&diff=1281Main Page2005-11-08T01:25:16Z<p>Tom: </p>
<hr />
<div>=== About this Wiki ===<br />
<br />
This Wiki is an attempt to collect answers to frequently asked questions on the OpenGL.org forums. The hope is that by using a Wiki rather than a classic FAQ page, the information contained here will be kept relevant and up to date.<br />
<br />
=== [[Getting started]] ===<br />
<br />
Discusses the things you need to know before you can get started with OpenGL. This includes how to set up OpenGL runtime libraries on your system, as well as information on setting up your development environment.<br />
<br />
=== [[General OpenGL]] ===<br />
<br />
Explains the basics of the OpenGL API and answers the most frequently asked questions about it.<br />
<br />
=== [[OpenGL extensions]] ===<br />
<br />
Introduces OpenGL's extension mechanism, and elaborates on the many extensions that are available.<br />
<br />
=== [[Shading languages]] ===<br />
<br />
Discusses the shading languages available for programmable vertex and fragment processing in OpenGL.<br />
<br />
=== [[Performance]] ===<br />
<br />
Offers various performance guidelines for OpenGL applications.<br />
<br />
=== [[Math and algorithms]] ===<br />
<br />
Offers API-agnostic discussion of 3D application design, rendering techniques, 3D maths, and other topics related to computer graphics.<br />
<br />
=== [[Platform specifics]] ===<br />
<br />
Focuses on OS-dependent issues that OpenGL applications may bump into.<br />
<br />
=== [[Hardware specifics]] ===<br />
<br />
Discusses the peculiarities of the different video cards and drivers that are out there.<br />
<br />
=== [[Related toolkits and APIs]] ===<br />
<br />
Provides an overview of various OpenGL toolkits (GLU, GLUT, extension loading libraries, ...), higher-level APIs and other utility libraries.<br />
<br />
=== [[History of OpenGL]] ===<br />
<br />
TBD</div>Tomhttps://www.khronos.org/opengl/wiki_opengl/index.php?title=General_OpenGL&diff=1280General OpenGL2005-11-08T01:24:14Z<p>Tom: </p>
<hr />
<div>=== Using Viewing and Camera Transforms, and gluLookAt() ===<br />
<br />
=== Transformations ===<br />
<br />
=== Clipping, Culling, and Visibility Testing ===<br />
<br />
=== Color ===<br />
<br />
=== The Depth Buffer ===<br />
<br />
=== Drawing Lines over Polygons and Using Polygon Offset ===<br />
<br />
=== Rasterization and Operations on the Framebuffer ===<br />
<br />
=== Transparency, Translucency, and Using Blending ===<br />
<br />
=== Display Lists and Vertex Arrays ===<br />
<br />
=== Using Fonts ===<br />
<br />
=== Lights and Shadows ===<br />
<br />
=== Curves, Surfaces, and Using Evaluators ===<br />
<br />
=== Picking and Using Selection ===<br />
<br />
=== Texture Mapping ===</div>Tomhttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Getting_Started&diff=1279Getting Started2005-11-08T01:17:01Z<p>Tom: </p>
<hr />
<div>=== Installing OpenGL runtime libraries ===<br />
<br />
==== Windows ====<br />
<br />
If you are running Windows 98/NT/2000, the OpenGL library has already been installed on your system. Otherwise, download the [ftp://ftp.microsoft.com/softlib/mslfiles/opengl95.exe Windows OpenGL library] from Microsoft.<br />
<br />
This library alone will not give you hardware acceleration for OpenGL, though, so you will need to install the latest drivers for your graphics card:<br />
* [http://www.3dlabs.com 3Dlabs]<br />
* [http://www.ati.com ATI]<br />
* [http://www.intel.com Intel]<br />
* [http://www.nvidia.com NVidia]<br />
<br />
Some sites also distribute beta versions of graphics drivers, which may give you access to bug fixes or new functionality before an official driver release from the manufacturer:<br />
* [http://www.3dchipset.com 3DChipset]<br />
* [http://www.guru3d.com Guru3D]<br />
<br />
==== Linux ====<br />
<br />
==== MacOS ====</div>Tomhttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Main_Page&diff=1278Main Page2005-11-08T01:03:16Z<p>Tom: </p>
<hr />
<div>=== About this Wiki ===<br />
<br />
This Wiki is an attempt to collect answers to frequently asked questions on the OpenGL.org forums. The hope is that by using a Wiki rather than a classic FAQ page, the information contained here will be kept relevant and up to date.<br />
<br />
=== [[Getting started]] ===<br />
<br />
Discusses the things you need to know before you can get started with OpenGL. This includes how to set up OpenGL runtime libraries on your system, as well as information on setting up your development environment.<br />
<br />
=== [[General OpenGL]] ===<br />
<br />
Explains the basics of the OpenGL API and answers the most frequently asked questions about it.<br />
<br />
=== [[OpenGL extensions]] ===<br />
<br />
Introduces OpenGL's extension mechanism, and elaborates on the many extensions that are available.<br />
<br />
=== [[Shading languages]] ===<br />
<br />
Discusses the shading languages available for programmable vertex and fragment processing in OpenGL.<br />
<br />
=== [[Math and algorithms]] ===<br />
<br />
Offers API-agnostic discussion of 3D application design, rendering techniques, 3D maths, and other topics related to computer graphics.<br />
<br />
=== [[Platform specifics]] ===<br />
<br />
Focuses on OS-dependent issues that OpenGL applications may bump into.<br />
<br />
=== [[Hardware specifics]] ===<br />
<br />
Discusses the peculiarities of the different video cards and drivers that are out there.<br />
<br />
=== [[Related toolkits and APIs]] ===<br />
<br />
Provides an overview of various OpenGL toolkits (GLU, GLUT, extension loading libraries, ...), higher-level APIs and other utility libraries.<br />
<br />
=== [[History of OpenGL]] ===<br />
<br />
TBD</div>Tom