https://www.khronos.org/opengl/wiki_opengl/api.php?action=feedcontributions&user=Dorbie&feedformat=atomOpenGL Wiki - User contributions [en]2020-03-29T03:54:59ZUser contributionsMediaWiki 1.31.6https://www.khronos.org/opengl/wiki_opengl/index.php?title=Math_and_algorithms&diff=1466Math and algorithms2006-04-05T07:26:58Z<p>Dorbie: </p>
<hr />
<div>== [[Calculating a Surface Normal]] ==<br />
<br />
Required from the application by OpenGL for lighting calculations.</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Calculating_a_Surface_Normal&diff=1465Calculating a Surface Normal2006-04-05T07:26:43Z<p>Dorbie: </p>
<hr />
<div>A surface normal for a triangle can be calculated by taking the vector cross product of two edges of that triangle. The order of the vertices used in the calculation will affect the direction of the normal (in or out of the face w.r.t. winding).<br />
<br />
So for a triangle p1, p2, p3, if the vector U = p2 - p1 and the vector V = p3 - p1 then the normal N = U X V can be calculated by:<br />
<br />
Nx = UyVz - UzVy<br />
<br />
Ny = UzVx - UxVz<br />
<br />
Nz = UxVy - UyVx</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Math_and_algorithms&diff=1464Math and algorithms2006-04-05T07:26:01Z<p>Dorbie: /* Calculating a Surface Normal */</p>
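The component formulas above translate directly into code. A minimal C sketch (the struct and function names are mine for illustration, not part of any OpenGL API):

```c
typedef struct { float x, y, z; } Vec3;

/* Normal of triangle (p1, p2, p3); its direction follows the winding order.
   Illustrative helper only, not an OpenGL call. */
Vec3 triangleNormal(Vec3 p1, Vec3 p2, Vec3 p3)
{
    Vec3 u = { p2.x - p1.x, p2.y - p1.y, p2.z - p1.z };  /* U = p2 - p1 */
    Vec3 v = { p3.x - p1.x, p3.y - p1.y, p3.z - p1.z };  /* V = p3 - p1 */
    Vec3 n = { u.y * v.z - u.z * v.y,                    /* Nx */
               u.z * v.x - u.x * v.z,                    /* Ny */
               u.x * v.y - u.y * v.x };                  /* Nz */
    return n;
}
```

Note that the result is generally not unit length; normalize it before passing it to glNormal3f() unless GL_NORMALIZE is enabled.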
<hr />
<div>== [[Calculating a Surface Normal]] ==<br />
<br />
Required from the application by OpenGL for lighting calculations.<br />
<br />
== [[Vector Dot Product]] ==<br />
<br />
Useful in lighting calculations.</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Calculating_a_Surface_Normal&diff=1463Calculating a Surface Normal2006-04-05T07:24:48Z<p>Dorbie: </p>
<hr />
<div>A surface normal for a triangle can be calculated by taking the vector cross product of two edges of that triangle. The order of the vertices used in the calculation will affect the direction of the normal (in or out of the face w.r.t. winding).<br />
<br />
So for a triangle p1, p2, p3, if the vector U = p2 - p1 and the vector V = p3 - p1 then the normal N = U X V can be calculated by:<br />
<br />
Nx = UyVz - UzVy<br />
<br />
Ny = UzVx - UxVz<br />
<br />
Nz = UxVy - UyVx</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Calculating_a_Surface_Normal&diff=1462Calculating a Surface Normal2006-04-05T07:24:15Z<p>Dorbie: </p>
<hr />
<div>A surface normal for a triangle can be calculated by taking the vector cross product of two edges of that triangle. The order of the vertices used in the calculation will affect the direction of the normal (in or out of the face w.r.t. winding).<br />
<br />
So for a triangle p1, p2, p3, if the vector U = p2 - p1 and the vector V = p3-p1 then the normal U X V is calculated by:<br />
<br />
Nx = UyVz - UzVy<br />
<br />
Ny = UzVx - UxVz<br />
<br />
Nz = UxVy - UyVx</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Calculating_a_Surface_Normal&diff=1461Calculating a Surface Normal2006-04-05T07:23:58Z<p>Dorbie: </p>
<hr />
<div>A surface normal for a triangle can be calculated by taking the vector cross product of two edges of that triangle. The order of the vertices used in the calculation will affect the direction of the normal (in or out of the face w.r.t. winding).<br />
<br />
So for a triangle p1, p2, p3, if the vector U = p2 - p1 and the vector V = p3-p1 then the normal U X V is calculated by:<br />
<br />
Nx = UyVz - UzVy<br />
Ny = UzVx - UxVz<br />
Nz = UxVy - UyVx</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Math_and_algorithms&diff=1460Math and algorithms2006-04-05T07:11:46Z<p>Dorbie: </p>
<hr />
<div>== [[Calculating a Surface Normal]] ==<br />
<br />
Required for OpenGL lighting.<br />
<br />
== [[Vector Dot Product]] ==<br />
<br />
Useful in lighting calculations.</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Math_and_algorithms&diff=1459Math and algorithms2006-04-05T07:11:13Z<p>Dorbie: </p>
<hr />
<div>== Calculating a Surface Normal ==<br />
<br />
Required for OpenGL lighting.<br />
<br />
== Vector Dot Product ==<br />
<br />
Useful in lighting calculations.</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Related_toolkits_and_APIs&diff=1458Related toolkits and APIs2006-04-05T07:08:53Z<p>Dorbie: </p>
<hr />
<div>Many programming interfaces are layered on OpenGL with rich and varied functionality. Not all can interoperate.<br />
<br />
<br />
* GLUT<br />
* GLEW<br />
* GLee<br />
* FLTK<br />
* Open Scene Graph<br />
* OpenSG</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Main_Page&diff=1457Main Page2006-04-05T07:00:22Z<p>Dorbie: /* History of OpenGL */</p>
<hr />
<div>=== About this Wiki ===<br />
<br />
This Wiki is an attempt to collect answers to frequently asked questions on the OpenGL.org forums. The hope is that by using a Wiki rather than a classic FAQ page, the information contained here will be kept relevant and up to date. If you would like to contribute to this wiki, please send a request to webmaster(at)opengl(dot)org, stating your opengl.org account, your interest, and your experience level.<br />
<br />
=== [[Getting started]] ===<br />
<br />
Discusses the things you need to know before you can get started with OpenGL. This includes how to set up OpenGL runtime libraries on your system, as well as information on setting up your development environment.<br />
<br />
=== [[General OpenGL]] ===<br />
<br />
Explains the basics of the OpenGL API and answers the most frequently asked questions about it.<br />
<br />
=== [[OpenGL extensions]] ===<br />
<br />
Introduces OpenGL's extension mechanism, and elaborates on the many extensions that are available.<br />
<br />
=== [[Shading languages]] ===<br />
<br />
Discusses the shading languages available for programmable vertex and fragment processing in OpenGL.<br />
<br />
=== [[Performance]] ===<br />
<br />
Offers various performance guidelines for OpenGL applications.<br />
<br />
=== [[Math and algorithms]] ===<br />
<br />
Offers API-agnostic discussion of 3D application design, rendering techniques, 3D maths, and other topics related to computer graphics.<br />
<br />
=== [[Platform specifics]] ===<br />
<br />
Focuses on OS-dependent issues that OpenGL applications may bump into.<br />
<br />
=== [[Hardware specifics]] ===<br />
<br />
Discusses the peculiarities of the different video cards and drivers that are out there.<br />
<br />
=== [[Related toolkits and APIs]] ===<br />
<br />
Provides an overview of various OpenGL toolkits (GLU, Glut, extension loading libraries, ...), higher-level APIs and other utility libraries.<br />
<br />
=== [[History of OpenGL]] ===<br />
<br />
OpenGL 1.0 began life as an open replacement for Iris GL, and after many releases we have OpenGL 2.0 today.<br />
<br />
=== [[Glossary]] ===</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=History_of_OpenGL&diff=1456History of OpenGL2006-04-05T06:58:36Z<p>Dorbie: </p>
<hr />
<div>OpenGL was first created as an open and reproducible alternative to Iris GL, which had been the proprietary graphics API on Silicon Graphics workstations. Although OpenGL was initially similar in some respects to Iris GL, the lack of a formal specification and conformance tests made Iris GL unsuitable for broader adoption. Mark Segal and Kurt Akeley authored the OpenGL 1.0 specification, which tried to formalize the definition of a useful graphics API and made cross-platform, non-SGI third-party implementation and support viable. One notable omission from version 1.0 of the API was texture objects. Iris GL had definition and bind stages for all sorts of objects, including materials, lights, textures and texture environments. OpenGL eschewed these objects in favor of incremental state changes, with the idea that collective changes could be encapsulated in display lists. This has remained the philosophy, with the exception that texture objects (glBindTexture) with no distinct definition stage are a key part of the API.<br />
<br />
<br />
OpenGL has been through a number of revisions, which have predominantly been incremental additions in which extensions have gradually been incorporated into the core API. For example, OpenGL 1.1 added the glBindTexture extension to the core API.<br />
<br />
<br />
As of this writing, OpenGL 2.0 is the latest version and incorporates the significant addition of the OpenGL Shading Language, a C-like language with which the transformation and fragment shading stages of the pipeline can be programmed.<br />
<br />
Official versions of OpenGL released to date are 1.0, 1.1, 1.2, 1.2.1, 1.3, 1.4, 1.5 and 2.0.</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=History_of_OpenGL&diff=1455History of OpenGL2006-04-05T06:55:35Z<p>Dorbie: </p>
<hr />
<div>OpenGL was first created as an open and reproducible alternative to Iris GL, which had been the proprietary graphics API on Silicon Graphics workstations. Although OpenGL was initially similar in some respects to Iris GL, the lack of a formal specification and conformance tests made Iris GL unsuitable for broader adoption. Kurt Akeley and Mark Segal authored the OpenGL 1.0 specification, which tried to formalize the definition of a useful graphics API and made cross-platform, non-SGI third-party implementation and support viable. One notable omission from version 1.0 of the API was texture objects. Iris GL had definition and bind stages for all sorts of objects, including materials, lights, textures and texture environments. OpenGL eschewed these objects in favor of incremental state changes, with the idea that collective changes could be encapsulated in display lists. This has remained the philosophy, with the exception that texture objects (glBindTexture) with no distinct definition stage are a key part of the API.<br />
<br />
<br />
OpenGL has been through a number of revisions which have predominantly been incremental additions where extensions to the core API have gradually been incorporated into the main body of the API. For example, OpenGL 1.1 added the glBindTexture extension to the core API.<br />
<br />
<br />
As of this writing, OpenGL 2.0 is the latest version and incorporates the significant addition of the OpenGL Shading Language, a C-like language with which the transformation and fragment shading stages of the pipeline can be programmed.<br />
<br />
Official versions of OpenGL released to date are 1.0, 1.1, 1.2, 1.3, 1.4, 1.5 and 2.0.</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=History_of_OpenGL&diff=1454History of OpenGL2006-04-05T06:54:46Z<p>Dorbie: </p>
<hr />
<div>OpenGL was first created as an open and reproducible alternative to Iris GL, which had been the proprietary graphics API on Silicon Graphics workstations. Although OpenGL was initially similar in some respects to Iris GL, the lack of a formal specification and conformance tests made Iris GL unsuitable for broader adoption. Kurt Akeley and Mark Segal authored the OpenGL 1.0 specification, which tried to formalize the definition of a useful graphics API and made cross-platform, non-SGI third-party implementation and support viable. One notable omission from version 1.0 of the API was texture objects. Iris GL had definition and bind stages for all sorts of objects, including materials, lights, textures and texture environments. OpenGL eschewed these objects in favor of incremental state changes, with the idea that collective changes could be encapsulated in display lists. This has remained the philosophy, with the exception that texture objects (glBindTexture) with no distinct definition stage are a key part of the API.<br />
<br />
OpenGL has been through a number of revisions which have predominantly been incremental additions where extensions to the core API have gradually been incorporated into the main body of the API. For example, OpenGL 1.1 added the glBindTexture extension to the core API.<br />
<br />
As of this writing, OpenGL 2.0 is the latest version and incorporates the significant addition of the OpenGL Shading Language, a C-like language with which the transformation and fragment shading stages of the pipeline can be programmed.<br />
<br />
Official versions of OpenGL released to date are 1.0, 1.1, 1.2, 1.3, 1.4, 1.5 and 2.0.</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Main_Page&diff=1453Main Page2006-04-05T06:32:38Z<p>Dorbie: /* About this Wiki */</p>
<hr />
<div>=== About this Wiki ===<br />
<br />
This Wiki is an attempt to collect answers to frequently asked questions on the OpenGL.org forums. The hope is that by using a Wiki rather than a classic FAQ page, the information contained here will be kept relevant and up to date. If you would like to contribute to this wiki, please send a request to webmaster(at)opengl(dot)org, stating your opengl.org account, your interest, and your experience level.<br />
<br />
=== [[Getting started]] ===<br />
<br />
Discusses the things you need to know before you can get started with OpenGL. This includes how to set up OpenGL runtime libraries on your system, as well as information on setting up your development environment.<br />
<br />
=== [[General OpenGL]] ===<br />
<br />
Explains the basics of the OpenGL API and answers the most frequently asked questions about it.<br />
<br />
=== [[OpenGL extensions]] ===<br />
<br />
Introduces OpenGL's extension mechanism, and elaborates on the many extensions that are available.<br />
<br />
=== [[Shading languages]] ===<br />
<br />
Discusses the shading languages available for programmable vertex and fragment processing in OpenGL.<br />
<br />
=== [[Performance]] ===<br />
<br />
Offers various performance guidelines for OpenGL applications.<br />
<br />
=== [[Math and algorithms]] ===<br />
<br />
Offers API-agnostic discussion of 3D application design, rendering techniques, 3D maths, and other topics related to computer graphics.<br />
<br />
=== [[Platform specifics]] ===<br />
<br />
Focuses on OS-dependent issues that OpenGL applications may bump into.<br />
<br />
=== [[Hardware specifics]] ===<br />
<br />
Discusses the peculiarities of the different video cards and drivers that are out there.<br />
<br />
=== [[Related toolkits and APIs]] ===<br />
<br />
Provides an overview of various OpenGL toolkits (GLU, Glut, extension loading libraries, ...), higher-level APIs and other utility libraries.<br />
<br />
=== [[History of OpenGL]] ===<br />
<br />
TBD<br />
<br />
=== [[Glossary]] ===</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Main_Page&diff=1452Main Page2006-04-05T06:31:54Z<p>Dorbie: /* About this Wiki */</p>
<hr />
<div>=== About this Wiki ===<br />
<br />
This Wiki is an attempt to collect answers to frequently asked questions on the OpenGL.org forums. The hope is that by using a Wiki rather than a classic FAQ page, the information contained here will be kept relevant and up to date. If you would like to contribute to this wiki please send a request to webmaster(at)opengl(dot)org, stating your interest and experience.<br />
<br />
=== [[Getting started]] ===<br />
<br />
Discusses the things you need to know before you can get started with OpenGL. This includes how to set up OpenGL runtime libraries on your system, as well as information on setting up your development environment.<br />
<br />
=== [[General OpenGL]] ===<br />
<br />
Explains the basics of the OpenGL API and answers the most frequently asked questions about it.<br />
<br />
=== [[OpenGL extensions]] ===<br />
<br />
Introduces OpenGL's extension mechanism, and elaborates on the many extensions that are available.<br />
<br />
=== [[Shading languages]] ===<br />
<br />
Discusses the shading languages available for programmable vertex and fragment processing in OpenGL.<br />
<br />
=== [[Performance]] ===<br />
<br />
Offers various performance guidelines for OpenGL applications.<br />
<br />
=== [[Math and algorithms]] ===<br />
<br />
Offers API-agnostic discussion of 3D application design, rendering techniques, 3D maths, and other topics related to computer graphics.<br />
<br />
=== [[Platform specifics]] ===<br />
<br />
Focuses on OS-dependent issues that OpenGL applications may bump into.<br />
<br />
=== [[Hardware specifics]] ===<br />
<br />
Discusses the peculiarities of the different video cards and drivers that are out there.<br />
<br />
=== [[Related toolkits and APIs]] ===<br />
<br />
Provides an overview of various OpenGL toolkits (GLU, Glut, extension loading libraries, ...), higher-level APIs and other utility libraries.<br />
<br />
=== [[History of OpenGL]] ===<br />
<br />
TBD<br />
<br />
=== [[Glossary]] ===</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Shading_languages&diff=1432Shading languages2006-03-29T09:37:37Z<p>Dorbie: /* Shading languages: General */</p>
<hr />
<div>==== [[Shading languages: General]] ====<br />
<br />
All shading languages share common features and do largely the same thing, with more or less restriction or flexibility: all have vertex and fragment shaders with fixed functionality in between, all support vector types as a fundamental type, and all generate interpolated fragments for the fragment program input from the vertex program output. Before delving into the details of any one language, one should first understand what a shading language does in general and where it fits, and what it replaces, in the overall graphics pipeline.<br />
<br />
==== [[Shading languages: vendor-specific assembly-level]] ====<br />
<br />
This section discusses the various vendor-specific shading languages.<br />
<br />
==== [[Shading languages: ARB assembly-level]] ====<br />
<br />
This section discusses ARB_fragment_program and ARB_vertex_program.<br />
<br />
==== [[Shading languages: GLSL]] ====<br />
<br />
This section discusses the OpenGL Shading Language, or GLSL.<br />
<br />
==== [[Shading languages: Cg]] ====<br />
<br />
This section discusses NVidia's Cg language.<br />
<br />
==== [[Shading languages: Which shading language should I use?]] ====<br />
<br />
This section looks at each shading language's pros and cons, to help you decide which one is right for your project.</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Shading_languages&diff=1431Shading languages2006-03-29T09:37:21Z<p>Dorbie: /* moved to subsection */</p>
<hr />
<div>==== [[Shading languages: General]] ====<br />
<br />
All shading languages share common features and pretty much do the same thing with more or less restrictions/flexibility, for example all have vertex and fragment shaders with fixed functionality in between, all support vector types as a fundamental type and all generate interpolated fragments for the fragment program input from the vertex program output. Before delving into the details of any one language one should first understand what a shading language does in general and where it fits/what it replaces in the overall graphics pipeline.<br />
<br />
<br />
==== [[Shading languages: vendor-specific assembly-level]] ====<br />
<br />
This section discusses the various vendor-specific shading languages.<br />
<br />
==== [[Shading languages: ARB assembly-level]] ====<br />
<br />
This section discusses ARB_fragment_program and ARB_vertex_program.<br />
<br />
==== [[Shading languages: GLSL]] ====<br />
<br />
This section discusses the OpenGL Shading Language, or GLSL.<br />
<br />
==== [[Shading languages: Cg]] ====<br />
<br />
This section discusses NVidia's Cg language.<br />
<br />
==== [[Shading languages: Which shading language should I use?]] ====<br />
<br />
This section looks at each shading language's pros and cons, to help you decide which one is right for your project.</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Shading_languages:_General&diff=1430Shading languages: General2006-03-29T09:36:30Z<p>Dorbie: </p>
<hr />
<div>Shading languages are the interface used to program key parts of the modern graphics pipeline, which were previously fixed-function state machines with no programmability. With shading languages, the fixed-function vertex transformation and lighting pipeline is replaced by vertex program instructions supplied by the application, and key parts of the rasterization pipeline, mainly the texture environment and fog, are replaced by fragment program instructions supplied by the application. The key to understanding shaders is that vertex shaders are fed by graphics primitives such as triangles and lines, carrying vertex attributes such as color, texture coordinates, position and other generic attributes. The program is executed once for each vertex, and the output is clip-space primitives with similar types of per-vertex data to the input. The output of a vertex shader is then transformed to the viewport and clipped by the fixed-function pipeline. The primitive is rasterized, producing per-fragment interpolated values from the results of the vertex shader. The fragment shader program is then executed for each pixel produced by this interpolation process, using the interpolated output of the vertex shader as its input. The fragment shader outputs color attributes and possibly other outputs such as Z-buffer depth (the outputs supported depend on the specific shading language's features). The output from the fragment shader is depth tested and stencil tested using fixed-function hardware, and if it passes, the color is blended with the destination pixel, again using fixed-function hardware.</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Shading_languages&diff=1382Shading languages2006-03-26T05:44:10Z<p>Dorbie: /* Shading languages: General */</p>
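As a rough mental model of this flow, here is a toy, one-dimensional software sketch in plain C. All names are invented for illustration; real shaders run on the GPU, written in GLSL or an assembly-level program language:

```c
/* Toy software model of the programmable pipeline: a vertex shader runs once
   per vertex, fixed-function hardware interpolates its outputs across the
   primitive, and a fragment shader runs once per fragment on the
   interpolated values. */
typedef struct { float position; float color; } ShadedVertex;

/* "Vertex shader": stand-in for transform and lighting */
static ShadedVertex runVertexShader(float position, float color)
{
    ShadedVertex out;
    out.position = position * 2.0f;  /* a trivial transform */
    out.color    = color;
    return out;
}

/* Fixed-function stage: interpolate vertex-shader outputs for a fragment */
static float interpolate(float a, float b, float t)
{
    return a + (b - a) * t;
}

/* "Fragment shader": stand-in for texturing, fog, etc. */
static float runFragmentShader(float interpolatedColor)
{
    return interpolatedColor;
}
```

In a real pipeline the interpolation is perspective-correct and runs across whole primitives, but the shape of the data flow is the same: per-vertex program, fixed-function interpolation, per-fragment program.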
<hr />
<div>==== [[Shading languages: General]] ====<br />
<br />
All shading languages share common features and pretty much do the same thing with more or less restrictions/flexibility, for example all have vertex and fragment shaders with fixed functionality in between, all support vector types as a fundamental type and all generate interpolated fragments for the fragment program input from the vertex program output. Before delving into the details of any one language one should first understand what a shading language does in general and where it fits/what it replaces in the overall graphics pipeline.<br />
<br />
== this should go in subsection but I can't create new pages due to a DB error ==<br />
<br />
Shading languages are the interface used to program key parts of the modern graphics pipeline, which were previously fixed-function state machines with no programmability. With shading languages, the fixed-function vertex transformation and lighting pipeline is replaced by vertex program instructions supplied by the application, and key parts of the rasterization pipeline, mainly the texture environment and fog, are replaced by fragment program instructions supplied by the application. The key to understanding shaders is that vertex shaders are fed by graphics primitives such as triangles and lines, carrying vertex attributes such as color, texture coordinates, position and other generic attributes. The program is executed once for each vertex, and the output is clip-space primitives with similar types of per-vertex data to the input. The output of a vertex shader is then transformed to the viewport and clipped by the fixed-function pipeline. The primitive is rasterized, producing per-fragment interpolated values from the results of the vertex shader. The fragment shader program is then executed for each pixel produced by this interpolation process, using the interpolated output of the vertex shader as its input. The fragment shader outputs color attributes and possibly other outputs such as Z-buffer depth (the outputs supported depend on the specific shading language's features). The output from the fragment shader is depth tested and stencil tested using fixed-function hardware, and if it passes, the color is blended with the destination pixel, again using fixed-function hardware.<br />
<br />
==== [[Shading languages: vendor-specific assembly-level]] ====<br />
<br />
This section discusses the various vendor-specific shading languages.<br />
<br />
==== [[Shading languages: ARB assembly-level]] ====<br />
<br />
This section discusses ARB_fragment_program and ARB_vertex_program.<br />
<br />
==== [[Shading languages: GLSL]] ====<br />
<br />
This section discusses the OpenGL Shading Language, or GLSL.<br />
<br />
==== [[Shading languages: Cg]] ====<br />
<br />
This section discusses NVidia's Cg language.<br />
<br />
==== [[Shading languages: Which shading language should I use?]] ====<br />
<br />
This section looks at each shading language's pros and cons, to help you decide which one is right for your project.</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Shading_languages&diff=1381Shading languages2006-03-26T05:39:08Z<p>Dorbie: /* Shading languages: General */</p>
<hr />
<div>==== [[Shading languages: General]] ====<br />
<br />
All shading languages share common features and pretty much do the same thing with more or less restrictions/flexibility, for example all have vertex and fragment shaders with fixed functionality in between, all support vector types as a fundamental type and all generate interpolated fragments for the fragment program input from the vertex program output. Before delving into the details of any one language one should first understand what a shading language does in general and where it fits/what it replaces in the overall graphics pipeline.<br />
<br />
Shading languages are the interface used to program key parts of the modern graphics pipeline, which were previously fixed-function state machines with no programmability. With shading languages, the fixed-function vertex transformation and lighting pipeline is replaced by vertex program instructions supplied by the application, and key parts of the rasterization pipeline, mainly the texture environment and fog, are replaced by fragment program instructions supplied by the application. The key to understanding shaders is that vertex shaders are fed by graphics primitives such as triangles and lines, carrying vertex attributes such as color, texture coordinates, position and other generic attributes. The program is executed once for each vertex, and the output is clip-space primitives with similar types of per-vertex data to the input. The output of a vertex shader is then transformed to the viewport and clipped by the fixed-function pipeline. The primitive is rasterized, producing per-fragment interpolated values from the results of the vertex shader. The fragment shader program is then executed for each pixel produced by this interpolation process, using the interpolated output of the vertex shader as its input. The fragment shader outputs color attributes and possibly other outputs such as Z-buffer depth (the outputs supported depend on the specific shading language's features). The output from the fragment shader is depth tested and stencil tested using fixed-function hardware, and if it passes, the color is blended with the destination pixel, again using fixed-function hardware.<br />
<br />
==== [[Shading languages: vendor-specific assembly-level]] ====<br />
<br />
This section discusses the various vendor-specific shading languages.<br />
<br />
==== [[Shading languages: ARB assembly-level]] ====<br />
<br />
This section discusses ARB_fragment_program and ARB_vertex_program.<br />
<br />
==== [[Shading languages: GLSL]] ====<br />
<br />
This section discusses the OpenGL Shading Language, or GLSL.<br />
<br />
==== [[Shading languages: Cg]] ====<br />
<br />
This section discusses NVidia's Cg language.<br />
<br />
==== [[Shading languages: Which shading language should I use?]] ====<br />
<br />
This section looks at each shading language's pros and cons, to help you decide which one is right for your project.</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Viewing_and_Transformations&diff=1363Viewing and Transformations2006-03-06T06:57:18Z<p>Dorbie: /* How does the camera work in OpenGL? */</p>
<hr />
<div>===== How does the camera work in OpenGL? =====<br />
<br />
As far as OpenGL is concerned, there is no camera. More specifically, the camera is always located at the eye space coordinate (0.0, 0.0, 0.0). To give the appearance of moving the camera, your OpenGL application must move the scene with the inverse of the camera transformation by placing it on the MODELVIEW matrix. This is commonly referred to as the viewing transformation.<br />
<br />
In practice this is mathematically equivalent to a camera transformation, but more efficient, because model transformations and camera transformations are concatenated into a single matrix. As a result, though, certain operations must be performed while the camera, and only the camera, is on the MODELVIEW matrix. For example, to position a light source in world space, it must be positioned while the viewing transformation, and only the viewing transformation, is applied to the MODELVIEW matrix.<br />
<br />
===== How can I move my eye, or camera, in my scene? =====<br />
<br />
OpenGL doesn't provide an interface to do this using a camera model. However, the GLU library provides the gluLookAt() function, which takes an eye position, a position to look at, and an up vector, all in object space coordinates. This function computes the inverse camera transform according to its parameters and multiplies it onto the current matrix stack.<br />
<br />
===== Where should my camera go, the ModelView or Projection matrix? =====<br />
<br />
The GL_PROJECTION matrix should contain only the projection transformation calls it needs to transform eye space coordinates into clip coordinates.<br />
<br />
The GL_MODELVIEW matrix, as its name implies, should contain modeling and viewing transformations, which transform object space coordinates into eye space coordinates. Remember to place the camera transformations on the GL_MODELVIEW matrix and never on the GL_PROJECTION matrix.<br />
<br />
Think of the projection matrix as describing the attributes of your camera, such as field of view, focal length, fish eye lens, etc. Think of the ModelView matrix as where you stand with the camera and the direction you point it.<br />
<br />
The [http://www.3dgamedev.com/resources/openglfaq.txt game dev FAQ] has good information on these two matrices.<br />
<br />
Read Steve Baker's article on [http://sjbaker.org/steve/omniv/projection_abuse.html projection abuse]. This article is highly recommended and well-written. It's helped several new OpenGL programmers.<br />
<br />
===== How do I implement a zoom operation? =====<br />
<br />
A simple method for zooming is to use a uniform scale on the ModelView matrix. However, this often results in clipping by the zNear and zFar clipping planes if the model is scaled too large.<br />
<br />
A better method is to restrict the width and height of the view volume in the Projection matrix.<br />
<br />
For example, your program might maintain a floating-point zoom factor based on user input. When set to a value of 1.0, no zooming takes place. Values less than 1.0 restrict the field of view and zoom in, while values greater than 1.0 widen the field of view and zoom out. Code to create this effect might look like:<br />
<br />
<pre>static float zoomFactor; /* Global, if you want. Modified by user input. Initially 1.0 */<br />
<br />
/* A routine for setting the projection matrix. May be called from a resize<br />
   event handler in a typical application. Takes integer width and height<br />
   dimensions of the drawing area. Creates a projection matrix with correct<br />
   aspect ratio and zoom factor. */<br />
void setProjectionMatrix(int width, int height)<br />
{<br />
    glMatrixMode(GL_PROJECTION);<br />
    glLoadIdentity();<br />
    gluPerspective(50.0*zoomFactor, (float)width/(float)height, zNear, zFar);<br />
    /* ...Where 'zNear' and 'zFar' are up to you to fill in. */<br />
}</pre><br />
<br />
Instead of gluPerspective(), your application might use glFrustum(). This gets tricky, because the left, right, bottom, and top parameters, along with the zNear plane distance, also affect the field of view. Assuming you desire to keep a constant zNear plane distance (a reasonable assumption), glFrustum() code might look like this:<br />
<br />
<pre> glFrustum(left*zoomFactor, right*zoomFactor,<br />
           bottom*zoomFactor, top*zoomFactor,<br />
           zNear, zFar);</pre><br />
<br />
glOrtho() is similar.<br />
<br />
===== Given the current ModelView matrix, how can I determine the object-space location of the camera? =====<br />
<br />
The "camera" or viewpoint is at (0., 0., 0.) in eye space. When you turn this into a vector [0 0 0 1] and multiply it by the inverse of the ModelView matrix, the resulting vector is the object-space location of the camera.<br />
<br />
OpenGL doesn't let you inquire (through a glGet* routine) the inverse of the ModelView matrix. You'll need to compute the inverse with your own code.<br />
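<br />
If the ModelView matrix contains only rotations and translations (no scaling), the inverse is cheap to compute by hand: transpose the upper-left 3&times;3 rotation and back-rotate the negated translation column. The following sketch assumes OpenGL's column-major element order; the helper name is made up for illustration.<br />
<br />
```c
#include <assert.h>
#include <math.h>

/* Object-space camera position from a rigid-body (rotation + translation)
   ModelView matrix M, stored column-major as OpenGL does: m[col*4 + row].
   For such a matrix, M^-1 = [R^T | -R^T t], so the eye sits at -R^T * t. */
void cameraPosFromModelView(const float m[16], float eye[3])
{
    /* the translation t is the fourth column */
    float tx = m[12], ty = m[13], tz = m[14];
    /* multiply t by the transpose of the upper-left 3x3 rotation, negate */
    eye[0] = -(m[0]*tx + m[1]*ty + m[2]*tz);
    eye[1] = -(m[4]*tx + m[5]*ty + m[6]*tz);
    eye[2] = -(m[8]*tx + m[9]*ty + m[10]*tz);
}
```
<br />
For example, the viewing matrix produced by gluLookAt(0, 0, 5, 0, 0, 0, 0, 1, 0) yields an eye position of (0, 0, 5), as expected.<br />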
<br />
===== How do I make the camera "orbit" around a point in my scene? =====<br />
<br />
You can simulate an orbit by translating/rotating the scene/object and leaving your camera in the same place. For example, to orbit an object placed somewhere on the Y axis, while continuously looking at the origin, you might do this:<br />
<br />
<pre> gluLookAt(camera[0], camera[1], camera[2], /* look from camera XYZ */<br />
           0, 0, 0,                          /* look at the origin */<br />
           0, 1, 0);                         /* positive Y up vector */<br />
 glRotatef(orbitDegrees, 0.f, 1.f, 0.f);     /* orbit the Y axis */<br />
 /* ...where orbitDegrees is derived from mouse motion */<br />
<br />
 glCallList(SCENE);                          /* draw the scene */</pre><br />
<br />
If you insist on physically orbiting the camera position, you'll need to transform the current camera position vector before using it in your viewing transformations. <br />
<br />
In either event, I recommend you investigate gluLookAt() (if you aren't using this routine already).<br />
<br />
===== How can I automatically calculate a view that displays my entire model? (I know the bounding sphere and up vector.) =====<br />
<br />
The following is from a posting by Dave Shreiner on setting up a basic viewing system:<br />
<br />
First, compute a bounding sphere for all objects in your scene. This should provide you with two bits of information: the center of the sphere (let ( c.x, c.y, c.z ) be that point) and its diameter (call it "diam").<br />
<br />
Next, choose a value for the zNear clipping plane. A general guideline is to choose something larger than, but close to, 1.0. So, let's say you set:<br />
<br />
<pre> zNear = 1.0;<br />
zFar = zNear + diam;</pre><br />
<br />
Structure your matrix calls in this order (for an Orthographic projection):<br />
<br />
<pre> GLdouble left = c.x - diam;<br />
GLdouble right = c.x + diam;<br />
GLdouble bottom = c.y - diam;<br />
GLdouble top = c.y + diam;<br />
<br />
glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
glOrtho(left, right, bottom, top, zNear, zFar);<br />
glMatrixMode(GL_MODELVIEW);<br />
glLoadIdentity();</pre><br />
<br />
This approach should center your objects in the middle of the window and stretch them to fit (i.e., it assumes a window with aspect ratio = 1.0). If your window isn't square, compute left, right, bottom, and top as above, and put the following logic before the call to glOrtho():<br />
<br />
<pre> GLdouble aspect = (GLdouble) windowWidth / windowHeight;<br />
<br />
if ( aspect < 1.0 ) { // window taller than wide<br />
bottom /= aspect;<br />
top /= aspect;<br />
} else {<br />
left *= aspect;<br />
right *= aspect;<br />
}</pre><br />
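<br />
Putting the pieces together, the whole fit can be computed in one helper. This is a sketch; the struct and function names are invented for illustration.<br />
<br />
```c
#include <assert.h>

typedef struct { double left, right, bottom, top, zNear, zFar; } OrthoParams;

/* Fit an orthographic view volume around a bounding sphere with center
   (cx, cy) and diameter diam, widening one axis to match the window's
   aspect ratio. */
OrthoParams fitOrtho(double cx, double cy, double diam, int winW, int winH)
{
    OrthoParams p;
    double aspect = (double)winW / (double)winH;
    p.left   = cx - diam;
    p.right  = cx + diam;
    p.bottom = cy - diam;
    p.top    = cy + diam;
    if (aspect < 1.0) {        /* window taller than wide */
        p.bottom /= aspect;
        p.top    /= aspect;
    } else {
        p.left  *= aspect;
        p.right *= aspect;
    }
    p.zNear = 1.0;
    p.zFar  = p.zNear + diam;
    return p;
}
```
<br />
The six fields map directly onto the glOrtho() parameters.<br />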
<br />
The above code should position the objects in your scene appropriately. If you intend to manipulate the scene (e.g., rotate it), you need to add a viewing transform as well.<br />
<br />
A typical viewing transform will go on the ModelView matrix and might look like this:<br />
<br />
<pre> gluLookAt(0., 0., 2.*diam, c.x, c.y, c.z, 0.0, 1.0, 0.0);</pre><br />
<br />
===== Why doesn't gluLookAt work? =====<br />
<br />
This is usually caused by incorrect transformations.<br />
<br />
Assuming you are using gluPerspective() on the Projection matrix stack, with zNear and zFar as its third and fourth parameters, you need to call gluLookAt() on the ModelView matrix stack and pass parameters so your geometry falls between zNear and zFar.<br />
<br />
It's usually best to experiment with a simple piece of code when you're trying to understand viewing transformations. Let's say you are trying to look at a unit sphere centered on the origin. You'll want to set up your transformations as follows:<br />
<br />
<pre> glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
gluPerspective(50.0, 1.0, 3.0, 7.0);<br />
glMatrixMode(GL_MODELVIEW);<br />
glLoadIdentity();<br />
 gluLookAt(0.0, 0.0, 5.0,<br />
           0.0, 0.0, 0.0,<br />
           0.0, 1.0, 0.0);</pre><br />
<br />
It's important to note how the Projection and ModelView transforms work together.<br />
<br />
In this example, the Projection transform sets up a 50.0-degree field of view, with an aspect ratio of 1.0. The zNear clipping plane is 3.0 units in front of the eye, and the zFar clipping plane is 7.0 units in front of the eye. This leaves a Z volume distance of 4.0 units, ample room for a unit sphere.<br />
<br />
The ModelView transform sets the eye position at (0.0, 0.0, 5.0), and the look-at point is the origin in the center of our unit sphere. Note that the eye position is 5.0 units away from the look at point. This is important, because a distance of 5.0 units in front of the eye is in the middle of the Z volume that the Projection transform defines. If the gluLookAt() call had placed the eye at (0.0, 0.0, 1.0), it would produce a distance of 1.0 to the origin. This isn't long enough to include the sphere in the view volume, and it would be clipped by the zNear clipping plane.<br />
<br />
Similarly, if you place the eye at (0.0, 0.0, 10.0), the distance of 10.0 to the look at point will result in the unit sphere being 10.0 units away from the eye and far behind the zFar clipping plane placed at 7.0 units.<br />
<br />
If this has confused you, read up on transformations in the OpenGL red book or OpenGL Specification. After you understand object coordinate space, eye coordinate space, and clip coordinate space, the above should become clear. Also, experiment with small test programs. If you're having trouble getting the correct transforms in your main application project, it can be educational to write a small piece of code that tries to reproduce the problem with simpler geometry.<br />
<br />
===== How do I get a specified point (XYZ) to appear at the center of the scene? =====<br />
<br />
gluLookAt() is the easiest way to do this. Simply set the X, Y, and Z values of your point as the fourth, fifth, and sixth parameters to gluLookAt().<br />
<br />
===== I put my gluLookAt() call on my Projection matrix and now fog, lighting, and texture mapping don't work correctly. What happened? =====<br />
<br />
Often this is caused by placing the transformation on the wrong matrix; see the earlier question on whether the camera belongs on the ModelView or Projection matrix. However, occasionally (and in particular for lighting) the problem is that the lights were positioned while the wrong matrix was on the MODELVIEW stack, since a light's position is transformed by the current MODELVIEW matrix at the time it is specified. When a light is positioned in eye space, i.e. relative to the eye, it should be positioned while an identity matrix is on the MODELVIEW stack. When a light is positioned in the world, it should be positioned while the viewing matrix alone is on the MODELVIEW stack. When a light is positioned relative to an object under transformation, it should be positioned after that object's model matrix has been multiplied onto the viewing matrix on the MODELVIEW stack, and before anything lit by it is rendered. If any light moves relative to the eye between frames, it must be repositioned each frame with the appropriate matrix current.<br />
<br />
===== How can I create a stereo view? =====<br />
<br />
Stereo viewing is accomplished by presenting a different image to the viewer's left and right eyes. These images must match the viewer's physical relationship to the display much more closely than a mono 3D image, and the rendering method is tied closely to the display technology in use. Some graphics systems and display devices support stereo viewing in hardware, with left and right framebuffers in addition to the front and back buffers of conventional double-buffered systems. Other systems display stereo correctly when two viewports are created in specific screen regions and a specific video mode sends them to the screen. In conjunction with these modes, the viewer often wears shuttered or polarized glasses that select the displayed image appropriate to each eye. Even without these graphics features, a developer can generate stereo views with color filtering: draw the left and right eye images into, for example, the red and blue framebuffer components, and let matching colored filters select the image for each eye. More simply still, multiple systems or graphics cards (or even a single card) can generate two entirely separate video signals, one per eye, delivered via a display employing polarizing filters, a head-mounted display, or some other custom display operating on similar principles.<br />
<br />
From an OpenGL perspective, stereo rendering has two requirements: use the appropriate setup to render to the left and right eyes (be it color masks, separate contexts, or different viewports), and match the geometry of the OpenGL projection to the relationship of the viewer's left and right eyes with the display. Finally, the two eye positions in the 'virtual' world must be given a pupil separation on the ModelView stack; this separation is naturally a translation in eye space, but it can be computed in other equivalent ways.<br />
<br />
Paul Bourke has assembled information on stereo OpenGL viewing.<br />
* [http://www.swin.edu.au/astronomy/pbourke/opengl/stereogl/ 3D Stereo Rendering Using OpenGL]<br />
* [http://www.swin.edu.au/astronomy/pbourke/stereographics/stereorender/ Calculating Stereo Pairs]<br />
* [http://www.swin.edu.au/astronomy/pbourke/opengl/redblue/ Creating Anaglyphs using OpenGL]</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Getting_Started&diff=1362Getting Started2006-03-06T06:55:33Z<p>Dorbie: /* Linux */</p>
<hr />
<div>=== Installing OpenGL runtime libraries ===<br />
<br />
==== Windows ====<br />
<br />
If you are running Windows 98/NT/2000, the OpenGL library has already been installed on your system. Otherwise, download the [ftp://ftp.microsoft.com/softlib/mslfiles/opengl95.exe Windows OpenGL library] from Microsoft.<br />
<br />
This library alone will not give you hardware acceleration for OpenGL, though, so you will need to install the latest drivers for your graphics card:<br />
* [http://www.3dlabs.com 3Dlabs]<br />
* [http://www.ati.com ATI]<br />
* [http://www.intel.com Intel]<br />
* [http://www.nvidia.com NVidia]<br />
<br />
Some sites also distribute beta versions of graphics drivers, which may give you access to bug fixes or new functionality before an official driver release from the manufacturer:<br />
* [http://www.3dchipset.com 3DChipset]<br />
* [http://www.guru3d.com Guru3D]<br />
<br />
==== Linux ====<br />
<br />
Graphics on Linux is almost exclusively implemented using the X Window System. Supporting OpenGL on Linux involves using GLX extensions to the X server. There is a standard Application Binary Interface defined for OpenGL on Linux that gives application compatibility across a range of drivers. In addition, the Direct Rendering Infrastructure (DRI) is a driver framework that allows drivers to be written and interoperate within a standard framework to easily support hardware acceleration; the DRI is included in XFree86 4.0 but may need a card-specific driver to be configured after installation.<br />
<br />
Vendors have different approaches to drivers on Linux: some support Open Source efforts using the DRI, while others support closed-source frameworks, but all methods support the standard ABI that allows correctly written OpenGL applications to run on Linux.<br />
<br />
* [http://www.nvidia.com/object/unix.html Nvidia]<br />
* [http://www.faqs.org/docs/Linux-mini/Nvidia-OpenGL-Configuration.html Nvidia HOWTO (old)]<br />
* [https://support.ati.com/ics/support/KBAnswer.asp?questionID=3380 ATI]<br />
<br />
==== MacOS ====</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=OpenGL_Extension&diff=1312OpenGL Extension2006-03-03T09:52:15Z<p>Dorbie: /* Introduction to the extension mechanism */</p>
<hr />
<div>=== Introduction to the extension mechanism ===<br />
<br />
Each release of OpenGL represents a core of graphics functionality and API calls that must be supported by any vendor claiming to support that release. However, this core does not prevent individual implementors, or groups of implementors, from adding new features and API calls; in fact, even at the time of a core release, some official optional graphics capabilities may also be specified. All of these categories of enhanced functionality, together with their associated API calls and tokens, are referred to as extensions. Each OpenGL extension is carefully specified in the context of the broader OpenGL specification, and there are runtime checks an application can use to query the existence of any extension before generating the appropriate function calls.<br />
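<br />
The runtime check amounts to searching a space-separated extension string for a whole-token match; a naive strstr() alone can be fooled when one extension name is a prefix of another. The sketch below shows the usual pattern with the list passed in as a plain string; in a real program it would come from glGetString(GL_EXTENSIONS).<br />
<br />
```c
#include <assert.h>
#include <string.h>

/* Check for a name in a space-separated extension list, matching whole
   tokens only (so "GL_ARB_texture" does not match inside
   "GL_ARB_texture_env_combine"). */
int hasExtension(const char *extList, const char *name)
{
    size_t len = strlen(name);
    const char *p = extList;
    while ((p = strstr(p, name)) != NULL) {
        int startOk = (p == extList) || (p[-1] == ' ');
        int endOk = (p[len] == '\0') || (p[len] == ' ');
        if (startOk && endOk)
            return 1;
        p += len;
    }
    return 0;
}
```
<br />
hasExtension(list, "GL_ARB_texture") correctly returns 0 even when the list contains GL_ARB_texture_env_combine.<br />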
<br />
=== Vertex submission extensions ===<br />
<br />
* [[GL_ARB_vertex_buffer_object]]<br />
* [[GL_NV_vertex_array_range]]<br />
* [[GL_EXT_compiled_vertex_array]]<br />
<br />
=== Texturing related extensions ===<br />
<br />
* [[GL_ARB_texture_env_combine]]<br />
* [[GL_ARB_texture_compression]]<br />
<br />
=== Programmability extensions ===<br />
<br />
* [[GL_ARB_vertex_program]]<br />
* [[GL_ARB_fragment_program]]<br />
<br />
=== Framebuffer related extensions ===<br />
<br />
* [[GL_ARB_draw_buffers]]<br />
* [[GL_EXT_framebuffer_object]]</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Shading_languages&diff=1311Shading languages2006-03-03T09:44:18Z<p>Dorbie: /* Shading languages: General */</p>
<hr />
<div>==== [[Shading languages: General]] ====<br />
<br />
All shading languages share common features and, broadly speaking, do the same thing with more or fewer restrictions. For example, all have vertex and fragment shaders with fixed functionality in between, all support vector types as a fundamental type, and all interpolate the vertex program's outputs to produce the fragment program's inputs. Before delving into the details of any one language, one should first understand what a shading language does in general and where it fits into, and what it replaces in, the overall graphics pipeline.<br />
<br />
==== [[Shading languages: vendor-specific assembly-level]] ====<br />
<br />
This section discusses the various vendor-specific shading languages.<br />
<br />
==== [[Shading languages: ARB assembly-level]] ====<br />
<br />
This section discusses ARB_fragment_program and ARB_vertex_program.<br />
<br />
==== [[Shading languages: GLSL]] ====<br />
<br />
This section discusses the OpenGL Shading Language, or GLSL.<br />
<br />
==== [[Shading languages: Cg]] ====<br />
<br />
This section discusses NVidia's Cg language.<br />
<br />
==== [[Shading languages: Which shading language should I use?]] ====<br />
<br />
This section looks at each shading language's pros and cons, to help you decide which one is right for your project.</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Viewing_and_Transformations&diff=1309Viewing and Transformations2006-03-03T09:34:16Z<p>Dorbie: /* I put my gluLookAt() call on my Projection matrix and now fog, lighting, and texture mapping don't work correctly. What happened? */</p>
<hr />
<div>===== How does the camera work in OpenGL? =====<br />
<br />
As far as OpenGL is concerned, there is no camera. More specifically, the camera is always located at the eye space coordinate (0.0, 0.0, 0.0). To give the appearance of moving the camera, your OpenGL application must move the scene with the inverse of the camera transformation by placing it on the MODELVIEW matrix. This is commonly referred to as the viewing transformation.<br />
<br />
In practice this is mathematically equivalent to a camera transformation, but more efficient, because model transformations and camera transformations are concatenated into a single matrix. As a result, though, certain operations must be performed while the camera transformation, and only the camera transformation, is on the MODELVIEW matrix. For example, to position a light source in world space, it must be positioned while the viewing transformation alone is applied to the MODELVIEW matrix.<br />
<br />
===== How can I move my eye, or camera, in my scene? =====<br />
<br />
OpenGL doesn't provide an interface to do this using a camera model. However, the GLU library provides the gluLookAt() function, which takes an eye position, a position to look at, and an up vector, all in object space coordinates. This function computes the inverse camera transform according to its parameters and multiplies it onto the current matrix stack.<br />
<br />
===== Where should my camera go, the ModelView or Projection matrix? =====<br />
<br />
The GL_PROJECTION matrix should contain only the projection transformation calls it needs to transform eye space coordinates into clip coordinates.<br />
<br />
The GL_MODELVIEW matrix, as its name implies, should contain modeling and viewing transformations, which transform object space coordinates into eye space coordinates. Remember to place the camera transformations on the GL_MODELVIEW matrix and never on the GL_PROJECTION matrix.<br />
<br />
Think of the projection matrix as describing the attributes of your camera, such as field of view, focal length, fish eye lens, etc. Think of the ModelView matrix as where you stand with the camera and the direction you point it.<br />
<br />
The [http://www.3dgamedev.com/resources/openglfaq.txt game dev FAQ] has good information on these two matrices.<br />
<br />
Read Steve Baker's article on [http://sjbaker.org/steve/omniv/projection_abuse.html projection abuse]. This article is highly recommended and well-written. It's helped several new OpenGL programmers.<br />
<br />
===== How do I implement a zoom operation? =====<br />
<br />
A simple method for zooming is to use a uniform scale on the ModelView matrix. However, this often results in clipping by the zNear and zFar clipping planes if the model is scaled too large.<br />
<br />
A better method is to restrict the width and height of the view volume in the Projection matrix.<br />
<br />
For example, your program might maintain a zoom factor based on user input, which is a floating-point number. When set to a value of 1.0, no zooming takes place. Larger values result in greater zooming or a more restricted field of view, while smaller values cause the opposite to occur. Code to create this effect might look like:<br />
<br />
<pre> static float zoomFactor; /* Global, if you want. Modified by user input. Initially 1.0 */<br />
<br />
/* A routine for setting the projection matrix. May be called from a resize<br />
event handler in a typical application. Takes integer width and height <br />
dimensions of the drawing area. Creates a projection matrix with correct<br />
aspect ratio and zoom factor. */<br />
void setProjectionMatrix (int width, int height)<br />
{<br />
glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
gluPerspective (50.0*zoomFactor, (float)width/(float)height, zNear, zFar);<br />
/* ...Where 'zNear' and 'zFar' are up to you to fill in. */<br />
}</pre><br />
<br />
Instead of gluPerspective(), your application might use glFrustum(). This gets tricky, because the left, right, bottom, and top parameters, along with the zNear plane distance, also affect the field of view. Assuming you desire to keep a constant zNear plane distance (a reasonable assumption), glFrustum() code might look like this:<br />
<br />
<pre> glFrustum(left*zoomFactor, right*zoomFactor,<br />
bottom*zoomFactor, top*zoomFactor,<br />
zNear, zFar);</pre><br />
<br />
glOrtho() is similar.<br />
<br />
===== Given the current ModelView matrix, how can I determine the object-space location of the camera? =====<br />
<br />
The "camera" or viewpoint is at (0., 0., 0.) in eye space. When you turn this into a vector [0 0 0 1] and multiply it by the inverse of the ModelView matrix, the resulting vector is the object-space location of the camera.<br />
<br />
OpenGL doesn't let you inquire (through a glGet* routine) the inverse of the ModelView matrix. You'll need to compute the inverse with your own code.<br />
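As a sketch of that inverse (the helper name is illustrative, not from the FAQ): if the ModelView matrix contains only rotation and translation, the full inverse isn't needed, and the object-space camera position falls out directly. The matrix itself can be read back with glGetFloatv(GL_MODELVIEW_MATRIX, m).<br />
<br />
```c
/* Illustrative sketch: recover the object-space camera position from a
 * column-major 4x4 ModelView matrix, assuming the matrix contains only
 * rotation and translation (no scale or shear).  For M = [R | t], the
 * eye-space origin maps back to object space as -R^T * t, which equals
 * multiplying [0 0 0 1] by the full inverse of M. */
static void cameraPosFromModelView(const float m[16], float out[3])
{
    const float tx = m[12], ty = m[13], tz = m[14];   /* translation column */
    out[0] = -(m[0]*tx + m[1]*ty + m[2]*tz);          /* rows of R^T are   */
    out[1] = -(m[4]*tx + m[5]*ty + m[6]*tz);          /* the columns of R  */
    out[2] = -(m[8]*tx + m[9]*ty + m[10]*tz);
}
```
<br />
For example, the viewing transform glTranslatef(0, 0, -5) yields a camera at (0, 0, 5) in object space. A general ModelView matrix (one that includes scaling) needs a full 4x4 inverse instead.<br />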
<br />
===== How do I make the camera "orbit" around a point in my scene? =====<br />
<br />
You can simulate an orbit by translating/rotating the scene/object and leaving your camera in the same place. For example, to orbit an object placed somewhere on the Y axis, while continuously looking at the origin, you might do this:<br />
<br />
<pre> gluLookAt(camera[0], camera[1], camera[2], /* look from camera XYZ */<br />
0, 0, 0, /* look at the origin */<br />
0, 1, 0); /* positive Y up vector */<br />
glRotatef(orbitDegrees, 0.f, 1.f, 0.f);/* orbit the Y axis */<br />
/* ...where orbitDegrees is derived from mouse motion */<br />
<br />
glCallList(SCENE); /* draw the scene */</pre><br />
<br />
If you insist on physically orbiting the camera position, you'll need to transform the current camera position vector before using it in your viewing transformations. <br />
<br />
In either event, I recommend you investigate gluLookAt() (if you aren't using this routine already).<br />
<br />
===== How can I automatically calculate a view that displays my entire model? (I know the bounding sphere and up vector.) =====<br />
<br />
The following is from a posting by Dave Shreiner on setting up a basic viewing system:<br />
<br />
First, compute a bounding sphere for all objects in your scene. This should provide you with two bits of information: the center of the sphere (let ( c.x, c.y, c.z ) be that point) and its diameter (call it "diam").<br />
<br />
Next, choose a value for the zNear clipping plane. General guidelines are to choose something larger than, but close to 1.0. So, let's say you set<br />
<br />
<pre> zNear = 1.0;<br />
zFar = zNear + diam;</pre><br />
<br />
Structure your matrix calls in this order (for an Orthographic projection):<br />
<br />
<pre> GLdouble left = c.x - diam;<br />
GLdouble right = c.x + diam;<br />
GLdouble bottom = c.y - diam;<br />
GLdouble top = c.y + diam;<br />
<br />
glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
glOrtho(left, right, bottom, top, zNear, zFar);<br />
glMatrixMode(GL_MODELVIEW);<br />
glLoadIdentity();</pre><br />
<br />
This approach should center your objects in the middle of the window and stretch them to fit (i.e., it assumes a window with an aspect ratio of 1.0). If your window isn't square, compute left, right, bottom, and top as above, and put the following logic before the call to glOrtho():<br />
<br />
<pre> GLdouble aspect = (GLdouble) windowWidth / windowHeight;<br />
<br />
if ( aspect < 1.0 ) { // window taller than wide<br />
bottom /= aspect;<br />
top /= aspect;<br />
} else {<br />
left *= aspect;<br />
right *= aspect;<br />
}</pre><br />
<br />
The above code should position the objects in your scene appropriately. If you intend to manipulate the scene (i.e., rotate it, etc.), you need to add a viewing transform as well.<br />
<br />
A typical viewing transform will go on the ModelView matrix and might look like this:<br />
<br />
<pre> gluLookAt(0., 0., 2.*diam, c.x, c.y, c.z, 0.0, 1.0, 0.0);</pre><br />
<br />
===== Why doesn't gluLookAt work? =====<br />
<br />
This is usually caused by incorrect transformations.<br />
<br />
Assuming you are using gluPerspective() on the Projection matrix stack with zNear and zFar as the third and fourth parameters, you need to set gluLookAt on the ModelView matrix stack, and pass parameters so your geometry falls between zNear and zFar.<br />
<br />
It's usually best to experiment with a simple piece of code when you're trying to understand viewing transformations. Let's say you are trying to look at a unit sphere centered on the origin. You'll want to set up your transformations as follows:<br />
<br />
<pre> glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
gluPerspective(50.0, 1.0, 3.0, 7.0);<br />
glMatrixMode(GL_MODELVIEW);<br />
glLoadIdentity();<br />
gluLookAt(0.0, 0.0, 5.0,<br />
0.0, 0.0, 0.0,<br />
0.0, 1.0, 0.0);</pre><br />
<br />
It's important to note how the Projection and ModelView transforms work together.<br />
<br />
In this example, the Projection transform sets up a 50.0-degree field of view, with an aspect ratio of 1.0. The zNear clipping plane is 3.0 units in front of the eye, and the zFar clipping plane is 7.0 units in front of the eye. This leaves a Z volume distance of 4.0 units, ample room for a unit sphere.<br />
<br />
The ModelView transform sets the eye position at (0.0, 0.0, 5.0), and the look-at point is the origin in the center of our unit sphere. Note that the eye position is 5.0 units away from the look at point. This is important, because a distance of 5.0 units in front of the eye is in the middle of the Z volume that the Projection transform defines. If the gluLookAt() call had placed the eye at (0.0, 0.0, 1.0), it would produce a distance of 1.0 to the origin. This isn't long enough to include the sphere in the view volume, and it would be clipped by the zNear clipping plane.<br />
<br />
Similarly, if you place the eye at (0.0, 0.0, 10.0), the distance of 10.0 to the look at point will result in the unit sphere being 10.0 units away from the eye and far behind the zFar clipping plane placed at 7.0 units.<br />
<br />
If this has confused you, read up on transformations in the OpenGL red book or OpenGL Specification. After you understand object coordinate space, eye coordinate space, and clip coordinate space, the above should become clear. Also, experiment with small test programs. If you're having trouble getting the correct transforms in your main application project, it can be educational to write a small piece of code that tries to reproduce the problem with simpler geometry.<br />
<br />
===== How do I get a specified point (XYZ) to appear at the center of the scene? =====<br />
<br />
gluLookAt() is the easiest way to do this. Simply set the X, Y, and Z values of your point as the fourth, fifth, and sixth parameters to gluLookAt().<br />
<br />
===== I put my gluLookAt() call on my Projection matrix and now fog, lighting, and texture mapping don't work correctly. What happened? =====<br />
<br />
Often this is caused by placing the transformation on the wrong matrix; see the earlier question, "Where should my camera go, the ModelView or Projection matrix?", for an explanation. Occasionally, however, and in particular for lighting, the cause is that the lights were positioned while the wrong matrix was on the MODELVIEW stack. When a light is positioned in eye space (i.e., relative to the eye), it should be positioned while an identity matrix is on the MODELVIEW stack. When a light is positioned in the world, it should be positioned while the viewing matrix is on the MODELVIEW stack. When a light is positioned relative to an object under transformation, it should be positioned while that object's model matrix, multiplied onto the viewing matrix, is on the MODELVIEW stack; remember that the light must be positioned before anything lit by it is rendered. If any light moves relative to the eye between frames, it must be repositioned each frame with the appropriate matrix current.<br />
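To see why the current matrix matters, recall that glLightfv(GL_LIGHT0, GL_POSITION, ...) multiplies the supplied position by the ModelView matrix current at that moment and stores the result in eye space. The following sketch (the helper name is hypothetical, not an OpenGL function) mimics that multiply:<br />
<br />
```c
/* Illustrative sketch: reproduce the transform OpenGL applies when
 * glLightfv(..., GL_POSITION, p) is called -- the position p is multiplied
 * by the current column-major ModelView matrix and kept in eye space. */
static void transformLightPos(const float mv[16], const float p[4], float out[4])
{
    for (int r = 0; r < 4; ++r)     /* column-major matrix * column vector */
        out[r] = mv[r]*p[0] + mv[4+r]*p[1] + mv[8+r]*p[2] + mv[12+r]*p[3];
}
```
<br />
With the viewing matrix current (here, a camera at (0, 0, 5), i.e. a translation by (0, 0, -5)), a light specified at the world origin is stored at eye-space (0, 0, -5), as intended. Had the identity been current instead, the same call would leave the light at (0, 0, 0), stuck at the eye.<br />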
<br />
===== How can I create a stereo view? =====<br />
<br />
Stereo viewing is accomplished by presenting a different image to the left and right eyes of the viewer. These images must be appropriate for the viewer's relationship to the display, much more so than a mono 3D image, and the method used is tied closely to the display technology. Some graphics systems and display devices support stereo viewing in hardware, providing left and right framebuffers in addition to the front and back buffers of conventional double-buffered systems. Other systems support stereo when two viewports are created in specific screen regions and a specific video mode sends them to the screen. In conjunction with these modes, the viewer typically wears shuttered or polarized glasses that select the displayed image appropriate to each eye. Even without such hardware features, a developer can generate stereo views: with color filtering, for example, the left and right eye images are drawn to the red and blue framebuffer components and selected by red and blue filters in the viewer's glasses. More simply still, multiple systems or graphics cards (or even a single card) can generate two entirely separate video signals, one drawn for each eye, which are then routed to the appropriate eye by a display employing polarizing filters, a head-mounted display, or some other custom display operating on similar principles.<br />
<br />
From an OpenGL perspective, the requirements of stereo rendering are to use the appropriate setup to render to the left and right eyes (be it color masks, separate contexts, or different viewports) and to match the geometry of the OpenGL projection to the relationship of the viewer's left and right eyes with the display. The final OpenGL requirement is that the two eye positions in the 'virtual' world be given a pupil separation on the ModelView stack; this separation is naturally a translation in eye space, but could be calculated in other equivalent ways.<br />
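As a sketch of matching the projection to the two eyes, in the spirit of the off-axis method described in "Calculating Stereo Pairs" (all names and parameters here are illustrative): each eye gets a glFrustum shifted horizontally so that both frusta converge on a zero-parallax plane at a chosen focal distance.<br />
<br />
```c
/* Illustrative sketch of off-axis stereo frustum parameters.  halfWidth is
 * the half-width of the mono frustum at the near plane; sep is the eye
 * separation; focal is the distance to the zero-parallax (screen) plane;
 * eyeSign is -1 for the left eye and +1 for the right.  The results feed
 * glFrustum's left/right parameters; the matching ModelView change is a
 * translation of -eyeSign*sep/2 along X (moving the eye apart in eye space). */
typedef struct { double left, right; } FrustumX;

static FrustumX stereoFrustumX(double halfWidth, double zNear,
                               double focal, double sep, int eyeSign)
{
    double shift = 0.5 * sep * zNear / focal;   /* shift scaled to the near plane */
    FrustumX f;
    f.left  = -halfWidth - eyeSign * shift;
    f.right =  halfWidth - eyeSign * shift;
    return f;
}
```
<br />
The bottom, top, zNear, and zFar parameters are identical for both eyes; only the horizontal extents differ.<br />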
<br />
Paul Bourke has assembled information on stereo OpenGL viewing.<br />
* [http://www.swin.edu.au/astronomy/pbourke/opengl/stereogl/ 3D Stereo Rendering Using OpenGL]<br />
* [http://www.swin.edu.au/astronomy/pbourke/stereographics/stereorender/ Calculating Stereo Pairs]<br />
* [http://www.swin.edu.au/astronomy/pbourke/opengl/redblue/ Creating Anaglyphs using OpenGL]</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Viewing_and_Transformations&diff=1308Viewing and Transformations2006-03-03T09:27:41Z<p>Dorbie: /* How can I create a stereo view? */</p>
<hr />
<div>===== How does the camera work in OpenGL? =====<br />
<br />
As far as OpenGL is concerned, there is no camera. More specifically, the camera is always located at the eye space coordinate (0.0, 0.0, 0.0). To give the appearance of moving the camera, your OpenGL application must move the scene with the inverse of the camera transformation by placing it on the MODELVIEW matrix. This is commonly referred to as the viewing transformation.<br />
<br />
In practice this is mathematically equivalent to a camera transformation, but more efficient, because model transformations and camera transformations are concatenated into a single matrix. As a result, though, certain operations must be performed while the camera transformation, and only the camera transformation, is on the MODELVIEW matrix. For example, to position a light source in world space, it must be positioned while the viewing transformation, and only the viewing transformation, is applied to the MODELVIEW matrix.<br />
<br />
===== How can I move my eye, or camera, in my scene? =====<br />
<br />
OpenGL doesn't provide an interface to do this using a camera model. However, the GLU library provides the gluLookAt() function, which takes an eye position, a position to look at, and an up vector, all in object space coordinates. This function computes the inverse camera transform according to its parameters and multiplies it onto the current matrix stack.<br />
<br />
===== Where should my camera go, the ModelView or Projection matrix? =====<br />
<br />
The GL_PROJECTION matrix should contain only the projection transformation calls it needs to transform eye space coordinates into clip coordinates.<br />
<br />
The GL_MODELVIEW matrix, as its name implies, should contain modeling and viewing transformations, which transform object space coordinates into eye space coordinates. Remember to place the camera transformations on the GL_MODELVIEW matrix and never on the GL_PROJECTION matrix.<br />
<br />
Think of the projection matrix as describing the attributes of your camera, such as field of view, focal length, fish eye lens, etc. Think of the ModelView matrix as where you stand with the camera and the direction you point it.<br />
<br />
The [http://www.3dgamedev.com/resources/openglfaq.txt game dev FAQ] has good information on these two matrices.<br />
<br />
Read Steve Baker's article on [http://sjbaker.org/steve/omniv/projection_abuse.html projection abuse]. This article is highly recommended and well-written. It's helped several new OpenGL programmers.<br />
<br />
===== How do I implement a zoom operation? =====<br />
<br />
A simple method for zooming is to use a uniform scale on the ModelView matrix. However, this often results in clipping by the zNear and zFar clipping planes if the model is scaled too large.<br />
<br />
A better method is to restrict the width and height of the view volume in the Projection matrix.<br />
<br />
For example, your program might maintain a zoom factor based on user input, which is a floating-point number. When set to a value of 1.0, no zooming takes place. Larger values result in greater zooming or a more restricted field of view, while smaller values cause the opposite to occur. Code to create this effect might look like:<br />
<br />
<pre> static float zoomFactor; /* Global, if you want. Modified by user input. Initially 1.0 */<br />
<br />
/* A routine for setting the projection matrix. May be called from a resize<br />
event handler in a typical application. Takes integer width and height <br />
dimensions of the drawing area. Creates a projection matrix with correct<br />
aspect ratio and zoom factor. */<br />
void setProjectionMatrix (int width, int height)<br />
{<br />
glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
gluPerspective (50.0*zoomFactor, (float)width/(float)height, zNear, zFar);<br />
/* ...Where 'zNear' and 'zFar' are up to you to fill in. */<br />
}</pre><br />
<br />
Instead of gluPerspective(), your application might use glFrustum(). This gets tricky, because the left, right, bottom, and top parameters, along with the zNear plane distance, also affect the field of view. Assuming you desire to keep a constant zNear plane distance (a reasonable assumption), glFrustum() code might look like this:<br />
<br />
<pre> glFrustum(left*zoomFactor, right*zoomFactor,<br />
bottom*zoomFactor, top*zoomFactor,<br />
zNear, zFar);</pre><br />
<br />
glOrtho() is similar.<br />
<br />
===== Given the current ModelView matrix, how can I determine the object-space location of the camera? =====<br />
<br />
The "camera" or viewpoint is at (0., 0., 0.) in eye space. When you turn this into a vector [0 0 0 1] and multiply it by the inverse of the ModelView matrix, the resulting vector is the object-space location of the camera.<br />
<br />
OpenGL doesn't let you inquire (through a glGet* routine) the inverse of the ModelView matrix. You'll need to compute the inverse with your own code.<br />
<br />
===== How do I make the camera "orbit" around a point in my scene? =====<br />
<br />
You can simulate an orbit by translating/rotating the scene/object and leaving your camera in the same place. For example, to orbit an object placed somewhere on the Y axis, while continuously looking at the origin, you might do this:<br />
<br />
<pre> gluLookAt(camera[0], camera[1], camera[2], /* look from camera XYZ */<br />
0, 0, 0, /* look at the origin */<br />
0, 1, 0); /* positive Y up vector */<br />
glRotatef(orbitDegrees, 0.f, 1.f, 0.f);/* orbit the Y axis */<br />
/* ...where orbitDegrees is derived from mouse motion */<br />
<br />
glCallList(SCENE); /* draw the scene */</pre><br />
<br />
If you insist on physically orbiting the camera position, you'll need to transform the current camera position vector before using it in your viewing transformations. <br />
<br />
In either event, I recommend you investigate gluLookAt() (if you aren't using this routine already).<br />
<br />
===== How can I automatically calculate a view that displays my entire model? (I know the bounding sphere and up vector.) =====<br />
<br />
The following is from a posting by Dave Shreiner on setting up a basic viewing system:<br />
<br />
First, compute a bounding sphere for all objects in your scene. This should provide you with two bits of information: the center of the sphere (let ( c.x, c.y, c.z ) be that point) and its diameter (call it "diam").<br />
<br />
Next, choose a value for the zNear clipping plane. General guidelines are to choose something larger than, but close to 1.0. So, let's say you set<br />
<br />
<pre> zNear = 1.0;<br />
zFar = zNear + diam;</pre><br />
<br />
Structure your matrix calls in this order (for an Orthographic projection):<br />
<br />
<pre> GLdouble left = c.x - diam;<br />
GLdouble right = c.x + diam;<br />
GLdouble bottom = c.y - diam;<br />
GLdouble top = c.y + diam;<br />
<br />
glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
glOrtho(left, right, bottom, top, zNear, zFar);<br />
glMatrixMode(GL_MODELVIEW);<br />
glLoadIdentity();</pre><br />
<br />
This approach should center your objects in the middle of the window and stretch them to fit (i.e., it assumes a window with an aspect ratio of 1.0). If your window isn't square, compute left, right, bottom, and top as above, and put the following logic before the call to glOrtho():<br />
<br />
<pre> GLdouble aspect = (GLdouble) windowWidth / windowHeight;<br />
<br />
if ( aspect < 1.0 ) { // window taller than wide<br />
bottom /= aspect;<br />
top /= aspect;<br />
} else {<br />
left *= aspect;<br />
right *= aspect;<br />
}</pre><br />
<br />
The above code should position the objects in your scene appropriately. If you intend to manipulate the scene (i.e., rotate it, etc.), you need to add a viewing transform as well.<br />
<br />
A typical viewing transform will go on the ModelView matrix and might look like this:<br />
<br />
<pre> gluLookAt(0., 0., 2.*diam, c.x, c.y, c.z, 0.0, 1.0, 0.0);</pre><br />
<br />
===== Why doesn't gluLookAt work? =====<br />
<br />
This is usually caused by incorrect transformations.<br />
<br />
Assuming you are using gluPerspective() on the Projection matrix stack with zNear and zFar as the third and fourth parameters, you need to set gluLookAt on the ModelView matrix stack, and pass parameters so your geometry falls between zNear and zFar.<br />
<br />
It's usually best to experiment with a simple piece of code when you're trying to understand viewing transformations. Let's say you are trying to look at a unit sphere centered on the origin. You'll want to set up your transformations as follows:<br />
<br />
<pre> glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
gluPerspective(50.0, 1.0, 3.0, 7.0);<br />
glMatrixMode(GL_MODELVIEW);<br />
glLoadIdentity();<br />
gluLookAt(0.0, 0.0, 5.0,<br />
0.0, 0.0, 0.0,<br />
0.0, 1.0, 0.0);</pre><br />
<br />
It's important to note how the Projection and ModelView transforms work together.<br />
<br />
In this example, the Projection transform sets up a 50.0-degree field of view, with an aspect ratio of 1.0. The zNear clipping plane is 3.0 units in front of the eye, and the zFar clipping plane is 7.0 units in front of the eye. This leaves a Z volume distance of 4.0 units, ample room for a unit sphere.<br />
<br />
The ModelView transform sets the eye position at (0.0, 0.0, 5.0), and the look-at point is the origin in the center of our unit sphere. Note that the eye position is 5.0 units away from the look at point. This is important, because a distance of 5.0 units in front of the eye is in the middle of the Z volume that the Projection transform defines. If the gluLookAt() call had placed the eye at (0.0, 0.0, 1.0), it would produce a distance of 1.0 to the origin. This isn't long enough to include the sphere in the view volume, and it would be clipped by the zNear clipping plane.<br />
<br />
Similarly, if you place the eye at (0.0, 0.0, 10.0), the distance of 10.0 to the look at point will result in the unit sphere being 10.0 units away from the eye and far behind the zFar clipping plane placed at 7.0 units.<br />
<br />
If this has confused you, read up on transformations in the OpenGL red book or OpenGL Specification. After you understand object coordinate space, eye coordinate space, and clip coordinate space, the above should become clear. Also, experiment with small test programs. If you're having trouble getting the correct transforms in your main application project, it can be educational to write a small piece of code that tries to reproduce the problem with simpler geometry.<br />
<br />
===== How do I get a specified point (XYZ) to appear at the center of the scene? =====<br />
<br />
gluLookAt() is the easiest way to do this. Simply set the X, Y, and Z values of your point as the fourth, fifth, and sixth parameters to gluLookAt().<br />
<br />
===== I put my gluLookAt() call on my Projection matrix and now fog, lighting, and texture mapping don't work correctly. What happened? =====<br />
<br />
This is explained in the question above, "Where should my camera go, the ModelView or Projection matrix?": camera transformations belong on the ModelView matrix, and eye-space operations such as fog and lighting go wrong when the viewing transform sits on the Projection matrix instead.<br />
<br />
===== How can I create a stereo view? =====<br />
<br />
Stereo viewing is accomplished by presenting a different image to the left and right eyes of the viewer. These images must be appropriate for the viewer's relationship to the display, much more so than a mono 3D image, and the method used is tied closely to the display technology. Some graphics systems and display devices support stereo viewing in hardware, providing left and right framebuffers in addition to the front and back buffers of conventional double-buffered systems. Other systems support stereo when two viewports are created in specific screen regions and a specific video mode sends them to the screen. In conjunction with these modes, the viewer typically wears shuttered or polarized glasses that select the displayed image appropriate to each eye. Even without such hardware features, a developer can generate stereo views: with color filtering, for example, the left and right eye images are drawn to the red and blue framebuffer components and selected by red and blue filters in the viewer's glasses. More simply still, multiple systems or graphics cards (or even a single card) can generate two entirely separate video signals, one drawn for each eye, which are then routed to the appropriate eye by a display employing polarizing filters, a head-mounted display, or some other custom display operating on similar principles.<br />
<br />
From an OpenGL perspective, the requirements of stereo rendering are to use the appropriate setup to render to the left and right eyes (be it color masks, separate contexts, or different viewports) and to match the geometry of the OpenGL projection to the relationship of the viewer's left and right eyes with the display. The final OpenGL requirement is that the two eye positions in the 'virtual' world be given a pupil separation on the ModelView stack; this separation is naturally a translation in eye space, but could be calculated in other equivalent ways.<br />
<br />
Paul Bourke has assembled information on stereo OpenGL viewing.<br />
* [http://www.swin.edu.au/astronomy/pbourke/opengl/stereogl/ 3D Stereo Rendering Using OpenGL]<br />
* [http://www.swin.edu.au/astronomy/pbourke/stereographics/stereorender/ Calculating Stereo Pairs]<br />
* [http://www.swin.edu.au/astronomy/pbourke/opengl/redblue/ Creating Anaglyphs using OpenGL]</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Viewing_and_Transformations&diff=1307Viewing and Transformations2006-03-03T09:09:47Z<p>Dorbie: updated Steve Baker's URL (link was broken)</p>
<hr />
<div>===== How does the camera work in OpenGL? =====<br />
<br />
As far as OpenGL is concerned, there is no camera. More specifically, the camera is always located at the eye space coordinate (0.0, 0.0, 0.0). To give the appearance of moving the camera, your OpenGL application must move the scene with the inverse of the camera transformation by placing it on the MODELVIEW matrix. This is commonly referred to as the viewing transformation.<br />
<br />
In practice this is mathematically equivalent to a camera transformation, but more efficient, because model transformations and camera transformations are concatenated into a single matrix. As a result, though, certain operations must be performed while the camera transformation, and only the camera transformation, is on the MODELVIEW matrix. For example, to position a light source in world space, it must be positioned while the viewing transformation, and only the viewing transformation, is applied to the MODELVIEW matrix.<br />
<br />
===== How can I move my eye, or camera, in my scene? =====<br />
<br />
OpenGL doesn't provide an interface to do this using a camera model. However, the GLU library provides the gluLookAt() function, which takes an eye position, a position to look at, and an up vector, all in object space coordinates. This function computes the inverse camera transform according to its parameters and multiplies it onto the current matrix stack.<br />
<br />
===== Where should my camera go, the ModelView or Projection matrix? =====<br />
<br />
The GL_PROJECTION matrix should contain only the projection transformation calls it needs to transform eye space coordinates into clip coordinates.<br />
<br />
The GL_MODELVIEW matrix, as its name implies, should contain modeling and viewing transformations, which transform object space coordinates into eye space coordinates. Remember to place the camera transformations on the GL_MODELVIEW matrix and never on the GL_PROJECTION matrix.<br />
<br />
Think of the projection matrix as describing the attributes of your camera, such as field of view, focal length, fish eye lens, etc. Think of the ModelView matrix as where you stand with the camera and the direction you point it.<br />
<br />
The [http://www.3dgamedev.com/resources/openglfaq.txt game dev FAQ] has good information on these two matrices.<br />
<br />
Read Steve Baker's article on [http://sjbaker.org/steve/omniv/projection_abuse.html projection abuse]. This article is highly recommended and well-written. It's helped several new OpenGL programmers.<br />
<br />
===== How do I implement a zoom operation? =====<br />
<br />
A simple method for zooming is to use a uniform scale on the ModelView matrix. However, this often results in clipping by the zNear and zFar clipping planes if the model is scaled too large.<br />
<br />
A better method is to restrict the width and height of the view volume in the Projection matrix.<br />
<br />
For example, your program might maintain a zoom factor based on user input, which is a floating-point number. When set to a value of 1.0, no zooming takes place. Larger values result in greater zooming or a more restricted field of view, while smaller values cause the opposite to occur. Code to create this effect might look like:<br />
<br />
<pre> static float zoomFactor; /* Global, if you want. Modified by user input. Initially 1.0 */<br />
<br />
/* A routine for setting the projection matrix. May be called from a resize<br />
event handler in a typical application. Takes integer width and height <br />
dimensions of the drawing area. Creates a projection matrix with correct<br />
aspect ratio and zoom factor. */<br />
void setProjectionMatrix (int width, int height)<br />
{<br />
glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
gluPerspective (50.0*zoomFactor, (float)width/(float)height, zNear, zFar);<br />
/* ...Where 'zNear' and 'zFar' are up to you to fill in. */<br />
}</pre><br />
<br />
Instead of gluPerspective(), your application might use glFrustum(). This gets tricky, because the left, right, bottom, and top parameters, along with the zNear plane distance, also affect the field of view. Assuming you desire to keep a constant zNear plane distance (a reasonable assumption), glFrustum() code might look like this:<br />
<br />
<pre> glFrustum(left*zoomFactor, right*zoomFactor,<br />
bottom*zoomFactor, top*zoomFactor,<br />
zNear, zFar);</pre><br />
<br />
glOrtho() is similar.<br />
<br />
===== Given the current ModelView matrix, how can I determine the object-space location of the camera? =====<br />
<br />
The "camera" or viewpoint is at (0., 0., 0.) in eye space. When you turn this into a vector [0 0 0 1] and multiply it by the inverse of the ModelView matrix, the resulting vector is the object-space location of the camera.<br />
<br />
OpenGL doesn't let you inquire (through a glGet* routine) the inverse of the ModelView matrix. You'll need to compute the inverse with your own code.<br />
<br />
===== How do I make the camera "orbit" around a point in my scene? =====<br />
<br />
You can simulate an orbit by translating/rotating the scene/object and leaving your camera in the same place. For example, to orbit an object placed somewhere on the Y axis, while continuously looking at the origin, you might do this:<br />
<br />
<pre> gluLookAt(camera[0], camera[1], camera[2], /* look from camera XYZ */<br />
0, 0, 0, /* look at the origin */<br />
0, 1, 0); /* positive Y up vector */<br />
glRotatef(orbitDegrees, 0.f, 1.f, 0.f);/* orbit the Y axis */<br />
/* ...where orbitDegrees is derived from mouse motion */<br />
<br />
glCallList(SCENE); /* draw the scene */</pre><br />
<br />
If you insist on physically orbiting the camera position, you'll need to transform the current camera position vector before using it in your viewing transformations. <br />
<br />
In either event, I recommend you investigate gluLookAt() (if you aren't using this routine already).<br />
<br />
===== How can I automatically calculate a view that displays my entire model? (I know the bounding sphere and up vector.) =====<br />
<br />
The following is from a posting by Dave Shreiner on setting up a basic viewing system:<br />
<br />
First, compute a bounding sphere for all objects in your scene. This should provide you with two bits of information: the center of the sphere (let ( c.x, c.y, c.z ) be that point) and its diameter (call it "diam").<br />
<br />
Next, choose a value for the zNear clipping plane. General guidelines are to choose something larger than, but close to 1.0. So, let's say you set<br />
<br />
<pre> zNear = 1.0;<br />
zFar = zNear + diam;</pre><br />
<br />
Structure your matrix calls in this order (for an Orthographic projection):<br />
<br />
<pre> GLdouble left = c.x - diam;<br />
GLdouble right = c.x + diam;<br />
GLdouble bottom = c.y - diam;<br />
GLdouble top = c.y + diam;<br />
<br />
glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
glOrtho(left, right, bottom, top, zNear, zFar);<br />
glMatrixMode(GL_MODELVIEW);<br />
glLoadIdentity();</pre><br />
<br />
This approach should center your objects in the middle of the window and stretch them to fit (i.e., it assumes a window with an aspect ratio of 1.0). If your window isn't square, compute left, right, bottom, and top as above, and put the following logic before the call to glOrtho():<br />
<br />
<pre> GLdouble aspect = (GLdouble) windowWidth / windowHeight;<br />
<br />
if ( aspect < 1.0 ) { // window taller than wide<br />
bottom /= aspect;<br />
top /= aspect;<br />
} else {<br />
left *= aspect;<br />
right *= aspect;<br />
}</pre><br />
<br />
The above code should position the objects in your scene appropriately. If you intend to manipulate the scene (e.g., rotate it), you need to add a viewing transform as well.<br />
<br />
A typical viewing transform will go on the ModelView matrix and might look like this:<br />
<br />
<pre> gluLookAt(0., 0., 2.*diam, c.x, c.y, c.z, 0.0, 1.0, 0.0);</pre><br />
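The recipe above can be condensed into a pure function that is easy to test outside of GL. The struct and function names below are ours, not Shreiner's:<br />

```c
#include <assert.h>

/* Bounds for glOrtho() that frame a bounding sphere of the given center
   and diameter, corrected for a non-square window as described above. */
typedef struct { double left, right, bottom, top; } OrthoBounds;

OrthoBounds frameSphere(double cx, double cy, double diam,
                        int windowWidth, int windowHeight)
{
    OrthoBounds b;
    double aspect = (double) windowWidth / windowHeight;
    b.left   = cx - diam;
    b.right  = cx + diam;
    b.bottom = cy - diam;
    b.top    = cy + diam;
    if (aspect < 1.0) {        /* window taller than wide */
        b.bottom /= aspect;
        b.top    /= aspect;
    } else {
        b.left  *= aspect;
        b.right *= aspect;
    }
    return b;
}
```

For a unit-diameter sphere at the origin in a 200x100 window this yields left = -2, right = 2, bottom = -1, top = 1, so the sphere fits without distortion regardless of window shape.<br />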
<br />
===== Why doesn't gluLookAt work? =====<br />
<br />
This is usually caused by incorrect transformations.<br />
<br />
Assuming you are using gluPerspective() on the Projection matrix stack with zNear and zFar as the third and fourth parameters, you need to set gluLookAt on the ModelView matrix stack, and pass parameters so your geometry falls between zNear and zFar.<br />
<br />
It's usually best to experiment with a simple piece of code when you're trying to understand viewing transformations. Let's say you are trying to look at a unit sphere centered on the origin. You'll want to set up your transformations as follows:<br />
<br />
<pre> glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
gluPerspective(50.0, 1.0, 3.0, 7.0);<br />
glMatrixMode(GL_MODELVIEW);<br />
glLoadIdentity();<br />
gluLookAt(0.0, 0.0, 5.0,<br />
0.0, 0.0, 0.0,<br />
0.0, 1.0, 0.0);</pre><br />
<br />
It's important to note how the Projection and ModelView transforms work together.<br />
<br />
In this example, the Projection transform sets up a 50.0-degree field of view, with an aspect ratio of 1.0. The zNear clipping plane is 3.0 units in front of the eye, and the zFar clipping plane is 7.0 units in front of the eye. This leaves a Z volume distance of 4.0 units, ample room for a unit sphere.<br />
<br />
The ModelView transform sets the eye position at (0.0, 0.0, 5.0), and the look-at point is the origin in the center of our unit sphere. Note that the eye position is 5.0 units away from the look-at point. This is important, because a distance of 5.0 units in front of the eye is in the middle of the Z volume that the Projection transform defines. If the gluLookAt() call had placed the eye at (0.0, 0.0, 1.0), the distance to the origin would be only 1.0 unit. That isn't enough to include the sphere in the view volume, and it would be clipped by the zNear clipping plane.<br />
<br />
Similarly, if you place the eye at (0.0, 0.0, 10.0), the distance of 10.0 to the look-at point puts the unit sphere 10.0 units away from the eye and far behind the zFar clipping plane placed at 7.0 units.<br />
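The three eye positions discussed above can be checked with one line of arithmetic. This throwaway helper is ours, not part of the FAQ:<br />

```c
#include <assert.h>

/* Returns 1 if a sphere of the given radius, centered 'dist' units
   straight ahead of the eye, lies entirely between zNear and zFar. */
int sphereBetweenPlanes(double dist, double radius,
                        double zNear, double zFar)
{
    return (dist - radius >= zNear) && (dist + radius <= zFar);
}
```

An eye distance of 5.0 keeps the unit sphere inside the 3.0 to 7.0 volume; distances of 1.0 and 10.0 fail on the zNear and zFar side, respectively.<br />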
<br />
If this has confused you, read up on transformations in the OpenGL red book or OpenGL Specification. After you understand object coordinate space, eye coordinate space, and clip coordinate space, the above should become clear. Also, experiment with small test programs. If you're having trouble getting the correct transforms in your main application project, it can be educational to write a small piece of code that tries to reproduce the problem with simpler geometry.<br />
<br />
===== How do I get a specified point (XYZ) to appear at the center of the scene? =====<br />
<br />
gluLookAt() is the easiest way to do this. Simply set the X, Y, and Z values of your point as the fourth, fifth, and sixth parameters to gluLookAt().<br />
<br />
===== I put my gluLookAt() call on my Projection matrix and now fog, lighting, and texture mapping don't work correctly. What happened? =====<br />
<br />
See the question "Where should my camera go, the ModelView or Projection matrix?" above for an explanation of this problem.<br />
<br />
===== How can I create a stereo view? =====<br />
<br />
Paul Bourke has assembled information on stereo OpenGL viewing.<br />
* [http://www.swin.edu.au/astronomy/pbourke/opengl/stereogl/ 3D Stereo Rendering Using OpenGL]<br />
* [http://www.swin.edu.au/astronomy/pbourke/stereographics/stereorender/ Calculating Stereo Pairs]<br />
* [http://www.swin.edu.au/astronomy/pbourke/opengl/redblue/ Creating Anaglyphs using OpenGL]</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Viewing_and_Transformations&diff=1306Viewing and Transformations2006-03-03T09:07:12Z<p>Dorbie: /* How does the camera work in OpenGL? */</p>
<hr />
<div>===== How does the camera work in OpenGL? =====<br />
<br />
As far as OpenGL is concerned, there is no camera. More specifically, the camera is always located at the eye space coordinate (0.0, 0.0, 0.0). To give the appearance of moving the camera, your OpenGL application must move the scene with the inverse of the camera transformation by placing it on the MODELVIEW matrix. This is commonly referred to as the viewing transformation.<br />
<br />
In practice this is mathematically equivalent to a camera transformation, but more efficient, because model and viewing transformations are concatenated into a single matrix. As a consequence, though, certain operations must be performed while the viewing transformation, and only the viewing transformation, is on the MODELVIEW matrix. For example, to position a light source in world space, it must be specified while the MODELVIEW matrix holds only the viewing transformation.<br />
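As a concrete illustration of "the inverse of the camera transformation", consider the translation-only case: placing a camera at camPos is expressed by translating every scene point by -camPos. The helper below is a sketch of ours, not wiki code:<br />

```c
#include <assert.h>

/* Viewing transform for a translation-only "camera": since OpenGL has
   no camera, moving the camera to camPos is expressed by translating
   every scene point by -camPos (the inverse of the camera's placement). */
void applyViewing(const double camPos[3], const double point[3], double eye[3])
{
    for (int i = 0; i < 3; ++i)
        eye[i] = point[i] - camPos[i];
}
```

With the camera at (0.0, 0.0, 5.0), a point at the origin lands at (0.0, 0.0, -5.0) in eye space, i.e. 5 units along the default -Z viewing direction.<br />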
<br />
===== How can I move my eye, or camera, in my scene? =====<br />
<br />
OpenGL doesn't provide an interface to do this using a camera model. However, the GLU library provides the gluLookAt() function, which takes an eye position, a position to look at, and an up vector, all in object space coordinates. This function computes the inverse camera transform according to its parameters and multiplies it onto the current matrix stack.<br />
<br />
===== Where should my camera go, the ModelView or Projection matrix? =====<br />
<br />
The GL_PROJECTION matrix should contain only the projection transformation calls it needs to transform eye space coordinates into clip coordinates.<br />
<br />
The GL_MODELVIEW matrix, as its name implies, should contain modeling and viewing transformations, which transform object space coordinates into eye space coordinates. Remember to place the camera transformations on the GL_MODELVIEW matrix and never on the GL_PROJECTION matrix.<br />
<br />
Think of the projection matrix as describing the attributes of your camera, such as field of view, focal length, fish eye lens, etc. Think of the ModelView matrix as where you stand with the camera and the direction you point it.<br />
<br />
The [http://www.3dgamedev.com/resources/openglfaq.txt game dev FAQ] has good information on these two matrices.<br />
<br />
Read Steve Baker's article on [http://web2.airmail.net/sjbaker1/projection_abuse.html projection abuse]. This article is highly recommended and well-written. It's helped several new OpenGL programmers.<br />
<br />
===== How do I implement a zoom operation? =====<br />
<br />
A simple method for zooming is to use a uniform scale on the ModelView matrix. However, this often results in clipping by the zNear and zFar clipping planes if the model is scaled too large.<br />
<br />
A better method is to restrict the width and height of the view volume in the Projection matrix.<br />
<br />
For example, your program might maintain a floating-point zoom factor based on user input. When set to 1.0, no zooming takes place. Values smaller than 1.0 restrict the field of view and zoom in, while values larger than 1.0 widen it and zoom out. Code to create this effect might look like:<br />
<br />
<pre> static float zoomFactor; /* Global, if you want. Modified by user input. Initially 1.0 */<br />
<br />
/* A routine for setting the projection matrix. May be called from a resize<br />
event handler in a typical application. Takes integer width and height <br />
dimensions of the drawing area. Creates a projection matrix with correct<br />
aspect ratio and zoom factor. */<br />
void setProjectionMatrix (int width, int height)<br />
{<br />
glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
gluPerspective (50.0*zoomFactor, (float)width/(float)height, zNear, zFar);<br />
/* ...Where 'zNear' and 'zFar' are up to you to fill in. */<br />
}</pre><br />
<br />
Instead of gluPerspective(), your application might use glFrustum(). This gets tricky, because the left, right, bottom, and top parameters, along with the zNear plane distance, also affect the field of view. Assuming you desire to keep a constant zNear plane distance (a reasonable assumption), glFrustum() code might look like this:<br />
<br />
<pre> glFrustum(left*zoomFactor, right*zoomFactor,<br />
bottom*zoomFactor, top*zoomFactor,<br />
zNear, zFar);</pre><br />
<br />
glOrtho() is similar.<br />
<br />
===== Given the current ModelView matrix, how can I determine the object-space location of the camera? =====<br />
<br />
The "camera" or viewpoint is at (0., 0., 0.) in eye space. When you turn this into a vector [0 0 0 1] and multiply it by the inverse of the ModelView matrix, the resulting vector is the object-space location of the camera.<br />
<br />
OpenGL doesn't let you inquire (through a glGet* routine) the inverse of the ModelView matrix. You'll need to compute the inverse with your own code.<br />
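A minimal sketch of that computation, assuming the ModelView matrix is a rigid transform (rotation plus translation, no scale or shear), so the rotation's inverse is its transpose. The matrix is column-major, as OpenGL stores it; the function name is ours:<br />

```c
#include <assert.h>
#include <math.h>

/* Given a column-major ModelView matrix m = [R | t] with R a pure
   rotation, the eye satisfies R*eye + t = 0, so eye = -(R^T)t. */
void cameraPosFromModelView(const double m[16], double eye[3])
{
    const double tx = m[12], ty = m[13], tz = m[14];  /* translation */
    /* the rows of R^T are the columns of R */
    eye[0] = -(m[0]*tx + m[1]*ty + m[2]*tz);
    eye[1] = -(m[4]*tx + m[5]*ty + m[6]*tz);
    eye[2] = -(m[8]*tx + m[9]*ty + m[10]*tz);
}
```

For a general ModelView matrix (one that includes scaling), you'll need a full 4x4 matrix inverse instead.<br />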
<br />
===== How do I make the camera "orbit" around a point in my scene? =====<br />
<br />
You can simulate an orbit by translating/rotating the scene/object and leaving your camera in the same place. For example, to orbit an object placed somewhere on the Y axis, while continuously looking at the origin, you might do this:<br />
<br />
<pre> gluLookAt(camera[0], camera[1], camera[2], /* look from camera XYZ */<br />
0, 0, 0, /* look at the origin */<br />
0, 1, 0); /* positive Y up vector */<br />
glRotatef(orbitDegrees, 0.f, 1.f, 0.f);/* orbit the Y axis */<br />
/* ...where orbitDegrees is derived from mouse motion */<br />
<br />
glCallList(SCENE); /* draw the scene */</pre><br />
<br />
If you insist on physically orbiting the camera position, you'll need to transform the current camera position vector before using it in your viewing transformations. <br />
<br />
In either event, I recommend you investigate gluLookAt() (if you aren't using this routine already).<br />
<br />
===== How can I automatically calculate a view that displays my entire model? (I know the bounding sphere and up vector.) =====<br />
<br />
The following is from a posting by Dave Shreiner on setting up a basic viewing system:<br />
<br />
First, compute a bounding sphere for all objects in your scene. This should provide you with two bits of information: the center of the sphere (let ( c.x, c.y, c.z ) be that point) and its diameter (call it "diam").<br />
<br />
Next, choose a value for the zNear clipping plane. The general guideline is to choose something larger than, but close to, 1.0. So, let's say you set<br />
<br />
<pre> zNear = 1.0;<br />
zFar = zNear + diam;</pre><br />
<br />
Structure your matrix calls in this order (for an Orthographic projection):<br />
<br />
<pre> GLdouble left = c.x - diam;<br />
GLdouble right = c.x + diam;<br />
GLdouble bottom = c.y - diam;<br />
GLdouble top = c.y + diam;<br />
<br />
glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
glOrtho(left, right, bottom, top, zNear, zFar);<br />
glMatrixMode(GL_MODELVIEW);<br />
glLoadIdentity();</pre><br />
<br />
This approach should center your objects in the middle of the window and stretch them to fit (i.e., it assumes a window with an aspect ratio of 1.0). If your window isn't square, compute left, right, bottom, and top as above, and put the following logic before the call to glOrtho():<br />
<br />
<pre> GLdouble aspect = (GLdouble) windowWidth / windowHeight;<br />
<br />
if ( aspect < 1.0 ) { // window taller than wide<br />
bottom /= aspect;<br />
top /= aspect;<br />
} else {<br />
left *= aspect;<br />
right *= aspect;<br />
}</pre><br />
<br />
The above code should position the objects in your scene appropriately. If you intend to manipulate the scene (e.g., rotate it), you need to add a viewing transform as well.<br />
<br />
A typical viewing transform will go on the ModelView matrix and might look like this:<br />
<br />
<pre> gluLookAt(0., 0., 2.*diam, c.x, c.y, c.z, 0.0, 1.0, 0.0);</pre><br />
<br />
===== Why doesn't gluLookAt work? =====<br />
<br />
This is usually caused by incorrect transformations.<br />
<br />
Assuming you are using gluPerspective() on the Projection matrix stack with zNear and zFar as the third and fourth parameters, you need to set gluLookAt on the ModelView matrix stack, and pass parameters so your geometry falls between zNear and zFar.<br />
<br />
It's usually best to experiment with a simple piece of code when you're trying to understand viewing transformations. Let's say you are trying to look at a unit sphere centered on the origin. You'll want to set up your transformations as follows:<br />
<br />
<pre> glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
gluPerspective(50.0, 1.0, 3.0, 7.0);<br />
glMatrixMode(GL_MODELVIEW);<br />
glLoadIdentity();<br />
gluLookAt(0.0, 0.0, 5.0,<br />
0.0, 0.0, 0.0,<br />
0.0, 1.0, 0.0);</pre><br />
<br />
It's important to note how the Projection and ModelView transforms work together.<br />
<br />
In this example, the Projection transform sets up a 50.0-degree field of view, with an aspect ratio of 1.0. The zNear clipping plane is 3.0 units in front of the eye, and the zFar clipping plane is 7.0 units in front of the eye. This leaves a Z volume distance of 4.0 units, ample room for a unit sphere.<br />
<br />
The ModelView transform sets the eye position at (0.0, 0.0, 5.0), and the look-at point is the origin in the center of our unit sphere. Note that the eye position is 5.0 units away from the look-at point. This is important, because a distance of 5.0 units in front of the eye is in the middle of the Z volume that the Projection transform defines. If the gluLookAt() call had placed the eye at (0.0, 0.0, 1.0), the distance to the origin would be only 1.0 unit. That isn't enough to include the sphere in the view volume, and it would be clipped by the zNear clipping plane.<br />
<br />
Similarly, if you place the eye at (0.0, 0.0, 10.0), the distance of 10.0 to the look-at point puts the unit sphere 10.0 units away from the eye and far behind the zFar clipping plane placed at 7.0 units.<br />
<br />
If this has confused you, read up on transformations in the OpenGL red book or OpenGL Specification. After you understand object coordinate space, eye coordinate space, and clip coordinate space, the above should become clear. Also, experiment with small test programs. If you're having trouble getting the correct transforms in your main application project, it can be educational to write a small piece of code that tries to reproduce the problem with simpler geometry.<br />
<br />
===== How do I get a specified point (XYZ) to appear at the center of the scene? =====<br />
<br />
gluLookAt() is the easiest way to do this. Simply set the X, Y, and Z values of your point as the fourth, fifth, and sixth parameters to gluLookAt().<br />
<br />
===== I put my gluLookAt() call on my Projection matrix and now fog, lighting, and texture mapping don't work correctly. What happened? =====<br />
<br />
See the question "Where should my camera go, the ModelView or Projection matrix?" above for an explanation of this problem.<br />
<br />
===== How can I create a stereo view? =====<br />
<br />
Paul Bourke has assembled information on stereo OpenGL viewing.<br />
* [http://www.swin.edu.au/astronomy/pbourke/opengl/stereogl/ 3D Stereo Rendering Using OpenGL]<br />
* [http://www.swin.edu.au/astronomy/pbourke/stereographics/stereorender/ Calculating Stereo Pairs]<br />
* [http://www.swin.edu.au/astronomy/pbourke/opengl/redblue/ Creating Anaglyphs using OpenGL]</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Viewing_and_Transformations&diff=1305Viewing and Transformations2006-03-03T04:59:48Z<p>Dorbie: /* How does the camera work in OpenGL? */</p>
<hr />
<div>===== How does the camera work in OpenGL? =====<br />
<br />
As far as OpenGL is concerned, there is no camera. More specifically, the camera is always located at the eye space coordinate (0.0, 0.0, 0.0). To give the appearance of moving the camera, your OpenGL application must move the scene with the inverse of the camera transformation. This is commonly referred to as the viewing transformation.<br />
<br />
===== How can I move my eye, or camera, in my scene? =====<br />
<br />
OpenGL doesn't provide an interface to do this using a camera model. However, the GLU library provides the gluLookAt() function, which takes an eye position, a position to look at, and an up vector, all in object space coordinates. This function computes the inverse camera transform according to its parameters and multiplies it onto the current matrix stack.<br />
<br />
===== Where should my camera go, the ModelView or Projection matrix? =====<br />
<br />
The GL_PROJECTION matrix should contain only the projection transformation calls it needs to transform eye space coordinates into clip coordinates.<br />
<br />
The GL_MODELVIEW matrix, as its name implies, should contain modeling and viewing transformations, which transform object space coordinates into eye space coordinates. Remember to place the camera transformations on the GL_MODELVIEW matrix and never on the GL_PROJECTION matrix.<br />
<br />
Think of the projection matrix as describing the attributes of your camera, such as field of view, focal length, fish eye lens, etc. Think of the ModelView matrix as where you stand with the camera and the direction you point it.<br />
<br />
The [http://www.3dgamedev.com/resources/openglfaq.txt game dev FAQ] has good information on these two matrices.<br />
<br />
Read Steve Baker's article on [http://web2.airmail.net/sjbaker1/projection_abuse.html projection abuse]. This article is highly recommended and well-written. It's helped several new OpenGL programmers.<br />
<br />
===== How do I implement a zoom operation? =====<br />
<br />
A simple method for zooming is to use a uniform scale on the ModelView matrix. However, this often results in clipping by the zNear and zFar clipping planes if the model is scaled too large.<br />
<br />
A better method is to restrict the width and height of the view volume in the Projection matrix.<br />
<br />
For example, your program might maintain a zoom factor based on user input, which is a floating-point number. When set to a value of 1.0, no zooming takes place. Larger values result in greater zooming or a more restricted field of view, while smaller values cause the opposite to occur. Code to create this effect might look like:<br />
<br />
<pre> static float zoomFactor; /* Global, if you want. Modified by user input. Initially 1.0 */<br />
<br />
/* A routine for setting the projection matrix. May be called from a resize<br />
event handler in a typical application. Takes integer width and height <br />
dimensions of the drawing area. Creates a projection matrix with correct<br />
aspect ratio and zoom factor. */<br />
void setProjectionMatrix (int width, int height)<br />
{<br />
glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
gluPerspective (50.0*zoomFactor, (float)width/(float)height, zNear, zFar);<br />
/* ...Where 'zNear' and 'zFar' are up to you to fill in. */<br />
}</pre><br />
<br />
Instead of gluPerspective(), your application might use glFrustum(). This gets tricky, because the left, right, bottom, and top parameters, along with the zNear plane distance, also affect the field of view. Assuming you desire to keep a constant zNear plane distance (a reasonable assumption), glFrustum() code might look like this:<br />
<br />
<pre> glFrustum(left*zoomFactor, right*zoomFactor,<br />
bottom*zoomFactor, top*zoomFactor,<br />
zNear, zFar);</pre><br />
<br />
glOrtho() is similar.<br />
<br />
===== Given the current ModelView matrix, how can I determine the object-space location of the camera? =====<br />
<br />
The "camera" or viewpoint is at (0., 0., 0.) in eye space. When you turn this into a vector [0 0 0 1] and multiply it by the inverse of the ModelView matrix, the resulting vector is the object-space location of the camera.<br />
<br />
OpenGL doesn't let you inquire (through a glGet* routine) the inverse of the ModelView matrix. You'll need to compute the inverse with your own code.<br />
<br />
===== How do I make the camera "orbit" around a point in my scene? =====<br />
<br />
You can simulate an orbit by translating/rotating the scene/object and leaving your camera in the same place. For example, to orbit an object placed somewhere on the Y axis, while continuously looking at the origin, you might do this:<br />
<br />
<pre> gluLookAt(camera[0], camera[1], camera[2], /* look from camera XYZ */<br />
0, 0, 0, /* look at the origin */<br />
0, 1, 0); /* positive Y up vector */<br />
glRotatef(orbitDegrees, 0.f, 1.f, 0.f);/* orbit the Y axis */<br />
/* ...where orbitDegrees is derived from mouse motion */<br />
<br />
glCallList(SCENE); /* draw the scene */</pre><br />
<br />
If you insist on physically orbiting the camera position, you'll need to transform the current camera position vector before using it in your viewing transformations. <br />
<br />
In either event, I recommend you investigate gluLookAt() (if you aren't using this routine already).<br />
<br />
===== How can I automatically calculate a view that displays my entire model? (I know the bounding sphere and up vector.) =====<br />
<br />
The following is from a posting by Dave Shreiner on setting up a basic viewing system:<br />
<br />
First, compute a bounding sphere for all objects in your scene. This should provide you with two bits of information: the center of the sphere (let ( c.x, c.y, c.z ) be that point) and its diameter (call it "diam").<br />
<br />
Next, choose a value for the zNear clipping plane. General guidelines are to choose something larger than, but close to 1.0. So, let's say you set<br />
<br />
<pre> zNear = 1.0;<br />
zFar = zNear + diam;</pre><br />
<br />
Structure your matrix calls in this order (for an Orthographic projection):<br />
<br />
<pre> GLdouble left = c.x - diam;<br />
GLdouble right = c.x + diam;<br />
GLdouble bottom c.y - diam;<br />
GLdouble top = c.y + diam;<br />
<br />
glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
glOrtho(left, right, bottom, top, zNear, zFar);<br />
glMatrixMode(GL_MODELVIEW);<br />
glLoadIdentity();</pre><br />
<br />
This approach should center your objects in the middle of the window and stretch them to fit (i.e., its assuming that you're using a window with aspect ratio = 1.0). If your window isn't square, compute left, right, bottom, and top, as above, and put in the following logic before the call to glOrtho():<br />
<br />
<pre> GLdouble aspect = (GLdouble) windowWidth / windowHeight;<br />
<br />
if ( aspect < 1.0 ) { // window taller than wide<br />
bottom /= aspect;<br />
top /= aspect;<br />
} else {<br />
left *= aspect;<br />
right *= aspect;<br />
}</pre><br />
<br />
The above code should position the objects in your scene appropriately. If you intend to manipulate (i.e. rotate, etc.), you need to add a viewing transform to it.<br />
<br />
A typical viewing transform will go on the ModelView matrix and might look like this:<br />
<br />
<pre> gluLookAt(0., 0., 2.*diam, c.x, c.y, c.z, 0.0, 1.0, 0.0);</pre><br />
<br />
===== Why doesn't gluLookAt work? =====<br />
<br />
This is usually caused by incorrect transformations.<br />
<br />
Assuming you are using gluPerspective() on the Projection matrix stack with zNear and zFar as the third and fourth parameters, you need to set gluLookAt on the ModelView matrix stack, and pass parameters so your geometry falls between zNear and zFar.<br />
<br />
It's usually best to experiment with a simple piece of code when you're trying to understand viewing transformations. Let's say you are trying to look at a unit sphere centered on the origin. You'll want to set up your transformations as follows:<br />
<br />
<pre> glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
gluPerspective(50.0, 1.0, 3.0, 7.0);<br />
glMatrixMode(GL_MODELVIEW);<br />
glLoadIdentity();<br />
gluLookAt(0.0, 0.0, 5.0,<br />
0.0, 0.0, 0.0,<br />
0.0, 1.0, 0.0);</pre><br />
<br />
It's important to note how the Projection and ModelView transforms work together.<br />
<br />
In this example, the Projection transform sets up a 50.0-degree field of view, with an aspect ratio of 1.0. The zNear clipping plane is 3.0 units in front of the eye, and the zFar clipping plane is 7.0 units in front of the eye. This leaves a Z volume distance of 4.0 units, ample room for a unit sphere.<br />
<br />
The ModelView transform sets the eye position at (0.0, 0.0, 5.0), and the look-at point is the origin in the center of our unit sphere. Note that the eye position is 5.0 units away from the look at point. This is important, because a distance of 5.0 units in front of the eye is in the middle of the Z volume that the Projection transform defines. If the gluLookAt() call had placed the eye at (0.0, 0.0, 1.0), it would produce a distance of 1.0 to the origin. This isn't long enough to include the sphere in the view volume, and it would be clipped by the zNear clipping plane.<br />
<br />
Similarly, if you place the eye at (0.0, 0.0, 10.0), the distance of 10.0 to the look at point will result in the unit sphere being 10.0 units away from the eye and far behind the zFar clipping plane placed at 7.0 units.<br />
<br />
If this has confused you, read up on transformations in the OpenGL red book or OpenGL Specification. After you understand object coordinate space, eye coordinate space, and clip coordinate space, the above should become clear. Also, experiment with small test programs. If you're having trouble getting the correct transforms in your main application project, it can be educational to write a small piece of code that tries to reproduce the problem with simpler geometry.<br />
<br />
===== How do I get a specified point (XYZ) to appear at the center of the scene? =====<br />
<br />
gluLookAt() is the easiest way to do this. Simply set the X, Y, and Z values of your point as the fourth, fifth, and sixth parameters to gluLookAt().<br />
<br />
===== I put my gluLookAt() call on my Projection matrix and now fog, lighting, and texture mapping don't work correctly. What happened? =====<br />
<br />
Look at question 3 for an explanation of this problem.<br />
<br />
===== How can I create a stereo view? =====<br />
<br />
Paul Bourke has assembled information on stereo OpenGL viewing.<br />
* [http://www.swin.edu.au/astronomy/pbourke/opengl/stereogl/ 3D Stereo Rendering Using OpenGL]<br />
* [http://www.swin.edu.au/astronomy/pbourke/stereographics/stereorender/ Calculating Stereo Pairs]<br />
* [http://www.swin.edu.au/astronomy/pbourke/opengl/redblue/ Creating Anaglyphs using OpenGL]</div>Dorbiehttps://www.khronos.org/opengl/wiki_opengl/index.php?title=Viewing_and_Transformations&diff=1304Viewing and Transformations2006-03-03T04:59:14Z<p>Dorbie: /* How does the camera work in OpenGL? */</p>
<hr />
<div>===== How does the camera work in OpenGL? =====<br />
<br />
As far as OpenGL is concerned, there is no camera. More specifically, the camera is always located at the eye space coordinate (0., 0., 0.). To give the appearance of moving the camera, your OpenGL application must move the scene with the inverse of the camera transformation. This is commonly referred to as the viewing transformation.<br />
<br />
===== How can I move my eye, or camera, in my scene? =====<br />
<br />
OpenGL doesn't provide an interface to do this using a camera model. However, the GLU library provides the gluLookAt() function, which takes an eye position, a position to look at, and an up vector, all in object space coordinates. This function computes the inverse camera transform according to its parameters and multiplies it onto the current matrix stack.<br />
<br />
===== Where should my camera go, the ModelView or Projection matrix? =====<br />
<br />
The GL_PROJECTION matrix should contain only the projection transformation calls it needs to transform eye space coordinates into clip coordinates.<br />
<br />
The GL_MODELVIEW matrix, as its name implies, should contain modeling and viewing transformations, which transform object space coordinates into eye space coordinates. Remember to place the camera transformations on the GL_MODELVIEW matrix and never on the GL_PROJECTION matrix.<br />
<br />
Think of the projection matrix as describing the attributes of your camera, such as field of view, focal length, fish eye lens, etc. Think of the ModelView matrix as where you stand with the camera and the direction you point it.<br />
<br />
The [http://www.3dgamedev.com/resources/openglfaq.txt game dev FAQ] has good information on these two matrices.<br />
<br />
Read Steve Baker's article on [http://web2.airmail.net/sjbaker1/projection_abuse.html projection abuse]. This article is highly recommended and well-written. It's helped several new OpenGL programmers.<br />
<br />
===== How do I implement a zoom operation? =====<br />
<br />
A simple method for zooming is to use a uniform scale on the ModelView matrix. However, this often results in clipping by the zNear and zFar clipping planes if the model is scaled too large.<br />
<br />
A better method is to restrict the width and height of the view volume in the Projection matrix.<br />
<br />
For example, your program might maintain a zoom factor based on user input, which is a floating-point number. When set to a value of 1.0, no zooming takes place. Larger values result in greater zooming or a more restricted field of view, while smaller values cause the opposite to occur. Code to create this effect might look like:<br />
<br />
<pre> static float zoomFactor; /* Global, if you want. Modified by user input. Initially 1.0 */<br />
<br />
/* A routine for setting the projection matrix. May be called from a resize<br />
event handler in a typical application. Takes integer width and height <br />
dimensions of the drawing area. Creates a projection matrix with correct<br />
aspect ratio and zoom factor. */<br />
void setProjectionMatrix (int width, int height)<br />
{<br />
glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
gluPerspective (50.0*zoomFactor, (float)width/(float)height, zNear, zFar);<br />
/* ...Where 'zNear' and 'zFar' are up to you to fill in. */<br />
}</pre><br />
<br />
Instead of gluPerspective(), your application might use glFrustum(). This gets tricky, because the left, right, bottom, and top parameters, along with the zNear plane distance, also affect the field of view. Assuming you desire to keep a constant zNear plane distance (a reasonable assumption), glFrustum() code might look like this:<br />
<br />
<pre> glFrustum(left*zoomFactor, right*zoomFactor,<br />
bottom*zoomFactor, top*zoomFactor,<br />
zNear, zFar);</pre><br />
<br />
glOrtho() is similar.<br />
<br />
===== Given the current ModelView matrix, how can I determine the object-space location of the camera? =====<br />
<br />
The "camera" or viewpoint is at (0., 0., 0.) in eye space. When you turn this into a vector [0 0 0 1] and multiply it by the inverse of the ModelView matrix, the resulting vector is the object-space location of the camera.<br />
<br />
OpenGL doesn't let you inquire (through a glGet* routine) the inverse of the ModelView matrix. You'll need to compute the inverse with your own code.<br />
<br />
===== How do I make the camera "orbit" around a point in my scene? =====<br />
<br />
You can simulate an orbit by translating/rotating the scene/object and leaving your camera in the same place. For example, to orbit an object placed somewhere on the Y axis, while continuously looking at the origin, you might do this:<br />
<br />
<pre> gluLookAt(camera[0], camera[1], camera[2], /* look from camera XYZ */<br />
0, 0, 0, /* look at the origin */<br />
0, 1, 0); /* positive Y up vector */<br />
glRotatef(orbitDegrees, 0.f, 1.f, 0.f);/* orbit the Y axis */<br />
/* ...where orbitDegrees is derived from mouse motion */<br />
<br />
glCallList(SCENE); /* draw the scene */</pre><br />
<br />
If you insist on physically orbiting the camera position, you'll need to transform the current camera position vector before using it in your viewing transformations. <br />
<br />
In either event, I recommend you investigate gluLookAt() (if you aren't using this routine already).<br />
<br />
===== How can I automatically calculate a view that displays my entire model? (I know the bounding sphere and up vector.) =====<br />
<br />
The following is from a posting by Dave Shreiner on setting up a basic viewing system:<br />
<br />
First, compute a bounding sphere for all objects in your scene. This should provide you with two bits of information: the center of the sphere (let ( c.x, c.y, c.z ) be that point) and its diameter (call it "diam").<br />
<br />
Next, choose a value for the zNear clipping plane. A general guideline is to choose something larger than, but close to, 1.0. So, let's say you set<br />
<br />
<pre> zNear = 1.0;<br />
zFar = zNear + diam;</pre><br />
<br />
Structure your matrix calls in this order (for an Orthographic projection):<br />
<br />
<pre> GLdouble left = c.x - diam;<br />
GLdouble right = c.x + diam;<br />
 GLdouble bottom = c.y - diam;<br />
GLdouble top = c.y + diam;<br />
<br />
glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
glOrtho(left, right, bottom, top, zNear, zFar);<br />
glMatrixMode(GL_MODELVIEW);<br />
glLoadIdentity();</pre><br />
<br />
This approach should center your objects in the middle of the window and scale them to fit (i.e., it assumes you're using a window with an aspect ratio of 1.0). If your window isn't square, compute left, right, bottom, and top as above, and put the following logic before the call to glOrtho():<br />
<br />
<pre> GLdouble aspect = (GLdouble) windowWidth / windowHeight;<br />
<br />
if ( aspect < 1.0 ) { // window taller than wide<br />
bottom /= aspect;<br />
top /= aspect;<br />
} else {<br />
left *= aspect;<br />
right *= aspect;<br />
}</pre><br />
<br />
The above code should position the objects in your scene appropriately. If you intend to manipulate the scene (e.g., rotate it), you need to add a viewing transform.<br />
<br />
A typical viewing transform will go on the ModelView matrix and might look like this:<br />
<br />
<pre> gluLookAt(0., 0., 2.*diam, c.x, c.y, c.z, 0.0, 1.0, 0.0);</pre><br />
<br />
===== Why doesn't gluLookAt work? =====<br />
<br />
This is usually caused by incorrect transformations.<br />
<br />
Assuming you are using gluPerspective() on the Projection matrix stack with zNear and zFar as the third and fourth parameters, you need to set gluLookAt on the ModelView matrix stack, and pass parameters so your geometry falls between zNear and zFar.<br />
<br />
It's usually best to experiment with a simple piece of code when you're trying to understand viewing transformations. Let's say you are trying to look at a unit sphere centered on the origin. You'll want to set up your transformations as follows:<br />
<br />
<pre> glMatrixMode(GL_PROJECTION);<br />
glLoadIdentity();<br />
gluPerspective(50.0, 1.0, 3.0, 7.0);<br />
glMatrixMode(GL_MODELVIEW);<br />
glLoadIdentity();<br />
gluLookAt(0.0, 0.0, 5.0,<br />
0.0, 0.0, 0.0,<br />
0.0, 1.0, 0.0);</pre><br />
<br />
It's important to note how the Projection and ModelView transforms work together.<br />
<br />
In this example, the Projection transform sets up a 50.0-degree field of view, with an aspect ratio of 1.0. The zNear clipping plane is 3.0 units in front of the eye, and the zFar clipping plane is 7.0 units in front of the eye. This leaves a Z volume distance of 4.0 units, ample room for a unit sphere.<br />
<br />
The ModelView transform sets the eye position at (0.0, 0.0, 5.0), and the look-at point is the origin in the center of our unit sphere. Note that the eye position is 5.0 units away from the look-at point. This is important, because a distance of 5.0 units in front of the eye is in the middle of the Z volume that the Projection transform defines. If the gluLookAt() call had placed the eye at (0.0, 0.0, 1.0), it would produce a distance of 1.0 to the origin. This isn't long enough to include the sphere in the view volume, and it would be clipped by the zNear clipping plane.<br />
<br />
Similarly, if you place the eye at (0.0, 0.0, 10.0), the distance of 10.0 to the look-at point leaves the unit sphere 10.0 units away from the eye, well behind the zFar clipping plane placed at 7.0 units.<br />
<br />
If this has confused you, read up on transformations in the OpenGL red book or OpenGL Specification. After you understand object coordinate space, eye coordinate space, and clip coordinate space, the above should become clear. Also, experiment with small test programs. If you're having trouble getting the correct transforms in your main application project, it can be educational to write a small piece of code that tries to reproduce the problem with simpler geometry.<br />
<br />
===== How do I get a specified point (XYZ) to appear at the center of the scene? =====<br />
<br />
gluLookAt() is the easiest way to do this. Simply set the X, Y, and Z values of your point as the fourth, fifth, and sixth parameters to gluLookAt().<br />
<br />
===== I put my gluLookAt() call on my Projection matrix and now fog, lighting, and texture mapping don't work correctly. What happened? =====<br />
<br />
Look at question 3 for an explanation of this problem.<br />
<br />
===== How can I create a stereo view? =====<br />
<br />
Paul Bourke has assembled information on stereo OpenGL viewing.<br />
* [http://www.swin.edu.au/astronomy/pbourke/opengl/stereogl/ 3D Stereo Rendering Using OpenGL]<br />
* [http://www.swin.edu.au/astronomy/pbourke/stereographics/stereorender/ Calculating Stereo Pairs]<br />
* [http://www.swin.edu.au/astronomy/pbourke/opengl/redblue/ Creating Anaglyphs using OpenGL]</div>Dorbie