Compute eye space from window space (OpenGL Wiki, revision of 2015-09-17)<br />
<hr />
<div>This page will explain how to recompute eye-space vertex positions given window-space vertex positions. This will be shown for several cases.<br />
<br />
== Definitions ==<br />
<br />
Before we begin, we need to define some symbols:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Symbol<br />
! Meaning<br />
|-<br />
| M<br />
| The projection matrix<br />
|-<br />
| P<br />
| The eye-space position, 4D vector<br />
|-<br />
| C<br />
| The clip-space position, 4D vector<br />
|-<br />
| N<br />
| The normalized device coordinate space position, 3D vector<br />
|-<br />
| W<br />
| The window-space position, 3D vector<br />
|-<br />
| V<sub>x, y</sub><br />
| The X and Y values passed to {{apifunc|glViewport}}<br />
|-<br />
| V<sub>w, h</sub> <br />
| The width and height values passed to {{apifunc|glViewport}}<br />
|-<br />
| D<sub>n, f</sub><br />
| The near and far values passed to {{apifunc|glDepthRange}}<br />
|}<br />
<br />
== From gl_FragCoord ==<br />
<br />
{{code|gl_FragCoord.xyz}} is the window-space position W, a 3D vector quantity. {{code|gl_FragCoord.w}} contains the inverse of the clip-space W: <math>gl\_FragCoord_w = \tfrac{1}{C_w}</math>.<br />
<br />
Given these values, we have a fairly simple system of equations:<br />
<br />
<math><br />
\begin{align}<br />
\vec N & =<br />
\begin{bmatrix}<br />
\tfrac{(2 * W_x) - (2 * V_x)}{V_w} - 1\\<br />
\tfrac{(2 * W_y) - (2 * V_y)}{V_h} - 1\\<br />
\tfrac{(2 * W_z) - D_f - D_n}{D_f - D_n}<br />
\end{bmatrix}\\<br />
\vec C_{xyz} & = \frac{\vec N}{gl\_FragCoord_w}\\<br />
C_{w} & = \frac{1}{gl\_FragCoord_w}\\<br />
\vec P &= M^{-1}\vec C<br />
\end{align}<br />
</math><br />
<br />
In a GLSL fragment shader, the code would be as follows:<br />
<br />
<source lang="glsl"><br />
vec4 ndcPos;<br />
ndcPos.xy = ((2.0 * gl_FragCoord.xy) - (2.0 * viewport.xy)) / (viewport.zw) - 1.0;<br />
ndcPos.z = (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) /<br />
(gl_DepthRange.far - gl_DepthRange.near);<br />
ndcPos.w = 1.0;<br />
<br />
vec4 clipPos = ndcPos / gl_FragCoord.w;<br />
vec4 eyePos = invPersMatrix * clipPos;<br />
</source><br />
<br />
This assumes the presence of a uniform called {{code|viewport}}, which is a {{code|vec4}}, matching the parameters to {{apifunc|glViewport}}, in the order passed to that function. Also, this assumes that {{code|invPersMatrix}} is the inverse of the perspective projection matrix (it is a really bad idea to compute this in the fragment shader). Note that {{code|gl_DepthRange}} is a built-in variable available to the fragment shader.<br />
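To sanity-check these equations outside of a shader, here is a minimal pure-Python sketch (no OpenGL involved; the matrix, viewport, and point values are arbitrary test values) that projects an eye-space point forward to window space and then reconstructs it with the equations above:<br />

```python
import math

# Build a standard perspective matrix (gluPerspective-style), row-major here.
def perspective(fovy, aspect, near, far):
    t = 1.0 / math.tan(fovy / 2.0)
    return [[t / aspect, 0, 0, 0],
            [0, t, 0, 0],
            [0, 0, -(far + near) / (far - near), -2.0 * far * near / (far - near)],
            [0, 0, -1, 0]]

def mul(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def solve(m, b):
    # Gauss-Jordan elimination with partial pivoting: solve m x = b.
    a = [row[:] + [bi] for row, bi in zip(m, b)]
    for i in range(4):
        piv = max(range(i, 4), key=lambda r: abs(a[r][i]))
        a[i], a[piv] = a[piv], a[i]
        for r in range(4):
            if r != i:
                k = a[r][i] / a[i][i]
                a[r] = [x - k * y for x, y in zip(a[r], a[i])]
    return [a[r][4] / a[r][r] for r in range(4)]

M = perspective(math.radians(60.0), 16.0 / 9.0, 0.5, 100.0)
Vx, Vy, Vw, Vh = 0.0, 0.0, 1280.0, 720.0   # glViewport parameters
Dn, Df = 0.0, 1.0                          # glDepthRange parameters

# Forward: eye space -> clip -> NDC -> window (what the pipeline does).
P = [1.0, -2.0, -10.0, 1.0]
C = mul(M, P)
N = [C[i] / C[3] for i in range(3)]
W = [(N[0] + 1.0) * Vw / 2.0 + Vx,
     (N[1] + 1.0) * Vh / 2.0 + Vy,
     (Df - Dn) / 2.0 * N[2] + (Df + Dn) / 2.0]
frag_w = 1.0 / C[3]                        # gl_FragCoord.w

# Backward: exactly the equations above.
ndc = [(2.0 * W[0] - 2.0 * Vx) / Vw - 1.0,
       (2.0 * W[1] - 2.0 * Vy) / Vh - 1.0,
       (2.0 * W[2] - Df - Dn) / (Df - Dn)]
clip = [ndc[0] / frag_w, ndc[1] / frag_w, ndc[2] / frag_w, 1.0 / frag_w]
p = solve(M, clip)                         # P = M^-1 C
print([round(x, 6) for x in p])            # -> [1.0, -2.0, -10.0, 1.0]
```

The rounded output matches the original eye-space point, confirming that the equations round-trip.<br />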
<br />
== From XYZ of gl_FragCoord ==<br />
<br />
This case is mostly useful for [http://en.wikipedia.org/wiki/Deferred_shading deferred rendering techniques] (as is the optimized method in the last section). In deferred rendering, we render the material parameters of our objects to images. Then, we make several passes over these images, loading those material parameters and performing lighting computations on them.<br />
<br />
In the light pass, we need to reconstruct the eye-space vertex position in order to do lighting. However, we do not actually ''have'' {{code|gl_FragCoord}}; not for the fragment that produced the material parameters. Instead, we have the window-space X and Y position, from {{code|gl_FragCoord.xy}}, and we have the window-space depth, sampled by accessing the depth buffer, which was also saved from the deferred pass.<br />
<br />
What we are missing is {{code|gl_FragCoord.w}}, the reciprocal of the original clip-space '''W''' coordinate.<br />
<br />
Therefore, we must find a way to compute it from the window-space XYZ coordinate and the perspective projection matrix. This discussion will assume your perspective projection matrix is of the following form:<br />
<br />
 [ xx xx xx xx ]<br />
 [ xx xx xx xx ]<br />
 [ 0  0  T1 T2 ]<br />
 [ 0  0  E1 0  ]<br />
<br />
The {{code|xx}} entries mean "anything"; they can be whatever values your projection uses. The 0's must be zeros in your projection matrix. {{code|T1}}, {{code|T2}}, and {{code|E1}} can be arbitrary terms, depending on how your projection matrix works.<br />
<br />
If your projection matrix does not fit this form, then the following code will get a lot more complicated.<br />
<br />
=== From window to ndc ===<br />
<br />
We have the XYZ of window space:<br />
<br />
<math><br />
\vec W = <br />
\begin{bmatrix}<br />
gl\_FragCoord.x\\<br />
gl\_FragCoord.y\\<br />
fromDepthTexture<br />
\end{bmatrix}<br />
</math><br />
<br />
Computing the NDC space from window space is the same as the above:<br />
<br />
<math><br />
\vec N =<br />
\begin{bmatrix}<br />
\tfrac{(2 * W_x) - (2 * V_x)}{V_w} - 1\\<br />
\tfrac{(2 * W_y) - (2 * V_y)}{V_h} - 1\\<br />
\tfrac{(2 * W_z) - D_f - D_n}{D_f - D_n}<br />
\end{bmatrix}<br />
</math><br />
<br />
Just remember: the viewport and depth range parameters are, in this case, the parameters that were used to render the ''original scene''. The viewport should not have changed of course, but the depth range certainly could (assuming you even have a depth range in the lighting pass of a deferred renderer).<br />
<br />
=== From NDC to clip ===<br />
<br />
For the sake of simplicity, here are the equations for going from NDC space to clip space:<br />
<br />
<math><br />
\begin{align}<br />
C_w & = \tfrac{T2}{N_z - \tfrac{T1}{E1}}\\<br />
\vec C_{xyz} & = \vec N * C_w<br />
\end{align}<br />
</math><br />
<br />
==== Derivation ====<br />
<br />
Deriving those two equations is very non-trivial; it's a pretty big stumbling block. Let's start with what we know.<br />
<br />
We can convert from clip space to NDC space, so we can go back:<br />
<br />
<math><br />
\begin{align}<br />
\vec N & = \tfrac{\vec C}{C_w}\\<br />
\vec C & = \vec N * C_w<br />
\end{align}<br />
</math><br />
<br />
The problem is that we don't have C<sub>w</sub>. We were able to use {{code|gl_FragCoord.w}} to compute it before, but that's not available when we're doing this after the fact in a deferred lighting pass.<br />
<br />
So how do we compute it? Well, we know that the clip space position was originally computed like this:<br />
<br />
<math><br />
\vec C = M * \vec P<br />
</math><br />
<br />
Therefore, we know that C<sub>w</sub> was computed by the dot-product of P with the fourth row of M. And given our above definition of the fourth row of M, we can conclude:<br />
<br />
<math><br />
\begin{align}<br />
C_w & = E1 * P_z\\<br />
\vec N & = \tfrac{\vec C}{E1 * P_z}<br />
\end{align}<br />
</math><br />
<br />
Of course, this just trades one unknown for another. But we can use it: taking just the Z component of this equation gives:<br />
<br />
<math><br />
N_z = \tfrac{C_z}{E1 * P_z}<br />
</math><br />
<br />
It's interesting to look at where C<sub>z</sub> comes from. As before, we know that it was computed by the dot-product of P with the ''third'' row of M. And again, given our above definition for M, we can conclude:<br />
<br />
<math><br />
\begin{align}<br />
C_z & = T1 * P_z + T2 * P_w\\<br />
N_z & = \tfrac{T1 * P_z + T2 * P_w}{E1 * P_z}<br />
\end{align}<br />
</math><br />
<br />
We still have two unknown values here, P<sub>z</sub> and P<sub>w</sub>. However, we can assume that P<sub>w</sub> is 1.0, as this is usually the case for eye space positions. Given that assumption, we only have one unknown, P<sub>z</sub>, which we can solve for:<br />
<br />
<math><br />
\begin{align}<br />
P_w & = 1.0\\<br />
N_z & = \tfrac{T1 * P_z + T2}{E1 * P_z}\\<br />
N_z & = \tfrac{T1}{E1} + \tfrac{T2}{E1 * P_z}\\<br />
N_z - \tfrac{T1}{E1} & = \tfrac{T2}{E1 * P_z}\\<br />
E1 * P_z & = \tfrac{T2}{N_z - \tfrac{T1}{E1}}\\<br />
P_z & = \tfrac{T2}{E1 * (N_z - \tfrac{T1}{E1})}\\<br />
P_z & = \tfrac{T2}{E1 * N_z - T1}<br />
\end{align}<br />
</math><br />
<br />
Now armed with P<sub>z</sub>, we can compute C<sub>w</sub>:<br />
<br />
<math><br />
\begin{align}<br />
C_w & = E1 * P_z\\<br />
C_w & = \tfrac{T2}{N_z - \tfrac{T1}{E1}}<br />
\end{align}<br />
</math><br />
<br />
And thus, we can compute the rest of C from this:<br />
<br />
<math><br />
\begin{align}<br />
\vec C_{xyz} & = \vec N * C_w\\<br />
\vec C_{xyz} & = \vec N * (\tfrac{T2}{N_z - \tfrac{T1}{E1}})<br />
\end{align}<br />
</math><br />
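The derivation can be checked numerically. The sketch below assumes a standard glFrustum-style perspective matrix, for which T1 = -(f+n)/(f-n), T2 = -2fn/(f-n), and E1 = -1; the near/far and P<sub>z</sub> values are arbitrary test values:<br />

```python
# Check C_w = T2 / (N_z - T1/E1) against a direct projection.
near, far = 0.5, 100.0
T1 = -(far + near) / (far - near)
T2 = -2.0 * far * near / (far - near)
E1 = -1.0

Pz = -10.0                       # eye-space Z (Pw assumed to be 1.0)
Cz = T1 * Pz + T2                # third row of M dotted with P
Cw = E1 * Pz                     # fourth row of M dotted with P
Nz = Cz / Cw                     # perspective divide

Cw_reconstructed = T2 / (Nz - T1 / E1)
print(round(Cw, 6), round(Cw_reconstructed, 6))   # -> 10.0 10.0
```

The reconstructed C<sub>w</sub> agrees with the one computed by the forward projection.<br />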
<br />
=== From clip to eye ===<br />
<br />
With the full 4D vector C computed, we can compute P just as before:<br />
<br />
<math><br />
\vec P = M^{-1}\vec C<br />
</math><br />
<br />
=== GLSL example ===<br />
<br />
Here is some GLSL sample code for what this would look like:<br />
<br />
<source lang="glsl"><br />
uniform mat4 persMatrix;<br />
uniform mat4 invPersMatrix;<br />
uniform vec4 viewport;<br />
uniform vec2 depthrange;<br />
<br />
vec4 CalcEyeFromWindow(in vec3 windowSpace)<br />
{<br />
vec3 ndcPos;<br />
ndcPos.xy = ((2.0 * windowSpace.xy) - (2.0 * viewport.xy)) / (viewport.zw) - 1.0;<br />
ndcPos.z = (2.0 * windowSpace.z - depthrange.x - depthrange.y) /<br />
(depthrange.y - depthrange.x);<br />
<br />
vec4 clipPos;<br />
clipPos.w = persMatrix[3][2] / (ndcPos.z - (persMatrix[2][2] / persMatrix[2][3]));<br />
clipPos.xyz = ndcPos * clipPos.w;<br />
<br />
return invPersMatrix * clipPos;<br />
}<br />
</source><br />
<br />
{{code|viewport}} is a vector containing the viewport parameters. {{code|depthrange}} is a 2D vector containing the {{apifunc|glDepthRange}} parameters. The {{code|windowSpace}} vector is the first two components of {{code|gl_FragCoord}}, with the third coordinate being the depth read from the depth buffer.<br />
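One subtlety in the code above is GLSL's column-major {{code|mat4}} indexing: {{code|persMatrix[column][row]}}. The pure-Python sketch below (with hypothetical near/far values) mirrors that indexing to confirm the {{code|clipPos.w}} line:<br />

```python
# GLSL mat4 is column-major, so persMatrix[col][row]:
#   T1 = row 2, column 2 -> persMatrix[2][2]
#   T2 = row 2, column 3 -> persMatrix[3][2]
#   E1 = row 3, column 2 -> persMatrix[2][3]
near, far = 0.5, 100.0               # arbitrary test values

# Column-major storage of the matrix form shown earlier (xx terms set to 1).
persMatrix = [[1.0, 0.0, 0.0, 0.0],                             # column 0
              [0.0, 1.0, 0.0, 0.0],                             # column 1
              [0.0, 0.0, -(far + near) / (far - near), -1.0],   # column 2: T1, E1
              [0.0, 0.0, -2.0 * far * near / (far - near), 0.0]]  # column 3: T2

Pz = -10.0                           # eye-space Z (Pw = 1)
ndcZ = (persMatrix[2][2] * Pz + persMatrix[3][2]) / (persMatrix[2][3] * Pz)

# The clipPos.w line from the shader above:
clip_w = persMatrix[3][2] / (ndcZ - persMatrix[2][2] / persMatrix[2][3])
print(round(clip_w, 6))              # -> 10.0  (equals E1 * Pz)
```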
<br />
== Optimized method from XYZ of gl_FragCoord ==<br />
<br />
The previous method is certainly useful, but it's a bit slow. We can significantly speed up the eye-space position computation by letting the vertex shader do part of the work. This also lets us avoid the inverse perspective matrix entirely.<br />
<br />
This method is a two-step process. We first compute P<sub>z</sub>, the eye-space Z coordinate. Then we use that to compute the full eye-space position.<br />
<br />
The first part is actually quite easy. Most of the computations above were only necessary to obtain C<sub>w</sub>, because we needed a full clip-space position. This optimized method only needs P<sub>z</sub>, which we can compute directly from W<sub>z</sub>, the depth range, and the three projection matrix terms T1, T2, and E1:<br />
<br />
<math><br />
\begin{align}<br />
N_z & = \tfrac{(2 * W_z) - D_f - D_n}{D_f - D_n}\\<br />
P_z & = \tfrac{T2}{E1 * N_z - T1}<br />
\end{align}<br />
</math><br />
<br />
Note that this also means that we don't need the viewport settings in the fragment shader. We only need the depth range and the perspective matrix terms.<br />
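As a quick numeric sanity check of these two equations (pure Python; the matrix terms follow the standard glFrustum form, and the depth range, near/far, and P<sub>z</sub> values are arbitrary):<br />

```python
# Round-trip: eye-space Z -> NDC Z -> window Z, then back via the formulas above.
near, far = 0.5, 100.0
T1 = -(far + near) / (far - near)
T2 = -2.0 * far * near / (far - near)
E1 = -1.0
Dn, Df = 0.0, 1.0                          # glDepthRange

Pz = -25.0                                 # original eye-space Z
Nz = (T1 * Pz + T2) / (E1 * Pz)            # project (Pw = 1)
Wz = (Df - Dn) / 2.0 * Nz + (Df + Dn) / 2.0  # NDC -> window depth

Nz_back = (2.0 * Wz - Df - Dn) / (Df - Dn)
Pz_back = T2 / (E1 * Nz_back - T1)
print(round(Pz_back, 6))                   # -> -25.0
```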
<br />
The trick to this method is what follows: how we go from P<sub>z</sub> to the full eye-space position P. To understand how this works, here's a quick bit of geometry:<br />
<br />
[[Image:SimilarTriangle.png]]<br />
<br />
The E in the diagram represents the eye position, which is the origin in eye-space. P is the position we want, and P<sub>z</sub> is what we have. So, what do we need to get P from P<sub>z</sub>? All we need is a direction vector that points towards P and has a Z component of -1.0 (remember that eye space looks down the negative Z axis, so P<sub>z</sub> is negative). With that, we just multiply the vector by -P<sub>z</sub>, the positive distance to P's plane; the result will necessarily be P.<br />
<br />
So how do we get this vector?<br />
<br />
That's where the vertex shader comes in. In [http://en.wikipedia.org/wiki/Deferred_shading deferred rendering], the [[Vertex Shader]] is often a simple pass-through shader, performing no actual computation and passing no user-defined outputs. So we are free to use it for something.<br />
<br />
In the vertex shader, we construct, for each vertex, a vector from the origin towards the vertex's corresponding point on the plane one unit in front of the camera, and set the vector's Z component to -1.0. This yields a vector that points into the scene in front of the camera in eye space. The extents of that plane are easily calculated from the field of view and the aspect ratio.<br />
<br />
Linear interpolation of this value will make sure that every vector computed for a fragment will have a Z-value of -1.0. And linear interpolation will also guarantee that it points directly towards the fragment generated.<br />
<br />
We could have computed this in the fragment shader, but why bother? That would require providing the viewport transform to the fragment shader (so that we can transform W<sub>xy</sub> to eye space). And it's not like the VS is ''doing'' anything...<br />
<br />
Once we have the vector, we simply multiply it by -P<sub>z</sub> to get our eye-space position P.<br />
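This similar-triangles step can be verified in plain Python. The sketch below (hypothetical camera parameters) mirrors the vertex-shader expression for the direction vector; since the vector's Z is -1.0 and P<sub>z</sub> is negative, the scale factor is -P<sub>z</sub>:<br />

```python
import math

# Hypothetical camera: 60 degree vertical FOV, 16:9 aspect ratio.
fovy, aspect = math.radians(60.0), 16.0 / 9.0
half_h = math.tan(fovy / 2.0)        # frustum half-height at distance 1
half_w = half_h * aspect             # frustum half-width at distance 1

P = [3.0, -1.5, -20.0]               # eye-space point inside the frustum

# texCoord of the fragment P projects to ((0,0) bottom-left, (1,1) top-right):
u = (P[0] / -P[2] / half_w + 1.0) / 2.0
v = (P[1] / -P[2] / half_h + 1.0) / 2.0

# The vertex-shader expression: (2 * halfSizeNearPlane * texCoord) - halfSizeNearPlane
eye_dir = [2.0 * half_w * u - half_w,
           2.0 * half_h * v - half_h,
           -1.0]

eyeZ = P[2]                          # what the depth reconstruction yields (negative)
P_back = [d * -eyeZ for d in eye_dir]
print([round(x, 6) for x in P_back]) # -> [3.0, -1.5, -20.0]
```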
<br />
Here is some shader code.<br />
<br />
<source lang="glsl"><br />
//Vertex shader<br />
//Half the size of the frustum cross-section at distance 1: { tan(fovy/2.0) * aspect, tan(fovy/2.0) }<br />
uniform vec2 halfSizeNearPlane; <br />
<br />
layout (location=0) in vec2 clipPos;<br />
//UV for the depth buffer/screen access.<br />
//(0,0) in bottom left corner (1, 1) in top right corner<br />
layout (location=1) in vec2 texCoord;<br />
<br />
out vec3 eyeDirection;<br />
out vec2 uv;<br />
<br />
void main()<br />
{<br />
uv = texCoord;<br />
<br />
eyeDirection = vec3((2.0 * halfSizeNearPlane * texCoord) - halfSizeNearPlane, -1.0);<br />
gl_Position = vec4(clipPos, 0, 1);<br />
}<br />
<br />
//Fragment shader<br />
in vec3 eyeDirection;<br />
in vec2 uv;<br />
<br />
uniform mat4 persMatrix;<br />
uniform vec2 depthrange;<br />
<br />
uniform sampler2D depthTex;<br />
<br />
vec4 CalcEyeFromWindow(in float windowZ, in vec3 eyeDirection)<br />
{<br />
float ndcZ = (2.0 * windowZ - depthrange.x - depthrange.y) /<br />
(depthrange.y - depthrange.x);<br />
float eyeZ = persMatrix[3][2] / ((persMatrix[2][3] * ndcZ) - persMatrix[2][2]);<br />
// eyeZ is the (negative) eye-space Z; eyeDirection has a Z of -1.0,<br />
// so scale by -eyeZ to recover the position.<br />
return vec4(eyeDirection * -eyeZ, 1);<br />
}<br />
<br />
void main()<br />
{<br />
vec4 eyeSpace = CalcEyeFromWindow(texture(depthTex, uv).x, eyeDirection);<br />
}<br />
<br />
</source><br />
<br />
== References ==<br />
<br />
* [http://www.leadwerks.com/files/Deferred_Rendering_in_Leadwerks_Engine.pdf Deferred Rendering in Leadwerks Engine (PDF)]<br />
<br />
<br />
[[Category:Algorithm]]</div>