OpenGL: Compute eye-space coordinates from window-space coordinates in GLSL?

Posted 2024-11-01 15:12:47


How do I compute eye-space coordinates from window-space coordinates (the pixel position in the framebuffer) plus the pixel's depth value in GLSL (a gluUnProject in GLSL, so to speak)?

Comments (5)

凉宸 2024-11-08 15:12:47


Looks to be a duplicate of "GLSL convert gl_FragCoord.z into eye-space z".

Edit (complete answer):

// input: x_coord, y_coord (window-space position in [0,1]), samplerDepth (scene depth texture)
vec2 xy = vec2(x_coord, y_coord);                  // in [0,1] range
float depth = texture(samplerDepth, xy).r;         // stored depth, also in [0,1]
vec4 v_screen = vec4(xy, depth, 1.0);
// map [0,1] to NDC [-1,1], then unproject with the inverse projection (needs GLSL 1.40+ for inverse())
vec4 v_homo = inverse(gl_ProjectionMatrix) * (2.0 * (v_screen - vec4(0.5)));
vec3 v_eye = v_homo.xyz / v_homo.w;                // divide out the homogeneous coordinate
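As a usage sketch (my own assumptions, not part of the original answer): in a full-screen pass, x_coord and y_coord can simply come from gl_FragCoord and a viewport-size uniform, for example:

uniform sampler2D samplerDepth; // depth texture of the scene
uniform vec2 viewportSize;      // assumed uniform: framebuffer width and height in pixels

void main()
{
    vec2 xy = gl_FragCoord.xy / viewportSize; // window position mapped to [0,1]
    // ... continue with the unprojection shown above ...
}
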
当梦初醒 2024-11-08 15:12:47


Assuming you've stuck with a fixed pipeline-style model, view and projection, you can just implement exactly the formula given in the gluUnProject man page.

There's no matrix inversion built into GLSL (before version 1.40 added inverse()), so ideally you'd do that on the CPU anyway. So you need to supply a uniform containing the inverse of your composed modelViewProjection matrix. gl_FragCoord is in window coordinates, so you also need to supply the viewport dimensions.

So, you'd probably end up with something like (coding extemporaneously):

// 'view' holds the viewport as (x, y, width, height); invertedModelViewProjection comes from the CPU
vec4 unProjectedPosition = invertedModelViewProjection * vec4(
               2.0 * (gl_FragCoord.x - view[0]) / view[2] - 1.0,
               2.0 * (gl_FragCoord.y - view[1]) / view[3] - 1.0,
               2.0 * gl_FragCoord.z - 1.0,
               1.0);

If you've implemented your own analogue of the old matrix stack then you're probably fine inverting a matrix. Otherwise, it's possibly a more daunting topic than you had anticipated and you might be better off using MESA's open source implementation (see invert_matrix, the third function in that file), just because it's well tested if nothing else.
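As a rough sketch of how that could look as a complete fragment-shader snippet (the uniform names are only illustrative), including the perspective divide that finishes the unprojection:

uniform mat4 invertedModelViewProjection; // inverse(projection * modelView), computed on the CPU
uniform vec4 view;                        // viewport as (x, y, width, height)

void main()
{
    vec4 unProjectedPosition = invertedModelViewProjection * vec4(
                   2.0 * (gl_FragCoord.x - view[0]) / view[2] - 1.0,
                   2.0 * (gl_FragCoord.y - view[1]) / view[3] - 1.0,
                   2.0 * gl_FragCoord.z - 1.0,
                   1.0);
    // divide by w to finish the unprojection, like the last step of gluUnProject
    vec3 position = unProjectedPosition.xyz / unProjectedPosition.w;
}

Note that inverting the full model-view-projection takes you back to object coordinates, exactly as gluUnProject does; to stop at eye space, which is what the question asks for, invert only the projection matrix.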

誰ツ都不明白 2024-11-08 15:12:47


Well, a guy on opengl.org has pointed out that the clip-space coordinates the projection produces are divided by clipPos.w to get the normalized device coordinates. When reversing the steps from fragment over NDC back to clip-space coordinates, you need to reconstruct that w (which happens to be -z of the corresponding view-space (camera) coordinate) and multiply the NDC coordinate by that value to get the proper clip-space coordinate, which you can then turn into a view-space coordinate by multiplying it with the inverse projection matrix.

The following code assumes that you are processing the frame buffer in a post process. When processing it while rendering geometry, you can use gl_FragCoord.z instead of texture2D (sceneDepth, ndcPos.xy).r.

Here is the code:

uniform sampler2D sceneDepth;
uniform mat4 projectionInverse;
uniform vec2 clipPlanes; // zNear, zFar
uniform vec2 windowSize; // window width, height

#define ZNEAR clipPlanes.x
#define ZFAR clipPlanes.y

#define A (ZNEAR + ZFAR)
#define B (ZNEAR - ZFAR)
#define C (2.0 * ZNEAR * ZFAR)
#define D (ndcPos.z * B)
#define ZEYE -(C / (A + D))

void main()
{
    vec3 ndcPos;
    ndcPos.xy = gl_FragCoord.xy / windowSize;
    ndcPos.z = texture2D (sceneDepth, ndcPos.xy).r; // or gl_FragCoord.z
    // map from [0,1] to NDC [-1,1]
    ndcPos -= 0.5;
    ndcPos *= 2.0;
    vec4 clipPos;
    clipPos.w = -ZEYE;                 // reconstruct clip-space w, which equals -z_eye
    clipPos.xyz = ndcPos * clipPos.w;  // undo the perspective divide
    vec4 eyePos = projectionInverse * clipPos;
}

Basically, this is a GLSL version of gluUnProject.
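For reference (this derivation is not in the original answer), the ZEYE macro is just the depth term of the standard glFrustum/gluPerspective projection solved backwards, with n = ZNEAR, f = ZFAR and z_ndc = ndcPos.z:

z_ndc = ((f + n) * z_eye + 2 * f * n) / ((f - n) * z_eye)

Solving for z_eye gives

z_eye = -(2 * n * f) / ((n + f) + z_ndc * (n - f)) = -(C / (A + D))

which is exactly what the macros compute.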

手心的温暖 2024-11-08 15:12:47


I just realized that it's unnecessary to do these computations in the fragment shader. You can save a couple of operations by building the window-to-NDC transform on the CPU and multiplying it with the MVP inverse (assuming glDepthRange(0, 1); feel free to edit):

#include <glm/glm.hpp>
#include <glm/gtx/transform.hpp> // single-argument glm::translate / glm::scale

glm::vec4 vp(left, bottom, width, height); // viewport as (x, y, width, height)
// maps window coordinates (gl_FragCoord.xyz) to normalized device coordinates
glm::mat4 viewportMat = glm::translate(
    glm::vec3(-2.0 * vp.x / vp.z - 1.0, -2.0 * vp.y / vp.w - 1.0, -1.0))
  * glm::scale(glm::vec3(2.0 / vp.z, 2.0 / vp.w, 2.0));
glm::mat4 mvpInv = glm::inverse(mvp);
glm::mat4 vmvpInv = mvpInv * viewportMat;
shader->uniform("vmvpInv", vmvpInv); // upload with whatever uniform helper you use

In the shader:

vec4 eyePos = vmvpInv * vec4(gl_FragCoord.xyz, 1.0);
vec3 pos = eyePos.xyz / eyePos.w; // perspective divide; use just the projection matrix as 'mvp' if you want eye space
巴黎夜雨 2024-11-08 15:12:47


I think all of the available answers touch the problem from one aspect, and khronos.org has a wiki page that lists a few different cases and explains them with shader code, so it's worth posting here:
Compute eye space from window space.
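One illustration of the kind of shortcut collected there (a sketch assuming a standard perspective projection, not a quote of that page): while the geometry itself is being rendered, the eye-space depth needs no matrix work at all, because gl_FragCoord.w stores 1.0 / w_clip and w_clip equals -z of the eye-space position:

// in the fragment shader of the geometry pass, standard perspective projection assumed
float zEye = -1.0 / gl_FragCoord.w; // eye-space z of this fragment (negative in front of the camera)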
