Sampling data from a shadow map texture with automatic comparison via the texture2D function

I've got a sampler2DShadow in my shader and I want to use it to implement shadow mapping. My shadow texture is set up with the proper parameters: GL_TEXTURE_COMPARE_MODE set to GL_COMPARE_R_TO_TEXTURE and GL_TEXTURE_COMPARE_FUNC set to GL_LEQUAL (meaning that the comparison should return 1 if the r value of my coordinates is less than or equal to the depth value fetched from the texture). This texture is bound to the GL_DEPTH_ATTACHMENT of an FBO rendered in light-space coordinates.
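For reference, the setup I'm describing looks roughly like this (shadow_tex, shadow_fbo and SHADOW_SIZE are simplified placeholder names, not my exact code):

GLuint shadow_tex, shadow_fbo;

glGenTextures(1, &shadow_tex);
glBindTexture(GL_TEXTURE_2D, shadow_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, SHADOW_SIZE, SHADOW_SIZE,
             0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
/* Enable the automatic depth comparison used by sampler2DShadow. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);

glGenFramebuffers(1, &shadow_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, shadow_fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, shadow_tex, 0);
glDrawBuffer(GL_NONE);  /* depth-only FBO, no color attachment */
glReadBuffer(GL_NONE);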

What coordinates should I give the texture2D function in my final fragment shader? I currently have a

smooth in vec4 light_vert_pos;

in my fragment shader, which the vertex shader sets with

light_vert_pos = light_projection_camera_matrix * modelview * in_Vertex;

I would assume I could multiply my lighting by the value

texture2D(shadowmap, light_vert_pos.xyz / light_vert_pos.w)

but this does not seem to work. Since light_vert_pos is only in post-projective coordinates (the matrix used to create it is the same matrix I use to create the depth buffer in the FBO), should I manually clamp the three x/y/z components to [0, 1]?

Comments (1)

单身狗的梦 2024-12-03 21:38:14

You don't say how you generated your depth values. So I'll assume you generated them by rendering triangles using a normal projection. That is, you transform the geometry to camera space, transform it to projection space, and let the rasterization pipeline handle things from there as normal.

In order to make shadow mapping work, your texture coordinates must match what the rasterizer did.

The output of a vertex shader is clip-space. From there, you get the perspective divide, followed by the viewport transform. The latter uses the values from glViewport and glDepthRange to compute the window-space XYZ. The window-space Z is the depth value written to the depth buffer.

Note that this is all during the depth pass: the generation of the depth values for the shadow map.

However, you can take some shortcuts. If your glViewport range was set to the same size as the texture (which is generally how it's done), then you can ignore the viewport transform. You will still need the glDepthRange you used in the depth pass.
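Concretely, that shortcut assumes the depth pass rendered with a viewport matching the shadow texture's dimensions, for example (shadow_width and shadow_height being placeholder names):

glViewport(0, 0, shadow_width, shadow_height);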

In your fragment shader, you can perform the perspective divide, which puts the coordinates in normalized device coordinate (NDC) space. That space is [-1, 1] in all directions. Your texture coordinates are [0, 1], so you need to divide the X and Y by two and add 0.5 to them:

vec3 ndc_space_values = light_vert_pos.xyz / light_vert_pos.w;
vec3 texCoords;
texCoords.xy = ndc_space_values.xy * 0.5 + 0.5;

To compute the Z value, you need to know the near and far values you use for glDepthRange.

texCoords.z = ((f-n) * 0.5) * ndc_space_values.z + ((n+f) * 0.5);

Where n and f are the glDepthRange near and far values. You can of course precompute some of these and pass them as uniforms. Or, if you use the default range of near=0 and far=1, you get

texCoords.z = ndc_space_values.z * 0.5 + 0.5;

Which looks familiar somehow.
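If you do precompute those factors and pass them as uniforms, a minimal sketch could look like this (the uniform names are placeholders):

// Set from the application to match the depth pass's glDepthRange:
// depth_range_scale = (f - n) * 0.5, depth_range_offset = (n + f) * 0.5
uniform float depth_range_scale;
uniform float depth_range_offset;

texCoords.z = depth_range_scale * ndc_space_values.z + depth_range_offset;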

Aside:

Since you defined your inputs with in rather than varying, you have to be using GLSL 1.30 or above. So why are you using texture2D (which is an old function) rather than texture?
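Putting it all together with the newer texture() call, a minimal fragment-shader sketch might look like the following (it assumes the default glDepthRange of near=0, far=1; names not taken from the question are placeholders):

#version 130

uniform sampler2DShadow shadowmap;
smooth in vec4 light_vert_pos;

float shadow_factor()
{
    // Perspective divide: clip space -> NDC, [-1, 1] on each axis.
    vec3 ndc_space_values = light_vert_pos.xyz / light_vert_pos.w;
    // Remap X, Y and Z to [0, 1] (valid for the default depth range).
    vec3 texCoords = ndc_space_values * 0.5 + 0.5;
    // With GL_TEXTURE_COMPARE_MODE enabled, texture() performs the
    // GL_LEQUAL comparison and returns 0.0 or 1.0 (or a filtered
    // value in between).
    return texture(shadowmap, texCoords);
}

The lighting contribution can then be multiplied by shadow_factor(), as originally intended.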
