How do I correctly handle the range of depth data obtained from render-to-texture?
I'm doing some customized 2D rendering to a depth buffer that is attached to a texture with internal format GL_DEPTH_STENCIL. In the fragment shader, the normalized Z value (only 0.0 to 1.0 is used; I'm lazy) is written explicitly from some process:
in float some_value;
uniform float max_dist;

void main()
{
    // Normalize the computed distance into [0.0, 1.0] and write it
    // directly as this fragment's window-space depth.
    float dist = some_process( some_value );
    gl_FragDepth = clamp( dist / max_dist, 0.0, 1.0 );
}
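
For context, my render-target setup is roughly the following. This is a minimal sketch rather than my exact code: I'm showing the sized GL_DEPTH24_STENCIL8 internal format (the sized equivalent of GL_DEPTH_STENCIL), and width/height are placeholders.

    GLuint tex, fbo;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* 24-bit depth + 8-bit stencil; width/height are placeholders */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0,
                 GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL);
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                           GL_TEXTURE_2D, tex, 0);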
Now I need to perform further processing on the resulting bitmap on the CPU side. However, glGetTexImage gives you the data in GL_UNSIGNED_INT_24_8 binary format for a depth-stencil texture. What should I do with the 24-bit depth component? How does the normalized floating-point Z value of [-1.0, 1.0] map to the 24-bit integer?
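
To make the question concrete, here is a sketch of the readback and unpacking I have in mind. I'm assuming GL_UNSIGNED_INT_24_8 packs depth in the upper 24 bits and stencil in the lower 8, with depth stored as unsigned normalized fixed point; please correct me if that assumption is wrong. read_depth is a hypothetical helper of mine, and tex/width/height come from the setup above.

    #include <stdlib.h>

    /* Sketch only: read back the packed depth-stencil texels and
     * unpack the 24-bit depth component. Assumes an active GL
     * context. */
    static void read_depth(GLuint tex, int width, int height,
                           float *out_depth)
    {
        GLuint *packed =
            (GLuint *)malloc((size_t)width * height * sizeof(GLuint));
        glBindTexture(GL_TEXTURE_2D, tex);
        glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_STENCIL,
                      GL_UNSIGNED_INT_24_8, packed);

        for (int i = 0; i < width * height; ++i) {
            GLuint d24 = packed[i] >> 8;          /* upper 24 bits: depth */
            /* GLubyte s = packed[i] & 0xFFu; */  /* lower 8 bits: stencil */
            /* Unsigned normalized: d24 / (2^24 - 1) should recover the
             * [0, 1] value written to gl_FragDepth -- is this the
             * right mapping? */
            out_depth[i] = (float)d24 / 16777215.0f; /* 0xFFFFFF */
        }
        free(packed);
    }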