Render the depth buffer to a texture
Quite new at shaders, so please bear with me if I am doing something silly here. :)
I am trying to render the depth buffer of a scene to a texture using OpenGL ES 2.0 on iOS, but I do not seem to get entirely accurate results unless the models have a relatively high density of polygons showing on the display.
So, for example, if I render a large plane consisting of only four vertices, I get very inaccurate results, but if I subdivide this plane the results get more accurate with each subdivision, and ultimately I get a correctly rendered depth buffer.
This reminds me a lot of affine versus perspective-correct texture mapping issues, and I guess I need to play around with the ".w" component somehow to fix this. But I thought the "varying" variables should take this into account already, so I am a bit at a loss here.
These are my vertex and fragment shaders:
[vert]
uniform mat4 uMVPMatrix;
attribute vec4 aPosition;
varying float objectDepth;

void main()
{
    gl_Position = uMVPMatrix * aPosition;
    objectDepth = gl_Position.z;
}

[frag]
precision mediump float;
varying float objectDepth;

void main()
{
    // Divide by scene clip range, set to a constant 200 here
    float grayscale = objectDepth / 200.0;
    gl_FragColor = vec4(grayscale, grayscale, grayscale, 1.0);
}
Please note that this shader is simplified a lot just to highlight the method I am using. Although to the naked eye it seems to work well in most cases, I am in fact rendering to 32-bit textures (by packing a float into ARGB), and I need very high accuracy for later processing or I get noticeable artifacts.
I can achieve pretty high precision by cranking up the polygon count, but that drives my framerate down a lot, so is there a better way?
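For reference, the "packing a float into ARGB" mentioned above is commonly done along these lines in GLSL ES; this is a sketch of one widespread variant (the exact bit layout and rounding handling differ between implementations), not necessarily the asker's code:

```glsl
// Pack a float in [0,1) into four 8-bit channels, and unpack it again.
// One common variant; exact constants and carry handling vary.
vec4 packFloatToRGBA(float v)
{
    vec4 enc = vec4(1.0, 255.0, 65025.0, 16581375.0) * v;
    enc = fract(enc);
    enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
    return enc;
}

float unpackRGBAToFloat(vec4 rgba)
{
    return dot(rgba, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
}
```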
2 Answers
You need to divide z by the w component.
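Applied to the shaders in the question, one way to do this (a sketch, untested on device) is to pass both clip-space z and w to the fragment shader and divide per fragment; dividing in the vertex shader would just reintroduce the linear-interpolation error:

```glsl
[vert]
uniform mat4 uMVPMatrix;
attribute vec4 aPosition;
varying vec2 vZW; // clip-space z and w

void main()
{
    gl_Position = uMVPMatrix * aPosition;
    vZW = gl_Position.zw;
}

[frag]
precision highp float; // mediump may not give enough depth precision
varying vec2 vZW;

void main()
{
    // Per-fragment perspective divide gives NDC depth in [-1,1];
    // remap to [0,1] before writing it out.
    float ndcZ = vZW.x / vZW.y;
    float grayscale = ndcZ * 0.5 + 0.5;
    gl_FragColor = vec4(grayscale, grayscale, grayscale, 1.0);
}
```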
This is very simple: depth is not linear, so you cannot use linear interpolation for z. You will solve it very easily if you interpolate 1/z instead of z. You can also do some math with w, exactly as rasmus suggested.
You can read more about coordinate interpolation at http://www.luki.webzdarma.cz/eng_05_en.htm (a page about implementing a simple software renderer).
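A related shortcut, if the goal is the hardware depth value itself: in GLSL ES, gl_FragCoord.z already holds the perspective-correct window-space depth in [0,1], so the varying can be dropped entirely. A sketch (untested on device; note it is non-linear in eye space, unlike the question's clip-range division):

```glsl
[frag]
precision highp float; // mediump may not give enough depth precision

void main()
{
    // gl_FragCoord.z is the interpolated window-space depth,
    // already perspective-correct and in [0,1].
    float grayscale = gl_FragCoord.z;
    gl_FragColor = vec4(grayscale, grayscale, grayscale, 1.0);
}
```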