How to write to the OpenGL depth buffer
I'm trying to implement an old-school technique where a rendered background image AND preset depth information are used to occlude other objects in the scene.
So for instance, if you have a picture of a room with some wires hanging from the ceiling in the foreground, those wires are given a shallow depth value in the depth map, which, when rendered correctly, allows the character to walk "behind" the wires but in front of other objects in the room.
So far I've tried creating a depth texture using:
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, Image.GetWidth(), Image.GetHeight(), 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, pixels);
Then just binding it to a quad and rendering that over the screen, but it doesn't write the depth values from the texture.
I've also tried:
glDrawPixels(Image.GetWidth(), Image.GetHeight(), GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, pixels);
But this slows down my framerate to about 0.25 fps...
I know that you can do this in a pixel shader by setting gl_FragDepth to a value from the texture, but I wanted to know if I could achieve this on hardware without pixel shader support?
3 Answers
After trying to get a depth texture to work, I then tried the ARB_fragment_program extension to see if I could write a very simple (and therefore widely compatible with old hardware) GPU assembly fragment shader to do the trick, but I eventually just decided to use GLSL, as the ARB_fragment_program extension seems to be deprecated, or at least its use is frowned upon nowadays.
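For reference, the gl_FragDepth technique only needs a few lines of GLSL. This is just a rough sketch in old-style GLSL 1.10, not the exact shader from the answer above; the uniform names backgroundColor and backgroundDepth are placeholders, and it assumes the depth texture already holds window-space values in the 0..1 range:

// Sketch: write the background color and copy the stored depth into the depth buffer.
// "backgroundColor" and "backgroundDepth" are placeholder uniform names.
uniform sampler2D backgroundColor;  // pre-rendered room image
uniform sampler2D backgroundDepth;  // matching depth map, values in 0..1

void main()
{
    gl_FragColor = texture2D(backgroundColor, gl_TexCoord[0].st);
    gl_FragDepth = texture2D(backgroundDepth, gl_TexCoord[0].st).r;
}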
您可以尝试使用帧缓冲区对象来执行您想要的操作。
创建 fbo1 并附加颜色和深度缓冲区。
渲染或设置背景场景。
创建 fbo2 并附加颜色和深度缓冲区。
在每次更新时使用帧缓冲区 blit 将颜色和深度缓冲区从 fbo1 传输到 fbo2。
将其余部分渲染到 fbo2
将 fbo2 颜色缓冲区全屏传输到主显示缓冲区。
You can try using framebuffer objects (FBOs) to do what you want (a rough sketch follows the steps):
Create fbo1 and attach a color and a depth buffer.
Render or set the scene for the background.
Create fbo2 and attach a color and a depth buffer.
At each update, use a framebuffer blit to copy the color and depth buffers from fbo1 to fbo2.
Render the rest of the scene into fbo2.
Do a full-screen blit of fbo2's color buffer to the main display buffer.
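A rough sketch of those steps, assuming GL 3.0 (or ARB_framebuffer_object) is available; width, height, DrawBackground and DrawDynamicObjects are placeholders for your own code:

/* Set up two FBOs with color + depth renderbuffer attachments. */
GLuint fbo[2], colorRb[2], depthRb[2];
glGenFramebuffers(2, fbo);
glGenRenderbuffers(2, colorRb);
glGenRenderbuffers(2, depthRb);

for (int i = 0; i < 2; ++i) {
    glBindRenderbuffer(GL_RENDERBUFFER, colorRb[i]);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb[i]);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

    glBindFramebuffer(GL_FRAMEBUFFER, fbo[i]);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, colorRb[i]);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRb[i]);
}

/* Once: fill fbo[0] with the background color and depth. */
glBindFramebuffer(GL_FRAMEBUFFER, fbo[0]);
DrawBackground();                       /* placeholder for your own code */

/* Every frame: copy color + depth from fbo[0] into fbo[1], then draw the rest there. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo[0]);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo[1]);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT, GL_NEAREST);

glBindFramebuffer(GL_FRAMEBUFFER, fbo[1]);
DrawDynamicObjects();                   /* placeholder; depth-tested against the blitted depth */

/* Present: blit fbo[1]'s color buffer to the default framebuffer. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo[1]);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);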
Disable writing to the color buffer with glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE), then render polygons representing your "in-front" image with an appropriate Z-coordinate.
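A minimal fixed-function sketch of that depth-only pass, which also works on hardware without pixel shaders; x0, y0, x1, y1 and wireDepth are placeholders for whatever geometry approximates the "in-front" parts of the image:

/* Depth-only pass for the occluder geometry (fixed-function, no shaders). */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  /* keep the background pixels */
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);                                 /* but do write depth */

glBegin(GL_QUADS);                                    /* polygon approximating the wires */
glVertex3f(x0, y0, wireDepth);
glVertex3f(x1, y0, wireDepth);
glVertex3f(x1, y1, wireDepth);
glVertex3f(x0, y1, wireDepth);
glEnd();

glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);      /* restore color writes */
/* Characters rendered afterwards fail the depth test wherever this geometry was drawn. */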