Setting up multiple render targets in OpenGL
I've seen a lot of material on this subject, but there are some differences between the examples I've found and I'm having a hard time getting a solid understanding of the correct process. Hopefully someone can tell me if I'm on the right track. I should also mention I'm doing this on OS X Snow Leopard and the latest version of Xcode 3.
For the sake of example, let's say that I want to write to two targets, one for normal and one for color. To do this I create one framebuffer and bind two textures to it, as well as a depth texture:
glGenFramebuffersEXT(1, &mFBO);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, mFBO);
glGenTextures(1, &mTexColor);
glBindTexture(GL_TEXTURE_2D, mTexColor);
//<texture params>
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, mTexColor, 0);
glGenTextures(1, &mTexNormal);
glBindTexture(GL_TEXTURE_2D, mTexNormal);
//<Texture params>
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT1_EXT, GL_TEXTURE_2D, mTexNormal, 0);
glGenTextures(1, &mTexDepth);
glBindTexture(GL_TEXTURE_2D, mTexDepth);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, w, h, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, mTexDepth, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
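As an aside on the //<texture params> placeholders above: the post elides the actual parameters, but one typical choice for render-target textures looks like the following (purely an illustration, not necessarily what the original code used):
// Illustrative texture parameters for a render-target texture (not from the original post):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);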
Before rendering, I would bind the framebuffer again and then do:
GLenum buffers[] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT };
glDrawBuffers(2, buffers);
This would mean further draw calls would draw to my framebuffer. (I think?)
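Put together, the per-frame pass would be a sketch like this, assuming the mFBO/w/h names from the setup above; the viewport and clear lines are assumptions about the surrounding render loop:
// Sketch of one MRT pass using the objects created above.
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, mFBO);

GLenum buffers[] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT };
glDrawBuffers(2, buffers);

glViewport(0, 0, w, h);                              // match the attachment size
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // clears every draw buffer plus the depth attachment

// ... bind the MRT shader and draw the scene ...

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);         // back to the default framebuffer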
I'd then set my shaders and draw the scene. In my vertex shader I would process normals/positions/colors as usual, and pass the data to the fragment shader. The fragment shader would then do something like:
gl_FragData[0] = OutputColor;
gl_FragData[1] = OutputNormal;
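For context, a complete if minimal fragment shader writing both targets could look like the sketch below; only the gl_FragData lines come from the post, while the varying names and the normal packing are assumptions:
// Hypothetical GLSL 1.20 fragment shader for the two attachments above.
varying vec3 vNormal;   // assumed to be passed from the vertex shader
varying vec4 vColor;    // assumed to be passed from the vertex shader

void main()
{
    vec4 OutputColor  = vColor;
    // Remap the normal from [-1,1] to [0,1] so it survives an RGBA8 target.
    vec4 OutputNormal = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);

    gl_FragData[0] = OutputColor;   // written to GL_COLOR_ATTACHMENT0_EXT
    gl_FragData[1] = OutputNormal;  // written to GL_COLOR_ATTACHMENT1_EXT
}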
At this point, I should have two textures; one with colors from all the rendered objects and one with normals. Is all of this correct? I should now be able to use those textures like any other, say rendering them to a fullscreen quad, right?
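Using them in a later fullscreen-quad pass would be ordinary texture binding; a sketch, where quadProgram and the sampler uniform names are hypothetical:
// Sketch: binding both render targets for a second pass (program and uniform names are hypothetical).
glUseProgram(quadProgram);

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, mTexColor);
glUniform1i(glGetUniformLocation(quadProgram, "uColorTex"), 0);

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, mTexNormal);
glUniform1i(glGetUniformLocation(quadProgram, "uNormalTex"), 1);

// ... draw the fullscreen quad ...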
Sounds and looks reasonable. This is indeed the common way to do it. If you don't need the depth data as texture for further processing, you can also use a renderbuffer for an attachment, but a texture should also work fine.
You can also use glCheckFramebufferStatusEXT after all the setup is done, to see if the framebuffer is valid in its current configuration, but your code looks fine. If you don't have a problem and this was just for assurance, then rest assured that you're on the right track, otherwise tell us what's wrong.
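A minimal sketch of both suggestions, using the same EXT entry points as the question; the mDepthRB handle and the error handling are assumptions:
// Alternative depth attachment: a renderbuffer, if the depth values
// never need to be sampled as a texture later.
GLuint mDepthRB;
glGenRenderbuffersEXT(1, &mDepthRB);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, mDepthRB);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, w, h);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                             GL_RENDERBUFFER_EXT, mDepthRB);

// Completeness check, done while the FBO is still bound.
GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if (status != GL_FRAMEBUFFER_COMPLETE_EXT) {
    fprintf(stderr, "Framebuffer incomplete: 0x%x\n", status);
}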