What are the concepts of Framebuffer and Renderbuffer in OpenGL, and what is the difference between them?
I'm confused about the concepts of Framebuffer and Renderbuffer. I know that they're required for rendering, but I want to understand them before using them.
I know that some bitmap buffer is required to store the temporary drawing result (the back buffer), and that another buffer is shown on screen while that drawing is in progress (the front buffer).
Then the two are flipped and drawing starts again. I understand this concept, but I find it hard to connect these objects to it.
What are their concepts, and what are the differences between them?
Comments (2)
The Framebuffer object is not actually a buffer, but an aggregator object that contains one or more attachments, which in turn are the actual buffers. You can think of the Framebuffer as a C structure in which every member is a pointer to a buffer. Without any attachments, a Framebuffer object has a very small footprint.
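As a minimal sketch of that idea (assuming a valid OpenGL 3.0+ context and a loader such as GLEW; the helper name create_empty_fbo is just mine), creating a Framebuffer allocates only the aggregator, with no pixel storage of its own:

```c
#include <GL/glew.h>   /* assumption: GLEW (or another loader) and a valid GL 3.0+ context exist */

/* Create an empty Framebuffer object: just the aggregator, no pixel storage of its own. */
static GLuint create_empty_fbo(void)
{
    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    /* Without attachments the FBO is "incomplete" and cannot be rendered into yet. */
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        /* expected at this point: nothing is attached */
    }
    return fbo;
}
```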
Now each buffer attached to a Framebuffer can be a Renderbuffer or a texture.
The Renderbuffer is an actual buffer (an array of bytes, integers, or pixels). A Renderbuffer stores pixel values in a native format, so it's optimized for offscreen rendering. In other words, drawing to a Renderbuffer can be much faster than drawing to a texture. The drawback is that the pixels use a native, implementation-dependent format, so reading from a Renderbuffer is much harder than reading from a texture. Nevertheless, once a Renderbuffer has been painted, its contents can be copied directly to the screen (or, I guess, to another Renderbuffer) very quickly using pixel transfer operations. This means that a Renderbuffer can be used to efficiently implement the double-buffer pattern that you mentioned.
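Continuing the sketch above (same assumptions; draw_offscreen_and_blit is again only an illustrative name), one could attach a color Renderbuffer, draw offscreen, and then use a pixel transfer operation to copy the result into the default framebuffer, i.e. the window:

```c
/* Attach a color Renderbuffer to the FBO, draw offscreen, then blit the result to the window. */
static void draw_offscreen_and_blit(GLuint fbo, int width, int height)
{
    GLuint colorRbo = 0;
    glGenRenderbuffers(1, &colorRbo);
    glBindRenderbuffer(GL_RENDERBUFFER, colorRbo);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);  /* real pixel storage, native format */

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, colorRbo);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
        /* ... render the scene offscreen here ... */

        /* Pixel transfer: copy the offscreen result into the default framebuffer (the window). */
        glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
        glBlitFramebuffer(0, 0, width, height,
                          0, 0, width, height,
                          GL_COLOR_BUFFER_BIT, GL_NEAREST);
    }
}
```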
Renderbuffers are a relatively new concept. Before they existed, a Framebuffer was used to render into a texture, which can be slower because a texture uses a standard format. It is still possible to render to a texture, and that is quite useful when one needs to perform multiple passes over each pixel to build a scene, or to draw one scene on a surface within another scene!
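And here is a rough sketch of that alternative, attaching a texture instead of a Renderbuffer so that a later pass can sample what was rendered (attach_color_texture is again only an illustrative name):

```c
/* Alternative: attach a texture instead of a Renderbuffer, so a later pass can sample the result. */
static GLuint attach_color_texture(GLuint fbo, int width, int height)
{
    GLuint colorTex = 0;
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);   /* standard format, no initial data */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);

    /* Render the first pass into the FBO, then bind colorTex as an ordinary
       texture and sample it while drawing the second pass. */
    return colorTex;
}
```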
The OpenGL wiki has this page that shows more details and links.
This page has some details which I think explain the difference quite nicely. Firstly:
Whereas: