Why is OpenGL brightening my scene when multisampling with an FBO?

Posted 2025-01-01 13:03:26


I just switched my OpenGL drawing code from drawing to the display directly to using an off-screen FBO with render buffers attached. The off-screen FBO is blitted to the screen correctly when I allocate normal render buffer storage.

However, when I enable multisampling on the render buffers (via glRenderbufferStorageMultisample), every color in the scene seems like it has been brightened (thus giving different colors than the non-multisampled part).

I suspect there's some glEnable option that I need to set to maintain the same colors, but I can't seem to find any mention of this problem elsewhere.

Any ideas?
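A frequent cause of this kind of uniform brightening after a framebuffer change is an sRGB mismatch: if colors that are already display-ready get pushed through the sRGB encoding a second time somewhere in the resolve or blit path, midtones rise noticeably. A minimal numeric sketch of that effect (plain Python, added here for illustration; it is not from the original question):

```python
def srgb_encode(linear):
    """Standard linear -> sRGB transfer function (per channel, [0, 1])."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1.0 / 2.4) - 0.055

# A mid-gray value that is *already* display-ready:
c = 0.5
double_encoded = srgb_encode(c)
print(round(double_encoded, 3))  # ~0.735 -- visibly brighter than 0.5
```

Whether this is what the driver is actually doing depends on the framebuffer formats involved, but it matches the symptom: every color brighter, dark values affected most.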


2 Answers

不弃不离 2025-01-08 13:03:26


I stumbled upon the same problem, caused by improper downsampling due to mismatched sample locations. What worked for me was:

  • A separate "single sample" FBO with identical attachments, format, and dimensions (with a texture or renderbuffer attached) to blit into for downsampling, then drawing/blitting that to the window buffer.
  • Rendering into a multisampled window buffer from a multisample texture with the same sample count as the input, passing all corresponding samples per fragment in a GLSL fragment shader. This worked with sample shading enabled, and it is the overkill approach suited to deferred shading, since you can compute lighting, shadows, AO, etc. per sample.
  • A rather sloppy manual downsample to single-sample framebuffers in GLSL, where I had to fetch each sample separately with texelFetch().

Things got really slow with multisampling. Although CSAA performed better than MSAA, when performance is an issue, or the fairly new extensions required (such as ARB_texture_multisample) are unavailable, I recommend taking a look at FXAA shaders for postprocessing as a solid alternative.

Accessing samples in GLSL:

// Average all samples of one texel from a multisample texture.
// Note: "sample" is a keyword in GLSL 4.00+, so the loop variable is renamed.
vec4 texelDownsampleAvg(sampler2DMS tex, ivec2 texelCoord, const int sampleCount)
{
    vec4 accum = texelFetch(tex, texelCoord, 0);
    for (int i = 1; i < sampleCount; ++i) {
        accum += texelFetch(tex, texelCoord, i);
    }
    return accum / float(sampleCount);
}
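A host-side model of that resolve (plain Python, standing in for the GLSL above) makes the averaging requirement explicit: dividing by the sample count leaves a uniform color unchanged, whereas merely summing the samples would scale every color by the sample count — exactly the kind of brightening described in the question.

```python
def texel_downsample_avg(samples):
    """Average all samples of one texel, mirroring texelDownsampleAvg."""
    accum = samples[0]
    for s in samples[1:]:
        accum += s
    return accum / len(samples)

# Four identical samples of a mid-gray texel resolve back to the same gray:
texel = [0.5, 0.5, 0.5, 0.5]
print(texel_downsample_avg(texel))  # 0.5 -- a flat color stays flat
```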

Relevant issues from the framebuffer blit extension specification:

11) Should blits be allowed between buffers of different bit sizes?

Resolved: Yes, for color buffers only.  Attempting to blit
between depth or stencil buffers of different size generates
INVALID_OPERATION.

13) How should BlitFramebuffer color space conversion be
specified? Do we allow context clamp state to affect the
blit?

Resolved: Blitting to a fixed point buffer always clamps,
blitting to a floating point buffer never clamps.  The context
state is ignored.
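The resolution to issue 13 can be paraphrased numerically (a Python sketch for illustration, not text from the spec): a blit into a fixed-point destination clamps each channel to [0, 1] regardless of the context's clamp state, while a floating-point destination receives the value untouched.

```python
def blit_channel(value, dest_is_fixed_point):
    """Per the spec resolution: clamp only for fixed-point destinations."""
    if dest_is_fixed_point:
        return min(max(value, 0.0), 1.0)
    return value

print(blit_channel(1.5, True))   # 1.0 -- clamped into a fixed-point buffer
print(blit_channel(1.5, False))  # 1.5 -- preserved in a float buffer
```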
活雷疯 2025-01-08 13:03:26


The solution that worked for me was changing the renderbuffer color format. I picked GL_RGBA32F and GL_DEPTH_COMPONENT32F (figuring that I wanted the highest precision), and the NVIDIA drivers interpret that differently (I suspect sRGB compensation, but I could be wrong).

The renderbuffer image formats I found to work are GL_RGBA8 with GL_DEPTH_COMPONENT24.
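The storage difference between the two format choices can be illustrated numerically (a Python sketch; the brightening the answer saw is driver behavior, not quantization): GL_RGBA8 stores each channel in 8 bits, so values round-trip through 256 discrete levels, while a 32-bit float channel keeps the value essentially exact.

```python
def rgba8_roundtrip(value):
    """Quantize a [0, 1] channel value to 8 bits and back, as GL_RGBA8 stores it."""
    return round(value * 255) / 255

print(rgba8_roundtrip(0.5))  # ~0.502 -- snapped to the nearest of 256 levels
print(rgba8_roundtrip(1.0))  # 1.0
```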
