Looking for tips on debugging strange point-light shadow artifacts

Posted 2025-02-10 16:12:48

This has had me stuck for a while, so I'm hoping someone can provide some wisdom (or at least some tips on how to figure out what the heck is going on!).

I have a renderer that supports DX11, DX12 and Vulkan, that supports some (pretty basic, just simple cubemap depth) shadows for point lights. At lower frame rates, these work just fine, but at crazy high frame rates and with the lights moving rapidly, I'm getting shadowing artifacts where it appears as if the shadow distance is off slightly compared to the light position. That's just a guess though, since if I pause, or do a frame capture in (for example) RenderDoc, the artifacts disappear. It's not been possible to grab a frame with the artifacts to debug, but I did manage to grab a screenshot. This only occurs when the light is moving away from the planar surface.

(screenshot showing the shadow artifact)

This only manifests with DX12 and Vulkan, but it's identical in both.

I fixed an issue a few weeks back where updates were out of order and the shadow generation was a frame behind the main rendering; this was pretty easy to repro and debug. This new case, not so much! Given the inability to repro when capturing, etc, it's been tough to track down.

I'd generally not be too concerned, since when rendering anything useful there are no issues because the frame rate is lower, but I'm worried that what I'm seeing is the result of something more systemic that'll bite me later.

Edit: So far, I've tried:

  • RenderDoc (unable to grab a capture manifesting this issue - either by queueing captures or hitting F12 when it manifests)

  • Debug visualizations - they show the issue, but as soon as I pause or perform a capture, the issue disappears.

  • Tracy profiler to show timings of when constants are updated and when tasks are scheduled. The updates and the consumption of the data are nowhere near overlapping (see the instrumentation sketch after this list).

  • Putting small pauses in the main application loop. This fixes the issue (even a 0.5 millisecond pause on the main thread, which drops the frame rate from ~800 fps to ~600).
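For context on the Tracy point above, this is roughly how the relevant stages could be instrumented; a minimal sketch assuming Tracy's C++ client API (ZoneScopedN, FrameMark), with hypothetical function names rather than the actual renderer code:

```cpp
// Minimal Tracy instrumentation sketch (function names are hypothetical).
// Each ZoneScopedN shows up as a span in the Tracy timeline, so any overlap
// between updating the light constants and consuming them would be visible.
#include <tracy/Tracy.hpp>

void UpdateLightConstants()
{
    ZoneScopedN("UpdateLightConstants");  // CPU-side write of the per-frame light data
    // ... copy light position/range into the uniform buffer ...
}

void RenderPointShadows()
{
    ZoneScopedN("RenderPointShadows");    // records the cubemap depth pass
    // ... build/submit the shadow rendering work ...
}

void Frame()
{
    UpdateLightConstants();
    RenderPointShadows();
    FrameMark;                            // marks the frame boundary in the capture
}
```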

Comments (1)

廻憶裏菂餘溫 2025-02-17 16:12:48

I finally figured this out, and, not surprisingly, it was a really silly mistake. But despite being rather dumb, it was also a fairly gnarly one to figure out, since there were no Vulkan validation errors/warnings, it was occurring both in DX12 and Vulkan, and was highly dependent on timing.

I figured I should post the solution, since if anyone else happens to make this same mistake (or something similar), and stumbles on this question, I could save them some pain.

So, here it is. My Vulkan uniform buffer implementation tries to be smart, and creates a queue of buffers that are tagged with the last frame they were consumed by rendering work. If there's an update to a uniform buffer, I check the front (oldest) element in the queue, and if its frame is older than my frame buffering count, I'll re-use it (overwriting the uniform buffer data) for the next frame, and toss that buffer to the back of the queue.
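For anyone trying to picture that scheme, here's a minimal sketch of the idea (my own types and names, not the actual implementation, and simplified to tag a buffer when it's handed out rather than when the GPU finishes with it):

```cpp
#include <cstdint>
#include <deque>

struct UniformBuffer
{
    uint64_t lastUsedFrame = 0;  // frame index this buffer was last handed to rendering work
    // ... API-specific buffer handle, mapped pointer, etc. ...
};

class UniformBufferQueue
{
public:
    // Returns a buffer that's safe to overwrite for 'currentFrame'.
    UniformBuffer* AcquireForUpdate(uint64_t currentFrame)
    {
        UniformBuffer* buf;
        if (!m_queue.empty() &&
            currentFrame - m_queue.front()->lastUsedFrame >= m_frameBufferingCount)
        {
            // Oldest buffer is old enough that the GPU should be done with it: reuse it.
            buf = m_queue.front();
            m_queue.pop_front();
        }
        else
        {
            buf = new UniformBuffer();  // otherwise allocate a fresh buffer
        }
        buf->lastUsedFrame = currentFrame;
        m_queue.push_back(buf);         // newest buffer goes to the back of the queue
        return buf;
    }

private:
    std::deque<UniformBuffer*> m_queue;  // front = oldest, back = newest
    uint32_t m_frameBufferingCount = 2;  // frames in flight (e.g. double buffering)
};
```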

However, I had an off-by-one error in that calculation (for both DX12 and Vulkan) due to the fact that there's some overlap between frame workloads, and that meant that, depending on timing, I'd be overwriting a uniform buffer while the GPU was still using it, so the GPU would be calculating point shadows with the next frame's light data. So, literally fixing that off-by-one, and adding an extra frame of buffering, fixed this annoying (random) issue.
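In terms of the sketch above, the off-by-one boils down to the age test being one frame too permissive; one way to express the fix (again my own illustration, not the actual code) is:

```cpp
#include <cstdint>

// Buggy: treats a buffer as free once it is frameBufferingCount frames old, even though
// adjacent frames' GPU workloads overlap, so the previous frame may still be reading it.
bool CanReuseBuggy(uint64_t currentFrame, uint64_t lastUsedFrame, uint32_t frameBufferingCount)
{
    return currentFrame - lastUsedFrame >= frameBufferingCount;
}

// Fixed: require one extra frame of buffering before the data is overwritten.
bool CanReuseFixed(uint64_t currentFrame, uint64_t lastUsedFrame, uint32_t frameBufferingCount)
{
    return currentFrame - lastUsedFrame >= frameBufferingCount + 1;
}
```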

Side note: always, always run regularly with the Vulkan validation layers enabled. Whilst occasionally cryptic, it's incredibly thorough and (outside of this case) has helped me catch numerous issues that would have been a nightmare to track down otherwise.
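For reference, the layer in question is VK_LAYER_KHRONOS_validation; a minimal sketch of enabling it at instance creation (debug builds only, error handling omitted) looks something like this:

```cpp
#include <vulkan/vulkan.h>

VkInstance CreateInstanceWithValidation()
{
    // Request the Khronos validation layer so API misuse is reported at runtime.
    const char* layers[] = { "VK_LAYER_KHRONOS_validation" };

    VkApplicationInfo appInfo{};
    appInfo.sType      = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    appInfo.apiVersion = VK_API_VERSION_1_2;

    VkInstanceCreateInfo createInfo{};
    createInfo.sType               = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    createInfo.pApplicationInfo    = &appInfo;
    createInfo.enabledLayerCount   = 1;
    createInfo.ppEnabledLayerNames = layers;

    VkInstance instance = VK_NULL_HANDLE;
    vkCreateInstance(&createInfo, nullptr, &instance);  // check the VkResult in real code
    return instance;
}
```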
