Is extracting the binary from a GLSL shader program a standard, supported operation? If so, how do we build glad.c to support it?

Posted 2025-02-07 10:06:40


We have been working on an OpenGL program where glad was built two summers ago, working on Linux and Windows on hardware such as an NVIDIA 2060 under Ubuntu 20.04 LTS, Intel on Windows and Ubuntu, a GeForce 940MX, and others.

On Linux the driver I personally am using is nouveau on this laptop.

*-display                 
   description: VGA compatible controller
   product: HD Graphics 620
   vendor: Intel Corporation
   physical id: 2
   bus info: pci@0000:00:02.0
   version: 02
   width: 64 bits
   clock: 33MHz
   capabilities: pciexpress msi pm vga_controller bus_master cap_list rom
   configuration: driver=i915 latency=0
   resources: irq:129 memory:a2000000-a2ffffff memory:b0000000-bfffffff ioport:4000(size=64) memory:c0000-dffff

  *-display
   description: 3D controller
   product: GM108M [GeForce 940MX]
   vendor: NVIDIA Corporation
   physical id: 0
   bus info: pci@0000:01:00.0
   version: a2
   width: 64 bits
   clock: 33MHz
   capabilities: pm msi pciexpress bus_master cap_list rom
   configuration: driver=nouveau latency=0
   resources: irq:131 memory:a3000000-a3ffffff memory:90000000-9fffffff memory:a0000000-a1ffffff ioport:3000(size=128)

In a previous question, I asked why we were getting a segfault when trying to get the binaries from a shader program. The fragmentary answer given was that perhaps glad.c was built wrong.

This isn't, in my view, an acceptable answer, but perhaps I need to construct a better question.
Is there any way to debug OpenGL segfaulting when extracting code from binary shader

  1. Is extracting a binary from shader programs a standard feature that will work on all modern OpenGL and drivers? Let's say Windows/Intel, Windows/NVIDIA, Linux/Intel, Linux/NVIDIA Nouveau, and/or Linux/NVIDIA with an NVIDIA driver.

  2. If it doesn't work on some platforms, what is the clean programmatic way to test for this? How do I tell if the feature is not supported so I can dynamically disable it if it does not exist?

  3. If we have generated glad.c incorrectly, and that is the reason the feature is not working, how do I generate it correctly? I just went to glad.dav1d.de, selected OpenGL 4.6 core and generated. Is that right? If not, what do I do?


Comments (1)

长伴 2025-02-14 10:06:40

  1. Is extracting a binary from shader programs a standard feature that will work on all modern OpenGL and drivers? Let's say Windows/Intel, Windows/NVIDIA, Linux/Intel, Linux/NVIDIA Nouveau, and/or Linux/NVIDIA with an NVIDIA driver.

Retrieving the binary representation of a compiled shader program is specified in the ARB_get_program_binary OpenGL extension. This feature has also been available in core OpenGL since version 4.1. This means you can use it if any of the following is true:

  • The GL context you're using has at least version 4.1.
  • The GL implementation you are using advertises the availability of this feature (on this context) by including GL_ARB_get_program_binary in the GL extension string.

Every reasonably modern GPU should support GL 4.1, so this feature should be widely available. However, some implementations may support OpenGL 4.x only in core profile. If you work with compatibility or legacy profiles, you may be out of luck.

  2. If it doesn't work on some platforms, what is the clean programmatic way to test for this? How do I tell if the feature is not supported so I can dynamically disable it if it does not exist?

This is one of the main points of having an extension mechanism at all. Since you use the glad GL loader, this can be done via glad quite easily. After you have created the context and initialized glad, you can query the availability of this feature at runtime:

if (GLAD_GL_VERSION_4_1 || GLAD_GL_ARB_get_program_binary) {
  // feature is available...
}

Since core OpenGL and the extension specify exactly the same function and enum names without any extension suffix, you can just use these functions no matter whether they were acquired via the core OpenGL feature set or the extension.

Please note that it matters how you create the context, and which version you request when you create it. If you ask for a context below version 4.1, you might not get one even if the implementation technically would support that version. Typically, the extension would still be available in that case, but that isn't a requirement.

  3. If we have generated glad.c incorrectly, and that is the reason the feature is not working, how do I generate it correctly? I just went to glad.dav1d.de, selected OpenGL 4.6 core and generated. Is that right? If not, what do I do?

The only requirement for the above code to work is that you generated the glad loader for at least OpenGL 4.1 and for the GL_ARB_get_program_binary extension. If you generated for 4.6 and left out the extension, then glad will never look for that extension and GLAD_GL_ARB_get_program_binary will not be defined. You will then miss out on the ability to use the extension when working with GL contexts < 4.1, even if your GL implementation would support it.
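
For reference, this is an assumed invocation of the glad v1 command-line generator (installable via pip), which exposes the same options as the web service at glad.dav1d.de; the key point is listing the extension explicitly so the loader defines its runtime flag:

```shell
pip install glad
# Generate a C loader for core GL 4.6 that ALSO tracks the extension,
# so GLAD_GL_ARB_get_program_binary is defined at runtime.
python -m glad --generator=c --api="gl=4.6" --profile=core \
       --extensions="GL_ARB_get_program_binary" --out-path=glad
```

On the web generator, the equivalent is adding GL_ARB_get_program_binary in the extensions box rather than leaving it empty.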
