Using shaders for computation
Is it possible to use a shader to calculate some values and then return them for further use?
For example, could I send a mesh down to the GPU along with some parameters describing how it should be modified (changing the positions of vertices), and get the resulting mesh back? That seems rather impossible to me, because I haven't seen any variable for communication from shaders back to the CPU. I'm using GLSL, so there are just uniforms, attributes, and varyings. Should I use an attribute or a uniform, and would they still be valid after rendering? Can I change the values of those variables and read them back on the CPU? There are methods for mapping data on the GPU, but would that data be changed and still valid?
This is the way I'm thinking about it, though there could be another way that is unknown to me. I would be glad if someone could explain this to me, as I've just read some books about GLSL and now I would like to program more complex shaders, and I don't want to rely on approaches that are impossible at this time.
Thanks
4 Answers
My best guess would be to point you to BehaveRT, which is a library created to harness GPUs for behavioral models. If you can formulate your modifications within the library, you could benefit from its abstraction.
As for passing data back and forth between your CPU and GPU, I'll let you browse the documentation; I'm not sure about that part.
Great question! Welcome to the brave new world of General-Purpose computing on Graphics Processing Units (GPGPU).
What you want to do is possible with pixel shaders. You load a texture (that is, your data), apply a shader (to perform the desired computation), and then use render-to-texture to pass the resulting data from the GPU back to main memory (RAM).
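As a rough sketch of that pattern: you draw a full-screen quad so the fragment shader runs once per texel of the data texture, and the framebuffer it writes into is itself a texture that the CPU can later read back (e.g. with glReadPixels). The uniform names and the "computation" below are made up for illustration:

```glsl
// GPGPU pass as a fragment shader: each fragment corresponds to
// one cell of the data grid stored in the input texture.
uniform sampler2D dataTex; // input data, one value per texel (hypothetical name)
uniform float scale;       // example parameter controlling the computation

void main()
{
    // Fetch this cell's input value (fixed-function texcoords from a
    // full-screen quad, in the uniform/attribute/varying-era GLSL
    // the question is about).
    vec4 value = texture2D(dataTex, gl_TexCoord[0].st);

    // The "computation" — here just a scaling, as a stand-in for
    // whatever per-element work you actually need.
    gl_FragColor = value * scale;
}
```

The result lands in the texture attached to the framebuffer, and reading that texture back on the CPU completes the round trip.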
There are tools created for this purpose, most notably OpenCL and CUDA. They greatly aid GPGPU, so that this sort of programming looks almost like CPU programming.
They do not require any 3D graphics experience (although it is still helpful :) ). You don't need to do tricks with textures; you just load arrays into GPU memory. Processing algorithms are written in a slightly modified version of C, and the latest versions of CUDA support C++.
I recommend starting with CUDA, since it is the most mature one: http://www.nvidia.com/object/cuda_home_new.html
This is easily possible on modern graphics cards using OpenCL, Microsoft DirectCompute (part of DirectX 11), or CUDA. The languages are shader-like (DirectCompute uses HLSL compute shaders; OpenCL and CUDA use C-like kernel languages). The first two work on both Nvidia and ATI graphics cards; CUDA is Nvidia-exclusive.
These are special libraries for computing things on the graphics card. I wouldn't use a normal 3D API for this, although it is possible with some workarounds.
Nowadays you can also use shader storage buffer objects (SSBOs) in OpenGL to write values from a shader that can then be read back on the host.
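For illustration, here is a minimal compute shader (GLSL 4.3+) writing into an SSBO; the block and variable names are made up. On the host side you would bind a buffer to binding point 0, dispatch the shader, issue a memory barrier, and then read the buffer back (e.g. with glGetBufferSubData or glMapBufferRange):

```glsl
#version 430

// One invocation per element of the output array.
layout(local_size_x = 64) in;

// SSBO, bound from the host via
// glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, buf).
layout(std430, binding = 0) buffer Result {
    float values[];
};

void main()
{
    uint i = gl_GlobalInvocationID.x;
    // Write a computed value; after glDispatchCompute and
    // glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT), the host can
    // read this array back from the buffer.
    values[i] = float(i) * 2.0;
}
```

The same SSBO mechanism also works from vertex or fragment shaders, so it directly answers the original question: the shader can write arbitrary values into a buffer the CPU can read.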