Creating view matrices in a GLSL shader
I have many positions and directions stored in 1D textures on the GPU. I want to use those as render sources in a GLSL geometry shader. To do this, I need to create corresponding view matrices from those textures.
My first thought is to take a detour to the CPU, read the textures to memory and create a bunch of view matrices from there, with something like glm::lookAt(). Then send the matrices as uniform variables to the shader.
My question is whether it is possible to skip this detour and instead create the view matrices directly in the GLSL geometry shader. Also, is this feasible performance-wise?
3 Answers
Nobody says (or nobody should say) that your view matrix has to come from the CPU through a uniform. You can generate the view matrix from the vectors in your texture right inside the shader. Maybe the implementation of the good old gluLookAt is of help to you there.
Whether this approach is a good idea performance-wise is another question, but if this texture is quite large or changes frequently, it might be better than reading the data back to the CPU.
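A minimal sketch of what that in-shader construction could look like, assuming the positions and directions sit in two 1D floating-point textures and a fixed world up vector (the sampler names uPosTex/uDirTex and the up vector are assumptions, not something from the question):

#version 330

// Assumed 1D textures: eye positions and (normalized) view directions.
uniform sampler1D uPosTex;
uniform sampler1D uDirTex;

// Rebuilds a view matrix the way gluLookAt does: an orthonormal
// basis around the view direction, combined with a translation by -eye.
mat4 lookAtFromTextures(int i)
{
    vec3 eye = texelFetch(uPosTex, i, 0).xyz;
    vec3 f   = normalize(texelFetch(uDirTex, i, 0).xyz); // forward
    vec3 up  = vec3(0.0, 1.0, 0.0);                      // assumed world up

    vec3 s = normalize(cross(f, up)); // side (right) vector
    vec3 u = cross(s, f);             // recomputed orthogonal up

    // mat4 takes columns: this is the rotation with rows s, u, -f,
    // already multiplied with the translation by -eye.
    return mat4(vec4( s.x,  u.x, -f.x, 0.0),
                vec4( s.y,  u.y, -f.y, 0.0),
                vec4( s.z,  u.z, -f.z, 0.0),
                vec4(-dot(s, eye), -dot(u, eye), dot(f, eye), 1.0));
}

texelFetch reads a single texel without filtering, so the stored values come back unchanged, and the function can live in the geometry shader itself.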
But maybe you can pre-generate the matrices into another texture or buffer using a simple GPGPU-like shader that does nothing more than build a matrix for each position/direction pair in the textures and store it in another texture (using FBOs) or buffer (using transform feedback); one possible shape of this is sketched below. This way you don't need a roundtrip to the CPU and you don't need to regenerate the matrices anew for each vertex/primitive/whatever. On the other hand, this increases the required memory, since a 4x4 matrix is a bit heavier than a position and a direction.
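One way such a pre-pass might look, sketched as a fragment shader drawn over an N x 1 FBO with four RGBA32F color attachments, one per matrix column (the attachment layout and the reuse of the lookAtFromTextures helper from the sketch above are assumptions):

#version 330

// Defined in the sketch above; GLSL allows linking it in
// from a second shader object of the same stage.
mat4 lookAtFromTextures(int i);

// One render target per matrix column (RGBA32F attachments assumed).
layout(location = 0) out vec4 outCol0;
layout(location = 1) out vec4 outCol1;
layout(location = 2) out vec4 outCol2;
layout(location = 3) out vec4 outCol3;

void main()
{
    // One fragment per stored position/direction pair,
    // assuming the FBO is N texels wide and 1 texel high.
    int i = int(gl_FragCoord.x);

    mat4 view = lookAtFromTextures(i);

    outCol0 = view[0];
    outCol1 = view[1];
    outCol2 = view[2];
    outCol3 = view[3];
}

A later pass can then reconstruct each matrix with four texelFetch calls instead of rebuilding the basis per vertex.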
Sure. Read the texture, and build the matrices from the values...
Any problem with this? Or did I miss something?
The view matrix is a uniform, and uniforms don't change in the middle of a render batch, nor can they be written to from a shader (directly). As far as that goes, I don't see how generating it could be possible, at least not directly.
Also note that the geometry shader runs after the vertices have been transformed with the modelview matrix, so it does not make all too much sense (at least during the same pass) to regenerate that matrix or part of it.
You could of course still do some hack with transform feedback: write some values to a buffer and either copy/bind it as a uniform buffer later, or just read the values from within a shader and multiply them in as a matrix. That would at least avoid a roundtrip to the CPU; the question is whether such an approach makes sense and whether you really want to do something this obscure. It is hard to tell what's best without knowing exactly what you want to achieve, but quite probably just transforming things in the vertex shader (read those textures, build a matrix, multiply; see the sketch below) will work better and be easier.
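For completeness, a sketch of that last suggestion as a vertex shader, assuming one stored position/direction pair per instance and the lookAtFromTextures helper from the first sketch (the per-instance mapping via gl_InstanceID is an assumption about how the data maps to draws):

#version 330

uniform mat4 uProjection; // assumed projection matrix

in vec3 aPosition;

// Defined in the first sketch, linked in as a second shader object.
mat4 lookAtFromTextures(int i);

void main()
{
    // Build the view matrix on the fly from the 1D textures,
    // one matrix per instance.
    mat4 view = lookAtFromTextures(gl_InstanceID);
    gl_Position = uProjection * view * vec4(aPosition, 1.0);
}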