Basically, I have an array of data (fluid simulation data) which is generated per-frame in real time from user input (it starts in system RAM). I want to write the density of the fluid to a texture as an alpha value - I interpolate the array values to produce an array the size of the screen (the grid is relatively small) and map it to a 0-255 range. What is the most efficient way (OpenGL function) to write these values into a texture for use?
Things that have been suggested elsewhere, which I don't think I want to use (please, let me know if I've got it wrong):
- glDrawPixels() - I'm under the impression that this will cause an interrupt each time I call it, which would make it slow, particularly at high resolutions.
- Use a shader - I don't think that a shader can accept and process the volume of data in the array each frame (it was mentioned elsewhere that the cap on the amount of data they may accept is too low).
If I understand your problem correctly, both solutions are over-complicating the issue. Am I correct in thinking you've already generated an array of size x*y, where x and y are your screen resolution, filled with unsigned bytes?
If so, if you want an OpenGL texture that uses this data as its alpha channel, why not just create a texture, bind it to GL_TEXTURE_2D and call glTexImage2D with your data, using GL_ALPHA as the format and internal format, GL_UNSIGNED_BYTE as the type and (x, y) as the size?
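A minimal sketch of that approach in C, assuming a `width` x `height` array of unsigned bytes called `density` (the names are placeholders, not from the question). GL_ALPHA works as described in legacy/compatibility GL; a core profile would use GL_RED plus a swizzle instead:

```c
#include <GL/gl.h>

/* One-time setup: allocate a texture whose alpha channel is the density field. */
GLuint create_alpha_texture(GLsizei width, GLsizei height, const GLubyte *density)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  /* rows of bytes are tightly packed */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, width, height, 0,
                 GL_ALPHA, GL_UNSIGNED_BYTE, density);
    return tex;
}

/* Per-frame update: reuse the existing storage instead of reallocating it. */
void update_alpha_texture(GLuint tex, GLsizei width, GLsizei height, const GLubyte *density)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_ALPHA, GL_UNSIGNED_BYTE, density);
}
```

Using glTexSubImage2D for the per-frame path avoids reallocating the texture storage every frame.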
What makes you think a shader would perform badly? The whole idea of shaders is processing huge amounts of data very, very fast. Please use Google on the search phrase "General Purpose GPU computing" or "GPGPU".
Shaders can only gather data from buffers, not scatter it. But what they can do is change values in the buffers. This allows a (fragment) shader to write the locations of *GL_POINT*s, which are then in turn placed on the target pixels of the texture. Shader Model 3 and later GPUs can also access texture samplers from the geometry and vertex shader stages, so the fragment shader part becomes really simple.
If you just have a linear stream of positions and values, just send those to OpenGL through a Vertex Array, drawing *GL_POINT*s, with your target texture being a color attachment for a framebuffer object.
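A rough sketch of that render-to-texture path, assuming GL 3.x-style objects and that a suitable point-drawing shader program is already bound; all identifiers below (`attach_texture_to_fbo`, `scatter_points`, etc.) are illustrative placeholders, not from the original answer:

```c
#include <GL/glew.h>

/* Attach the target texture to an FBO so that GL_POINTS rendered into it
 * land directly on the texture's pixels. 'tex' is an existing 2D texture. */
GLuint attach_texture_to_fbo(GLuint tex)
{
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        /* handle the error in real code */
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return fbo;
}

/* Scatter one point per sample. 'vao' holds the position/value attributes
 * filled from the simulation's linear stream; the bound program's vertex
 * shader maps each point to its target texel. */
void scatter_points(GLuint fbo, GLuint vao, GLsizei count, GLsizei texW, GLsizei texH)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, texW, texH);
    glBindVertexArray(vao);
    glDrawArrays(GL_POINTS, 0, count);   /* each point writes one texel */
    glBindVertexArray(0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
```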
A good way would be to try to avoid any unnecessary extra copies. So you could use Pixel Buffer Objects, which you map into your address space and generate your data into directly.
Since you want to update this data per frame, you also want to look for efficient buffer object streaming, so that you don't force implicit synchronizations between the CPU and GPU. An easy way to do that in your scenario would be using a ring buffer of 3 PBOs, which you advance every frame.
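A minimal sketch of that streaming pattern, with placeholder names; the idea is to map a pixel-unpack buffer and write (or generate) the frame's densities straight into it, cycling three buffers so the CPU never writes into a buffer the GPU may still be reading from:

```c
#include <GL/glew.h>
#include <string.h>

#define NUM_PBOS 3

static GLuint pbos[NUM_PBOS];
static int frame_index = 0;

/* One-time setup: three pixel-unpack buffers, each large enough for one frame. */
void init_pbos(GLsizeiptr bytes)
{
    glGenBuffers(NUM_PBOS, pbos);
    for (int i = 0; i < NUM_PBOS; ++i) {
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbos[i]);
        glBufferData(GL_PIXEL_UNPACK_BUFFER, bytes, NULL, GL_STREAM_DRAW);
    }
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}

/* Per frame: fill the next PBO in the ring, then start the texture upload. */
void stream_frame(GLuint tex, GLsizei width, GLsizei height, const GLubyte *density)
{
    GLsizeiptr bytes = (GLsizeiptr)width * height;   /* one byte per texel */
    GLuint pbo = pbos[frame_index++ % NUM_PBOS];

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, bytes, NULL, GL_STREAM_DRAW); /* orphan old storage */
    void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    if (dst) {
        memcpy(dst, density, (size_t)bytes);  /* or generate the data directly into dst */
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    }

    /* With a PBO bound, the data argument is an offset into the buffer and the
     * call can return without waiting for the transfer to finish. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_ALPHA, GL_UNSIGNED_BYTE, (const void *)0);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
```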
Well, what the driver does is totally implementation-specific. I don't think that "causes an interrupt each time it is called" is a useful mental image here. You seem to completely underestimate the work the GL implementation will be doing behind your back. A GL call will not correspond to some command that is sent to the GPU.
But not using glDrawPixels is still a good choice. It is not very efficient, and it has been deprecated and removed from modern GL.

You got this totally wrong. There is no way to not use a shader. If you're not writing one yourself (e.g. by using the old "fixed-function pipeline" of the GL), the GPU driver will provide the shader for you. The hardware implementation for these earlier fixed-function stages has been completely superseded by programmable units - so if you can't do it with shaders, you can't do it with the GPU. And I would strongly recommend writing your own shader (it is the only option in modern GL, anyway).
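For completeness, here is the kind of minimal fragment shader this boils down to: it just samples the uploaded density texture and uses it as the fragment's alpha. The GLSL and the helper are illustrative (compatibility-profile GLSL 1.20), not code from the original answer:

```c
#include <GL/glew.h>

/* With a GL_ALPHA texture the density ends up in the .a component;
 * with GL_RED (core profile) you would read .r instead. */
static const char *frag_src =
    "#version 120\n"
    "uniform sampler2D density;\n"
    "void main() {\n"
    "    float a = texture2D(density, gl_TexCoord[0].st).a;\n"
    "    gl_FragColor = vec4(0.2, 0.4, 1.0, a); /* fluid tint, density as alpha */\n"
    "}\n";

GLuint compile_fragment_shader(void)
{
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &frag_src, NULL);
    glCompileShader(fs);
    GLint ok = GL_FALSE;
    glGetShaderiv(fs, GL_COMPILE_STATUS, &ok);
    /* check 'ok' and glGetShaderInfoLog() in real code */
    return fs;
}
```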