GPU memory allocation for video
Is it possible to allocate some memory on the GPU without CUDA?
I'm adding some more details...
I need to get the video frames decoded from VLC and apply some compositing functions to the video; I'm doing so using the new SDL rendering capabilities.
Everything works fine until I have to send the decoded data to the SDL texture... that part of the code is handled by a standard malloc, which is slow for video operations.
Right now I'm not even sure that using GPU video will actually help me.
Let's be clear: are you trying to accomplish real-time video processing? Since your latest update changed the problem considerably, I'm adding another answer.
The "slowness" you are experiencing could be due to several reasons. To get a "real-time" effect (in the perceptual sense), you must be able to process a frame and display it within 33 ms (approximately, for a 30 fps video). This means you must decode the frame, run the compositing functions (as you call them) on it, and display it on the screen within that time frame.
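The arithmetic above can be sketched as follows (a minimal illustration; `frame_budget_ms` and `frame_bytes` are hypothetical helper names, and 1080p RGBA is just an example format, not something from the question):

```c
#include <stdio.h>

/* Per-frame time budget in milliseconds for a given frame rate. */
static int frame_budget_ms(int fps) {
    return 1000 / fps;
}

/* Bytes that must move per frame for a width x height frame
   at `bytes_per_pixel` bytes per pixel. */
static long frame_bytes(int width, int height, int bytes_per_pixel) {
    return (long)width * height * bytes_per_pixel;
}
```

For example, `frame_budget_ms(30)` gives the ~33 ms in which decode, compositing, and display must all finish, and `frame_bytes(1920, 1080, 4)` gives roughly 8.3 MB per 1080p RGBA frame, i.e. around 250 MB/s of memory traffic at 30 fps if every frame is copied through system RAM.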
If the compositing functions are too CPU-intensive, then you might consider writing a GPU program to speed up this task. But the first thing you should do is determine exactly where the bottleneck of your application is. You could temporarily strip your application down so that it only decodes the frames and displays them on the screen (without executing the compositing functions), just to see how it goes. If it's still slow, then the decoding process could be using too much CPU/RAM (maybe a bug on your side?).
I once used FFMPEG and SDL for a similar project and was very happy with the result. This tutorial shows how to build a basic video player using both libraries. Basically, it opens a video file, decodes the frames, and renders them on a surface for display.
You can do this via Direct3D 11 Compute Shaders or OpenCL. These are similar in spirit to CUDA.
Yes, it is. You can allocate memory on the GPU through OpenGL textures.
Only indirectly through a graphics framework.
You can use OpenGL which is supported by virtually every computer.
You could use a vertex buffer to store your data. Vertex buffers are usually used to store points for rendering, but you can just as easily use one to store an array of any kind. Unlike textures, their capacity is limited only by the amount of graphics memory available.
http://www.songho.ca/opengl/gl_vbo.html has a good tutorial on how to read and write data to vertex buffers; you can ignore everything about drawing the vertex buffer.