OpenGL to video on the iPhone
I'm currently working on a project to convert a physics simulation to a video on the iPhone itself.
To do this, I'm presently using two different loops. The first loop runs in the block where the AVAssetWriterInput object polls the EAGLView for more images. The EAGLView provides the images from an array where they are stored.
The other loop is the actual simulation. I've turned off the simulation timer, and instead call the tick myself with a pre-specified time difference each time. Every time a tick gets called, I create a new image in EAGLView's swap-buffers method after the buffers have been swapped. This image is then placed in the array that the AVAssetWriter polls.
There is also some miscellaneous code to make sure the array doesn't get too big.
All of this works fine, but is very very slow.
Is there something I'm doing that is, conceptually, causing the entire process to be slower than it could be? Also, does anyone know of a faster way to get an image out of OpenGL than glReadPixels?
4 Answers
Video memory is designed so that it's fast to write and slow to read. That's why I render to a texture instead. Here is the entire method I've created for rendering the scene to texture (there are some custom containers, but I think it's pretty straightforward to replace them with your own):
Of course, if you're taking snapshots all the time, then you'd be better off creating the texture, framebuffer, and renderbuffer only once (and allocating memory for them once).
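The answer's original method isn't reproduced in this copy. As a rough illustration only, here is a minimal sketch of render-to-texture using the `GL_OES_framebuffer_object` extension (OpenGL ES 1.1 style, as on early iPhones). The method name and `drawScene` call are placeholders, not the answerer's actual code, and error handling is omitted:

```objc
// Sketch: render the scene into a texture via an OES framebuffer object.
// All names here are illustrative; the answer's custom containers are not shown.
- (GLuint)renderSceneToTextureWithWidth:(GLsizei)width height:(GLsizei)height
{
    GLuint fbo, texture, depthBuffer;

    // Create the texture that will receive the rendered scene.
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    // Create a framebuffer and attach the texture as its color buffer.
    glGenFramebuffersOES(1, &fbo);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, fbo);
    glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                              GL_TEXTURE_2D, texture, 0);

    // Depth renderbuffer so depth testing works in the offscreen pass.
    glGenRenderbuffersOES(1, &depthBuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthBuffer);
    glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES,
                             width, height);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES,
                                 GL_RENDERBUFFER_OES, depthBuffer);

    if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES)
            != GL_FRAMEBUFFER_COMPLETE_OES) {
        // Handle an incomplete framebuffer here.
    }

    glViewport(0, 0, width, height);
    [self drawScene];   // placeholder for the actual scene rendering

    // Rebind the on-screen framebuffer before returning.
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);
    glDeleteRenderbuffersOES(1, &depthBuffer);
    glDeleteFramebuffersOES(1, &fbo);
    return texture;
}
```

As the last paragraph notes, if you snapshot every frame you would hoist the texture, framebuffer, and depth-buffer creation out of this method and reuse them across calls.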
One thing to remember is that the GPU runs asynchronously from the CPU, so if you try to do glReadPixels immediately after you finish rendering, you'll have to wait for the commands to be flushed to the GPU and rendered before you can read the results back.
Instead of waiting synchronously, render snapshots into a queue of textures (using FBOs, as Max mentioned). Wait until you've rendered a couple more frames before you dequeue one of the previous frames. I don't know whether the iPhone supports fences or sync objects, but if it does, you could check those to see whether rendering has finished before reading the pixels back.
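The deferred-readback idea above can be sketched as follows, assuming a small ring of FBO-backed snapshot slots (the queue depth, method name, and `drawScene` helper are all illustrative, and the per-slot FBO setup is as in the render-to-texture example):

```objc
#define kSnapshotQueueDepth 3   // read a frame back ~2 frames after rendering it

static GLuint snapshotFBOs[kSnapshotQueueDepth];   // one FBO per slot (setup omitted)
static long   frameIndex = 0;

// Called once per simulation tick.
- (void)captureFrameIntoPixels:(uint8_t *)pixels width:(GLsizei)w height:(GLsizei)h
{
    int writeSlot = frameIndex % kSnapshotQueueDepth;

    // Render the current frame into this slot's FBO. By the time we come
    // back around to read this slot, the GPU has likely finished it.
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, snapshotFBOs[writeSlot]);
    [self drawScene];  // placeholder for the actual scene rendering

    // Only read back once the oldest slot has had time to finish on the GPU.
    if (frameIndex >= kSnapshotQueueDepth - 1) {
        int readSlot = (int)((frameIndex + 1) % kSnapshotQueueDepth); // oldest slot
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, snapshotFBOs[readSlot]);
        glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }

    glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);
    frameIndex++;
}
```

The trade-off is a fixed latency of a couple of frames between rendering and readback, in exchange for not stalling the CPU while the GPU drains its command queue.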
You could try using a CADisplayLink object to ensure that your drawing rate and your capture rate correspond to the device's screen refresh rate. You might be slowing down the execution time of the run loop by refreshing and capturing too many times per device screen refresh.
Depending on your app's goals, it might not be necessary for you to capture every frame that you present, so in your selector, you could choose whether or not to capture the current frame.
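A minimal sketch of that setup, using the era's `frameInterval` API (the selector, `drawScene`, `captureCurrentFrame`, and the capture-every-other-frame policy are all illustrative):

```objc
#import <QuartzCore/QuartzCore.h>

- (void)startDisplayLink
{
    CADisplayLink *link =
        [CADisplayLink displayLinkWithTarget:self
                                    selector:@selector(drawAndMaybeCapture:)];
    link.frameInterval = 1;  // fire once per screen refresh
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
}

- (void)drawAndMaybeCapture:(CADisplayLink *)link
{
    [self drawScene];  // placeholder for the EAGLView render

    // Capture only every other presented frame, for example.
    if (self.frameCount++ % 2 == 0) {
        [self captureCurrentFrame];  // placeholder for the glReadPixels path
    }
}
```

Tying both drawing and capture to the display link keeps the run loop from doing redundant work between screen refreshes.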
While the question isn't new, it hasn't been answered yet, so I thought I'd pitch in.
glReadPixels is indeed very slow, and therefore cannot be used to record video from an OpenGL application without adversely affecting performance.
We did find a workaround and created a free SDK called Everyplay that can record OpenGL-based graphics to a video file without performance loss. You can check it out at https://developers.everyplay.com/