Low FPS when accessing the iPhone video output image buffer
I'm trying to do some image processing on the iPhone. I'm using http://developer.apple.com/library/ios/#qa/qa2010/qa1702.html to capture the camera frames.
My problem is that when I try to access the captured buffer, the camera FPS drops from 30 to about 20. Does anybody know how I can fix it?
I use the lowest capture quality I could find (AVCaptureSessionPresetLow = 192x144) in kCVPixelFormatType_32BGRA format. If anybody knows a lower quality I could use, I'm willing to try it.
When I do the same image access on other platforms, like Symbian, it works OK.
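For context, the session setup that goes with the delegate below (following the QA1702 sample) looks roughly like this sketch; variable names are illustrative, and error handling and memory management are omitted:

AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetLow; // 192x144 frames

AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
[session addInput:input];

AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
output.videoSettings = [NSDictionary dictionaryWithObject:
                           [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
                                                   forKey:(id)kCVPixelBufferPixelFormatTypeKey];
output.alwaysDiscardsLateVideoFrames = YES; // drop frames rather than queue them while busy

// Deliver sample buffers on a dedicated serial queue, not the main queue.
dispatch_queue_t queue = dispatch_queue_create("cameraQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);

[session addOutput:output];
[session startRunning];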
Here is my code:
#pragma mark -
#pragma mark AVCaptureSession delegate

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    /* We create an autorelease pool because we are not on the main queue, so our code is
       not executed on the main thread. We therefore need an autorelease pool for the thread we are on. */
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the image buffer
    if (CVPixelBufferLockBaseAddress(imageBuffer, 0) == kCVReturnSuccess)
    {
        // Calculate FPS and display it using the main thread
        [self performSelectorOnMainThread:@selector(updateFps:) withObject:nil waitUntilDone:NO];

        UInt8 *base = (UInt8 *)CVPixelBufferGetBaseAddress(imageBuffer); // image buffer start address
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);
        int size = (height * width);
        UInt8 *pRGBtmp = m_pRGBimage;

        /*
         Here is the problem: m_pRGBimage is the RGB image I want to process.
         In the 'for' loop I convert the image from BGRA to RGB. As a result, the FPS drops to 20.
         */
        for (int i = 0; i < size; i++)
        {
            pRGBtmp[0] = base[2];
            pRGBtmp[1] = base[1];
            pRGBtmp[2] = base[0];
            base += 4;
            pRGBtmp += 3;
        }

        // Display received action
        [self performSelectorOnMainThread:@selector(displayAction:) withObject:nil waitUntilDone:NO];
        //[self displayAction:&eyePlayOutput];
        //saveFrame( imageBuffer );

        // Unlock the image buffer
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    }

    [pool drain];
}
As a follow-on to the answers: I need to process the image in real time while it is being displayed.
I noticed that when I use AVCaptureSessionPresetHigh, even the simplest thing, like:
for (int i = 0; i < size; i++)
    x = base[0];
causes the frame rate to drop to 4-5 FPS. I guess that's because an image of that size is not cached.
Basically I need a 96x48 image. Is there a simple way to downscale the camera output image, ideally one that uses hardware acceleration, so I can work with the small one?
3 Answers
Anything that iterates over every pixel in an image will be fairly slow on all but the fastest iOS devices. For example, I benchmarked iterating over every pixel in a 640 x 480 video frame (307,200 pixels) with a simple per-pixel color test and found that this only runs at around 4 FPS on an iPhone 4.
You're looking at processing 27,648 pixels in your case, which should run fast enough to hit 30 FPS on an iPhone 4, but that device has a much faster processor than the original iPhone or the iPhone 3G. The iPhone 3G will probably still struggle with this processing load. You also don't say how fast the processor in your Symbian device was.
I'd suggest reworking your processing algorithm to avoid the colorspace conversion. There should be no need to reorder the color components in order to process them.
Additionally, you could selectively process only a few pixels by sampling at certain intervals within the rows and columns of the image.
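For instance, here is a minimal sketch of both suggestions combined, operating straight on the locked BGRA buffer from the question's delegate; the stride of 4 and the running sum are purely illustrative, and CVPixelBufferGetBytesPerRow is used because rows may be padded beyond width * 4 bytes:

UInt8 *base = (UInt8 *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

unsigned long sum = 0;
for (size_t y = 0; y < height; y += 4)          // sample every 4th row
{
    UInt8 *row = base + y * bytesPerRow;
    for (size_t x = 0; x < width; x += 4)       // sample every 4th column
    {
        UInt8 *pixel = row + x * 4;             // bytes are B, G, R, A
        sum += pixel[0] + pixel[1] + pixel[2];  // use the components in BGRA order; no reordering pass
    }
}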
Finally, if you are targeting the newer iOS devices that support OpenGL ES 2.0 (the iPhone 3GS and newer), you might want to look at using a GLSL fragment shader to process the video frame entirely on the GPU. I describe the process here, along with sample code for real-time color-based object tracking. In my benchmarks, the GPU handles this kind of processing 14-28 times faster than the CPU.
disclaimer: THIS ANSWER IS A GUESS :)
You're doing quite a lot of work while the buffer is locked; is this holding up the thread that is capturing the image from the camera?
You could copy the data out of the buffer to work on, so you can unlock it ASAP, i.e. something like the sketch below.
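A reconstruction of the kind of snippet meant here (illustrative; processFrame: is a hypothetical method, not from the original answer):

if (CVPixelBufferLockBaseAddress(imageBuffer, 0) == kCVReturnSuccess)
{
    UInt8 *base = (UInt8 *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    size_t length = bytesPerRow * height;

    UInt8 *copy = (UInt8 *)malloc(length);          // see the NB below about reusing this allocation
    memcpy(copy, base, length);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0); // unlock as soon as the copy is done

    [self processFrame:copy];                       // hypothetical method that does the slow work
    free(copy);
}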
If it's the lock that's holding up the capture then this should help.
NB: You could speed this up further. If you know all the buffers will be the same size, you can call malloc just once to get the memory, reuse it for each frame, and only free it when you have finished processing all the buffers.
Or, if that's not the problem, you could try lowering the priority of this thread, as sketched below.
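A sketch of that idea, assuming the delegate is attached to your own serial dispatch queue as in the setup earlier: retarget that queue at the low-priority global queue so the capture callbacks run at reduced priority.

dispatch_queue_t cameraQueue = dispatch_queue_create("cameraQueue", NULL);
dispatch_set_target_queue(cameraQueue,
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0)); // run callbacks at low priority
[output setSampleBufferDelegate:self queue:cameraQueue];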
Copy the contents of the camera frame into a dedicated buffer and operate on it from there. This results in a massive speed improvement in my experience. My best guess is that the region of memory where the camera frame is located has special protections that make read/write accesses slow.

Check out the memory address of the camera frame data. On my device the camera buffer is at 0x63ac000, which doesn't mean anything to me, except that the other heap objects are at addresses closer to 0x1300000. The lock suggestion did not solve my slowdown, but the memcpy did.