Applying an Effect to the iPhone Camera Preview "Video"

My goal is to write a custom camera view controller that:

  1. Can take photos in all four interface orientations with both the back and, when available, front camera.
  2. Properly rotates and scales the preview "video" as well as the full resolution photo.
  3. Allows a (simple) effect to be applied to BOTH the preview "video" and full resolution photo.

Implementation (on iOS 4.2 / Xcode 3.2.5):

Due to requirement (3), I needed to drop down to AVFoundation.

I started with Technical Q&A QA1702 and made these changes (see the sketch after this list):

  1. Changed the sessionPreset to AVCaptureSessionPresetPhoto.
  2. Added an AVCaptureStillImageOutput as an additional output before starting the session.
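A minimal sketch of the resulting setup, assuming a captureSession ivar and leaving out the rest of the QA1702 boilerplate (preview layer, error handling, front/back camera selection); the identifiers here are illustrative, not the question's actual code:

#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>
#import <CoreVideo/CoreVideo.h>

// Sketch of the modified QA1702 setup. captureSession is an assumed ivar
// retained for the life of the controller; the delegate callback is the same
// captureOutput:didOutputSampleBuffer:fromConnection: as in QA1702.
- (void)setupCaptureSession
{
    NSError *error = nil;

    captureSession = [[AVCaptureSession alloc] init];
    captureSession.sessionPreset = AVCaptureSessionPresetPhoto;   // change (1)

    AVCaptureDevice *device =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *input =
        [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (input) [captureSession addInput:input];

    // Video data output that feeds the preview "video" frames to the delegate.
    AVCaptureVideoDataOutput *videoOutput =
        [[[AVCaptureVideoDataOutput alloc] init] autorelease];
    videoOutput.videoSettings = [NSDictionary dictionaryWithObject:
        [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
                                 forKey:(id)kCVPixelBufferPixelFormatTypeKey];
    videoOutput.minFrameDuration = CMTimeMake(1, 15);   // 15 FPS, as in QA1702
    dispatch_queue_t queue = dispatch_queue_create("videoQueue", NULL);
    [videoOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);
    [captureSession addOutput:videoOutput];

    // Change (2): still-image output added before the session starts.
    AVCaptureStillImageOutput *stillImageOutput =
        [[[AVCaptureStillImageOutput alloc] init] autorelease];
    [captureSession addOutput:stillImageOutput];

    [captureSession startRunning];
}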

The issue that I am having is with the performance of processing the preview image (a frame of the preview "video").

First, I get the UIImage result of imageFromSampleBuffer: on the sample buffer from captureOutput:didOutputSampleBuffer:fromConnection:. Then, I scale and rotate it for the screen in a Core Graphics bitmap context (CGContext).
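The question does not show this step; the following is only a sketch of the kind of Core Graphics scale-and-rotate it describes, with the target size and the fixed 90-degree rotation as illustrative assumptions:

#import <UIKit/UIKit.h>

// Illustrative sketch only: scale a frame image into targetSize and rotate
// it 90 degrees for display. The fixed rotation and the UIKit bitmap context
// are assumptions, not the question's actual code.
- (UIImage *)previewImageFromImage:(UIImage *)image targetSize:(CGSize)targetSize
{
    UIGraphicsBeginImageContext(targetSize);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Rotate about the centre of the target rect, then draw the image with
    // swapped width/height so it fills the rotated rect.
    CGContextTranslateCTM(context, targetSize.width / 2.0f, targetSize.height / 2.0f);
    CGContextRotateCTM(context, (CGFloat)M_PI_2);
    CGContextTranslateCTM(context, -targetSize.height / 2.0f, -targetSize.width / 2.0f);

    [image drawInRect:CGRectMake(0.0f, 0.0f, targetSize.height, targetSize.width)];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}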

At this point, the frame rate is already below the 15 FPS specified on the session's video output, and when I add in the effect it drops to around 10 or below. The app quickly crashes due to low memory.

I have had some success with dropping the frame rate to 9 FPS on the iPhone 4 and 8 FPS on the iPod Touch (4th gen).

I have also added in some code to "flush" the dispatch queue, but I am not sure how much it is actually helping. Basically, every 8-10 frames, a flag is set that signals captureOutput:didOutputSampleBuffer:fromConnection: to return right away rather than process the frame. The flag is reset after a sync operation on the output dispatch queue finishes.
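For illustration, a sketch of that flag-plus-sync pattern; skipFrames, frameCount, and outputQueue are assumed ivars, and this is not the question's actual code:

// Illustrative sketch of the "flush": every ~10 frames, stop processing until
// a dispatch_sync on the output queue (issued from the main queue) has drained
// the work already queued there.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (skipFrames) {
        return;   // flag set: drop this frame instead of processing it
    }

    if (++frameCount % 10 == 0) {
        skipFrames = YES;
        dispatch_async(dispatch_get_main_queue(), ^{
            // Waits for everything already queued on the output queue to
            // finish, then lets frame processing resume.
            dispatch_sync(outputQueue, ^{});
            skipFrames = NO;
        });
    }

    // ... per-frame processing (imageFromSampleBuffer:, scale/rotate, effect) ...
}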

At this point I don't even mind the low frame rates, but obviously we can't ship with the low memory crashes. Anyone have any idea how to take action to prevent the low memory conditions in this case (and/or a better way to "flush" the dispatch queue)?

Comments (2)

轻许诺言 2024-10-23 05:08:36

To prevent the memory issues, simply create an autorelease pool in captureOutput:didOutputSampleBuffer:fromConnection:.

This makes sense since imageFromSampleBuffer: returns an autoreleased UIImage object. It also immediately frees any autoreleased objects created by the image-processing code.

// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput 
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
fromConnection:(AVCaptureConnection *)connection
{ 
    // A per-frame pool so the autoreleased UIImage (and anything the
    // image-processing code autoreleases) is freed every frame instead of
    // piling up until the thread's outer pool drains.
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // Create a UIImage from the sample buffer data
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];

    < Add your code here that uses the image >

    // Drain the pool, releasing this frame's autoreleased objects immediately
    [pool release];
}

My testing has shown that this will run without memory warnings on an iPhone 4 or iPod Touch (4th gen) even if requested FPS is very high (e.g. 60) and image processing is very slow (e.g. 0.5+ secs).

OLD SOLUTION:

As Brad pointed out, Apple recommends doing the image processing on a background thread so as not to interfere with UI responsiveness. I didn't notice much lag in this case, but best practices are best practices, so use the above solution with the autorelease pool instead of running this on the main dispatch queue / main thread.

To prevent the memory issues, simply use the main dispatch queue instead of creating a new one.

This also means that you don't have to switch to the main thread in captureOutput:didOutputSampleBuffer:fromConnection: when you want to update the UI.

In setupCaptureSession, change FROM:

// Configure your output.
dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);

TO:

// we want our dispatch to be on the main thread
[output setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

寒尘 2024-10-23 05:08:36

A fundamentally better approach would be to use OpenGL to handle as much of the image-related heavy lifting for you (as I see you're trying in your latest attempt). However, even then you might have issues with building up frames to be processed.

While it seems strange that you'd be running into memory accumulation when processing frames (in my experience, you just stop getting them if you can't process them fast enough), Grand Central Dispatch queues can get jammed up if they are waiting on I/O.

Perhaps a dispatch semaphore would let you throttle the addition of new items to the processing queues. For more on this, I highly recommend Mike Ash's "GCD Practicum" article, where he looks at optimizing an I/O bound thumbnail processing operation using dispatch semaphores.
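For illustration, a minimal sketch of that semaphore-based throttling, assuming ivars frameSemaphore (created once with dispatch_semaphore_create(2)) and a serial processingQueue; the names and the in-flight limit are illustrative:

// Illustrative throttle: allow at most 2 frames in flight on the processing
// queue; frames that arrive while it is full are simply dropped. Keep the
// in-flight limit small, since held sample buffers keep camera buffers in use.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // If no slot is free, drop the frame rather than letting work pile up.
    if (dispatch_semaphore_wait(frameSemaphore, DISPATCH_TIME_NOW) != 0) {
        return;
    }

    // Retain the buffer while it waits on the processing queue.
    CFRetain(sampleBuffer);
    dispatch_async(processingQueue, ^{
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

        UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
        // ... apply the effect to `image` and update the preview here ...

        CFRelease(sampleBuffer);
        [pool release];
        dispatch_semaphore_signal(frameSemaphore);   // free the slot
    });
}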
