How to detect faces in a video session using iOS 5


Within my app I'm using a video session that I can take pictures with. However, I'd like to have face detection within this video session. I looked at Apple's "SquareCam" sample, which is exactly what I'm looking for, but implementing their code in my project is driving me bonkers.

#import "CaptureSessionManager.h"
#import <ImageIO/ImageIO.h>

@implementation CaptureSessionManager

@synthesize captureSession;
@synthesize previewLayer;
@synthesize stillImageOutput;
@synthesize stillImage;

#pragma mark Capture Session Configuration

- (id)init {
    if ((self = [super init])) {
        [self setCaptureSession:[[AVCaptureSession alloc] init]];
    }
    return self;
}
- (void)didReceiveMemoryWarning
{
    // Releases the view if it doesn't have a superview.
    NSLog(@"memorywarning");

    // Release any cached data, images, etc that aren't in use.
}
- (void)addVideoPreviewLayer {
    [self setPreviewLayer:[[[AVCaptureVideoPreviewLayer alloc] initWithSession:[self captureSession]] autorelease]];
    [[self previewLayer] setVideoGravity:AVLayerVideoGravityResizeAspectFill];

}

- (void)addVideoInput {
    AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];   
    if ([videoDevice isFocusModeSupported:AVCaptureFocusModeLocked]) {

        NSError *error = nil;

        if ([videoDevice lockForConfiguration:&error]) {
            NSLog(@"focus");
            videoDevice.focusMode = AVCaptureFocusModeLocked;
            videoDevice.focusMode = AVCaptureFocusModeContinuousAutoFocus;
            //videoDevice.focusMode = AVCaptureSessionPresetPhoto

            videoDevice.whiteBalanceMode = AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance;
            [captureSession setSessionPreset:AVCaptureSessionPresetPhoto];

            [videoDevice unlockForConfiguration];

        }
        else {
            // Respond to the failure as appropriate.
        }
    }

    if (videoDevice) {
        NSError *error;
        AVCaptureDeviceInput *videoIn = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
        if (!error) {
            if ([[self captureSession] canAddInput:videoIn])
                [[self captureSession] addInput:videoIn];
            else
                NSLog(@"Couldn't add video input");     
        }
        else
            NSLog(@"Couldn't create video input");
    }
    else
        NSLog(@"Couldn't create video capture device");
}

- (void)addStillImageOutput 
{
  [self setStillImageOutput:[[[AVCaptureStillImageOutput alloc] init] autorelease]];
  NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG,AVVideoCodecKey,nil];
  [[self stillImageOutput] setOutputSettings:outputSettings];

  AVCaptureConnection *videoConnection = nil;
  for (AVCaptureConnection *connection in [[self stillImageOutput] connections]) {
    for (AVCaptureInputPort *port in [connection inputPorts]) {
      if ([[port mediaType] isEqual:AVMediaTypeVideo] ) {
        videoConnection = connection;
        break;
      }
    }
    if (videoConnection) { 
      break; 
    }
  }

  [[self captureSession] addOutput:[self stillImageOutput]];
    [outputSettings release];
}

- (void)captureStillImage
{  
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in [[self stillImageOutput] connections]) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) {
            break;
        }
    }

    NSLog(@"about to request a capture from: %@", [self stillImageOutput]);
    [[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:videoConnection 
                                                       completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) { 
                                                         CFDictionaryRef exifAttachments = CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
                                                         if (exifAttachments) {
                                                           NSLog(@"attachements: %@", exifAttachments);
                                                         } else { 
                                                           NSLog(@"no attachments");
                                                         }
                                                         NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];

                                                         UIImage *image = [[UIImage alloc] initWithData:imageData];

                                                         [self setStillImage:image];
                                                         [image release];
                                                         [[NSNotificationCenter defaultCenter] postNotificationName:kImageCapturedSuccessfully object:nil];
                                                       }];
}


- (void)dealloc {

    [[self captureSession] stopRunning];

    [previewLayer release], previewLayer = nil;
    [captureSession release], captureSession = nil;
    [stillImageOutput release], stillImageOutput = nil;
    [stillImage release], stillImage = nil;

    [super dealloc];
}

@end
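
For live, per-frame detection like SquareCam's, something along these lines could be added to the class above. This is only a rough sketch, not code from the sample: the addVideoDataOutput method name and the faceDetector ivar are assumptions, the class would have to adopt AVCaptureVideoDataOutputSampleBufferDelegate, and CoreImage would need to be imported. The idea SquareCam uses is simply an AVCaptureVideoDataOutput whose delegate runs a CIDetector on every frame.

// Hypothetical addition (not part of the original project); assumes
// #import <CoreImage/CoreImage.h>, a CIDetector *faceDetector ivar, and that
// CaptureSessionManager adopts AVCaptureVideoDataOutputSampleBufferDelegate.
- (void)addVideoDataOutput {
    AVCaptureVideoDataOutput *videoDataOutput = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
    [videoDataOutput setAlwaysDiscardsLateVideoFrames:YES];
    [videoDataOutput setVideoSettings:
        [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                    forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
    // SquareCam also delivers its sample buffers on the main queue.
    [videoDataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

    if ([[self captureSession] canAddOutput:videoDataOutput])
        [[self captureSession] addOutput:videoDataOutput];

    // Create the detector once; building it per frame would be wasteful.
    if (faceDetector == nil) {
        faceDetector = [[CIDetector detectorOfType:CIDetectorTypeFace
                                           context:nil
                                           options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyLow
                                                                               forKey:CIDetectorAccuracy]] retain];
    }
}

// Called for every video frame; wraps the pixel buffer in a CIImage and runs the detector.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    NSArray *faces = [faceDetector featuresInImage:frame];
    NSLog(@"faces in frame: %lu", (unsigned long)[faces count]);
}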

Besides my video session, I did succeed in recognizing faces within a UIImage that I imported into my project. I did this following @Abhinav Jha's example (How to properly instantiate CIDetector class object in iOS 5 face detection API).

CIImage *ciImage = [[CIImage alloc] initWithImage:[UIImage imageNamed:@"Photo.JPG"]];
if (ciImage == nil)
    [imageView setImage:[UIImage imageNamed:@"Photo.JPG"]];

NSDictionary *options = [[NSDictionary alloc] initWithObjectsAndKeys:
                         CIDetectorAccuracyHigh, CIDetectorAccuracy, nil];
CIDetector *ciDetector = [CIDetector detectorOfType:CIDetectorTypeFace 
                                            context:nil
                                            options:options];
NSArray *features = [ciDetector featuresInImage:ciImage];
NSLog(@"no of face detected: %d", [features count]);

I hope someone can point me in the right direction for combining these two examples!
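
(For illustration only, one way the two snippets might be glued together: whoever owns the CaptureSessionManager could listen for the kImageCapturedSuccessfully notification it already posts and run the same CIDetector code on the captured still. The captureManager property and imageCaptured: selector below are assumed names, not part of either example.)

// In the view controller that owns the CaptureSessionManager (names assumed):
- (void)viewDidLoad {
    [super viewDidLoad];
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(imageCaptured:)
                                                 name:kImageCapturedSuccessfully
                                               object:nil];
}

// Runs the still-image face detection once the session manager has captured a photo.
- (void)imageCaptured:(NSNotification *)note {
    UIImage *captured = [[self captureManager] stillImage]; // captureManager: assumed property
    CIImage *ciImage = [[CIImage alloc] initWithCGImage:[captured CGImage]];

    NSDictionary *options = [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                        forKey:CIDetectorAccuracy];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:options];
    NSArray *features = [detector featuresInImage:ciImage];
    NSLog(@"faces in captured photo: %lu", (unsigned long)[features count]);
    [ciImage release];
}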
