iPhone camera access?

Posted 2024-09-15 15:22:23


I want to know how to access the iPhone's camera and work with it in real time: for example, just drawing on the camera view.

Another related question:

Can I display 4 camera views at once, like "Photo Booth" on the Mac?
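
For completeness on the access part of the question: on iOS versions newer than the answers below assume, the app must also be granted camera permission before a capture session delivers any frames (iOS 10+ additionally requires an NSCameraUsageDescription entry in Info.plist). A minimal sketch using the AVCaptureDevice authorization API, which postdates the original answers:

    // Check, and if necessary request, camera permission before starting capture.
    switch ([AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo]) {
        case AVAuthorizationStatusNotDetermined:
            [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo
                                     completionHandler:^(BOOL granted) {
                // Runs on an arbitrary queue; start the session only if granted.
            }];
            break;
        case AVAuthorizationStatusAuthorized:
            // Already authorized; safe to configure and start the session.
            break;
        default:
            // Denied or restricted; point the user at the Settings app.
            break;
    }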


Comments (2)

最偏执的依靠 2024-09-22 15:22:23

You can do it using AVFoundation:
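
The view controller passed as the delegate (self) must declare conformance to AVCaptureVideoDataOutputSampleBufferDelegate and hold on to the session and the four layers. A minimal interface sketch under manual reference counting (the class name CameraViewController is an assumption; the property names match the code below):

    #import <UIKit/UIKit.h>
    #import <AVFoundation/AVFoundation.h>
    #import <QuartzCore/QuartzCore.h>

    // Hypothetical controller name; the properties are the ones the code below relies on.
    @interface CameraViewController : UIViewController <AVCaptureVideoDataOutputSampleBufferDelegate>
    @property (nonatomic, retain) AVCaptureSession *captureSession;
    @property (nonatomic, retain) CALayer *customLayer;
    @property (nonatomic, retain) CALayer *customLayer1;
    @property (nonatomic, retain) CALayer *customLayer2;
    @property (nonatomic, retain) CALayer *customLayer3;
    @end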

- (void)initCapture {
    // Grab the default video device (the camera) and wrap it in a capture input.
    NSError *error = nil;
    AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput
                                          deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo]
                                          error:&error];
    if (!captureInput) {
        NSLog(@"Could not create camera input: %@", error);
        return;
    }

    // Deliver raw frames to a delegate; drop frames we are too slow to process.
    AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    captureOutput.alwaysDiscardsLateVideoFrames = YES;

    // Frames arrive on this serial queue, not on the main thread.
    dispatch_queue_t queue = dispatch_queue_create("cameraQueue", NULL);
    [captureOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue); // the output retains the queue (pre-ARC GCD)

    // Request BGRA pixel buffers so the frames can go straight into Core Graphics.
    NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    [captureOutput setVideoSettings:videoSettings];

    // A low preset is plenty here: one stream feeds four small preview layers.
    self.captureSession = [[[AVCaptureSession alloc] init] autorelease]; // autorelease balances the alloc under MRC
    [self.captureSession setSessionPreset:AVCaptureSessionPresetLow];
    [self.captureSession addInput:captureInput];
    [self.captureSession addOutput:captureOutput];
    [captureOutput release]; // the session retains the output

    // Four plain CALayers in a 2x2 grid; every captured frame is pushed into all four.
    // The quarter-turn rotation compensates for the camera's landscape-oriented buffers.
    self.customLayer = [CALayer layer];
    self.customLayer.frame = CGRectMake(5 - 25, 25, 200, 150);
    self.customLayer.transform = CATransform3DRotate(CATransform3DIdentity, M_PI / 2.0f, 0, 0, 1);
    [self.view.layer insertSublayer:self.customLayer atIndex:4];

    self.customLayer1 = [CALayer layer];
    self.customLayer1.frame = CGRectMake(165 - 25, 25, 200, 150);
    self.customLayer1.transform = CATransform3DRotate(CATransform3DIdentity, M_PI / 2.0f, 0, 0, 1);
    [self.view.layer addSublayer:self.customLayer1];

    self.customLayer2 = [CALayer layer];
    self.customLayer2.frame = CGRectMake(5 - 25, 210 + 25, 200, 150);
    self.customLayer2.transform = CATransform3DRotate(CATransform3DIdentity, M_PI / 2.0f, 0, 0, 1);
    [self.view.layer addSublayer:self.customLayer2];

    self.customLayer3 = [CALayer layer];
    self.customLayer3.frame = CGRectMake(165 - 25, 210 + 25, 200, 150);
    self.customLayer3.transform = CATransform3DRotate(CATransform3DIdentity, M_PI / 2.0f, 0, 0, 1);
    [self.view.layer addSublayer:self.customLayer3];

    // Start streaming once everything is wired up.
    [self.captureSession startRunning];
}



#pragma mark -
#pragma mark AVCaptureVideoDataOutputSampleBufferDelegate

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Called on the capture queue, so it manages its own autorelease pool (MRC era).
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    /* Lock the image buffer */
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    /* Get information about the image */
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    /* Create a CGImageRef from the CVImageBufferRef: wrap the BGRA pixels in a
       bitmap context, then snapshot the context into an immutable image. */
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
                                                    colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Any drawing into newContext at this point gets baked into the frame (see the sketch below).
    CGImageRef newImage2 = CGBitmapContextCreateImage(newContext);

    /* Release the Core Graphics objects we created */
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    // Layer contents must be set on the main thread; the same image feeds all four layers.
    [self.customLayer performSelectorOnMainThread:@selector(setContents:) withObject:(id)newImage2 waitUntilDone:YES];
    [self.customLayer1 performSelectorOnMainThread:@selector(setContents:) withObject:(id)newImage2 waitUntilDone:YES];
    [self.customLayer2 performSelectorOnMainThread:@selector(setContents:) withObject:(id)newImage2 waitUntilDone:YES];
    [self.customLayer3 performSelectorOnMainThread:@selector(setContents:) withObject:(id)newImage2 waitUntilDone:YES];

    /* Release the CGImageRef; the layers retained it via setContents: */
    CGImageRelease(newImage2);

    /* Unlock the image buffer */
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    [pool drain];
}
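
To handle the drawing half of the question, render into the bitmap context in the delegate above, just before the CGBitmapContextCreateImage call. A minimal sketch that strokes a red rectangle over every frame (the rectangle is only an illustration):

    // Draw into newContext before snapshotting it; the drawing is baked into the frame.
    CGContextSetStrokeColorWithColor(newContext, [UIColor redColor].CGColor);
    CGContextSetLineWidth(newContext, 4.0f);
    CGContextStrokeRect(newContext, CGRectMake(40, 40, width - 80, height - 80));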

It works fine.

http://crayoncoding.blogspot.com/2011/04/iphone-4-camera-views-at-once.html

See the link above for the detailed code.


梦境 2024-09-22 15:22:23


You can try using four UIImagePickerControllers. I'm not sure whether it will work, but it's worth a shot.

Access the camera with iPhone SDK
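
That said, the camera picker is presented as a single full-screen modal interface, so showing four live picker views at once is unlikely to work; the AVFoundation answer above is the reliable route for the four-view grid. UIImagePickerController can still cover the drawing half of the question via its cameraOverlayView property. A sketch, where drawingView is an assumed transparent UIView that handles touch drawing:

    UIImagePickerController *picker = [[[UIImagePickerController alloc] init] autorelease];
    picker.sourceType = UIImagePickerControllerSourceTypeCamera;
    picker.showsCameraControls = NO;        // hide the stock shutter controls
    picker.cameraOverlayView = drawingView; // assumed: a transparent view you draw on
    [self presentModalViewController:picker animated:YES];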
