Is it possible to draw a rectangle on top of an AVCaptureVideoPreviewLayer?
I've been banging my head on this for a few days now.
I want to draw a rectangle on top of a CALayer (an AVCaptureVideoPreviewLayer), which just happens to be the video feed from the camera on an iPhone 4.
Here's part of my setup:
//(in function for initialization)
- (void)initDevices {
    AVCaptureDeviceInput *captureInput =
        [AVCaptureDeviceInput deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo]
                                              error:nil];

    AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    captureOutput.alwaysDiscardsLateVideoFrames = YES;
    captureOutput.minFrameDuration = CMTimeMake(1, 30);

    // Deliver sample buffers on a serial background queue.
    dispatch_queue_t queue = dispatch_queue_create("cameraQueue", NULL);
    [captureOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    // Ask for BGRA pixel buffers.
    NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    [captureOutput setVideoSettings:videoSettings];

    self.captureSession = [[AVCaptureSession alloc] init];
    [self.captureSession addInput:captureInput];
    [self.captureSession addOutput:captureOutput];
    [self.captureSession setSessionPreset:AVCaptureSessionPresetHigh];

    self.prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
    self.prevLayer.frame = CGRectMake(0, 0, 400, 400);
    self.prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    self.prevLayer.delegate = self;
    [self.view.layer addSublayer:self.prevLayer];
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    [self performSelectorOnMainThread:@selector(drawme) withObject:nil waitUntilDone:YES];
}

- (void)drawme {
    [self.prevLayer setNeedsDisplay];
}

// delegate function that draws to a CALayer
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
    NSLog(@"hello layer!");
    CGContextSetRGBFillColor(ctx, 1, 0, 0, 1);
    CGContextFillRect(ctx, CGRectMake(0, 0, 200, 100));
}
Is this even possible? With my current code, "hello layer!" gets printed, but no filled rectangle shows up over the camera feed.
Any help would be awesome. :)
2 Answers
I think you should add another layer on top of the AVCaptureVideoPreviewLayer and draw into that instead; I modified the example code for you. You can try it.
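A minimal sketch of that approach, assuming a plain CALayer overlay is added as a sublayer of the preview layer and the view controller stays the drawing delegate (the overlayLayer property and addOverlayLayer method are hypothetical names, not part of the original code):

// Hypothetical sketch: draw into a separate overlay layer placed above the video,
// instead of asking the AVCaptureVideoPreviewLayer itself to redraw.
- (void)addOverlayLayer {
    CALayer *overlay = [CALayer layer];
    overlay.frame = self.prevLayer.bounds;  // cover the whole preview
    overlay.delegate = self;                // the existing drawLayer:inContext: draws here
    self.overlayLayer = overlay;            // assumed retained CALayer property

    [self.prevLayer addSublayer:overlay];   // sits on top of the video feed
    [overlay setNeedsDisplay];              // triggers the first drawLayer:inContext: call
}

With this in place, drawme would call [self.overlayLayer setNeedsDisplay] rather than redisplaying the preview layer itself.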
Or you can insert just an image. You can obviously keep using AVCaptureVideoPreviewLayer for the video capture, then create another CALayer() and use layer.insertSublayer(..., above: ...) to insert your "custom" layer above the video layer; by "custom" I just mean yet another CALayer with, let's say, an image as its contents.
Here are some more detailed instructions for Swift.