AVCaptureSession: specify the resolution and quality of captured images (Obj-C, iPhone app)
Hi, I want to set up an AV capture session to capture images with a specific resolution (and, if possible, a specific quality) using the iPhone camera. Here is the session setup code:
// Create and configure a capture session and start it running
- (void)setupCaptureSession
{
    NSError *error = nil;

    // Create the session
    self.captureSession = [[AVCaptureSession alloc] init];

    // Configure the session to produce lower resolution video frames, if your
    // processing algorithm can cope. We'll specify medium quality for the
    // chosen device.
    captureSession.sessionPreset = AVCaptureSessionPresetMedium;

    // Find a suitable AVCaptureDevice
    NSArray *cameras = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    AVCaptureDevice *device;
    if ([UserDefaults camera] == UIImagePickerControllerCameraDeviceFront)
    {
        device = [cameras objectAtIndex:1];
    }
    else
    {
        device = [cameras objectAtIndex:0];
    }

    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input)
    {
        NSLog(@"PANIC: no media input");
    }
    [captureSession addInput:input];

    // Create a VideoDataOutput and add it to the session
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [captureSession addOutput:output];
    NSLog(@"connections: %@", output.connections);

    // Configure your output.
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    // Specify the pixel format
    output.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                       forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    // If you wish to cap the frame rate to a known value, such as 15 fps, set
    // minFrameDuration.
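    // For example (minFrameDuration on AVCaptureVideoDataOutput is the API of
    // this sample's era; it was later deprecated in favour of
    // AVCaptureConnection's videoMinFrameDuration):
    // output.minFrameDuration = CMTimeMake(1, 15);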
    // Assign session to an ivar.
    [self setSession:captureSession];
    [self.captureSession startRunning];
}
And the setSession method:
- (void)setSession:(AVCaptureSession *)session
{
    NSLog(@"setting session...");
    self.captureSession = session;

    NSLog(@"setting camera view");
    self.previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
    //UIView *aView = self.view;
    CGRect videoRect = CGRectMake(20.0, 20.0, 280.0, 255.0);
    previewLayer.frame = videoRect; // Assume you want the preview layer to fill the view.
    [previewLayer setBackgroundColor:[[UIColor grayColor] CGColor]];
    [self.view.layer addSublayer:previewLayer];
    //[aView.layer addSublayer:previewLayer];
}
And the output method:
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    //NSLog(@"captureOutput: didOutputSampleBufferFromConnection");

    // Create a UIImage from the sample buffer data
    self.currentImage = [self imageFromSampleBuffer:sampleBuffer];

    //< Add your code here that uses the image >
}
// Create a UIImage from sample buffer data
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    //NSLog(@"imageFromSampleBuffer: called");

    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return image;
}
Everything here is pretty standard. But where, and what, should I change to specify the resolution and quality of the captured images? Please help.
2 Answers
Refer to the "Capturing Still Images" section of Apple's guide regarding which sizes you'll get if you set one or another preset. The parameter you should change is captureSession.sessionPreset, which has a type of AVCaptureSession.Preset.
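For instance, a minimal sketch (the 640x480 preset is only an illustration; canSetSessionPreset: guards against presets the current device doesn't support):

// Sketch: switch the session to a 640x480 preset, if supported.
if ([self.captureSession canSetSessionPreset:AVCaptureSessionPreset640x480])
{
    self.captureSession.sessionPreset = AVCaptureSessionPreset640x480;
}
// Other options include AVCaptureSessionPresetPhoto (full photo resolution),
// AVCaptureSessionPreset1280x720, AVCaptureSessionPresetHigh, etc.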
Try to go with something like this, where cx and cy are your custom resolutions:
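The original snippet ends here; what follows is a sketch of the usual technique, assuming iOS 7+ where AVCaptureDevice exposes its supported formats (the method name selectFormatForDevice:width:height: is made up for illustration):

// Sketch: walk the device's supported formats and activate the first one
// whose dimensions are cx x cy. Setting activeFormat overrides the
// session preset, giving you the exact resolution the hardware offers.
- (BOOL)selectFormatForDevice:(AVCaptureDevice *)device width:(int32_t)cx height:(int32_t)cy
{
    for (AVCaptureDeviceFormat *format in device.formats)
    {
        CMVideoDimensions dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription);
        if (dims.width == cx && dims.height == cy)
        {
            NSError *error = nil;
            if ([device lockForConfiguration:&error])
            {
                device.activeFormat = format;
                [device unlockForConfiguration];
                return YES;
            }
        }
    }
    return NO; // no format with the requested dimensions
}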