Saving AVCaptureSession sample buffer output to Core Data
I am using an AVCaptureSession to capture frames from the camera via the setSampleBufferDelegate method of the AVCaptureVideoDataOutput class. The delegate method looks like the following. You can see that I convert each frame to a UIImage and place it in a UIImageView. I would like to save each UIImage to disk and store its URL in a new managed object, but I don't know how to properly get a managed object context, since each callback arrives on a serial dispatch queue rather than the main thread. Can anyone suggest a way to use Core Data with dispatch queues so that I can build a collection of images that are stored on disk and correspond to managed objects?
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    /* Lock the image buffer */
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    /* Get information about the image */
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    /* Create a CGImageRef from the CVImageBufferRef */
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    /* Release some components */
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    /* Display the result in the image view. We need to rotate the image so the
       video is displayed correctly, and, as with the CALayer, we are not on the
       main thread, so we hop over to it. */
    UIImage *image = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight];
    /* Release the CGImageRef */
    CGImageRelease(newImage);
    [self.imageView performSelectorOnMainThread:@selector(setImage:) withObject:image waitUntilDone:YES];
    /* Unlock the image buffer */
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    [pool drain];
}
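For the disk-saving step the question describes, a minimal sketch of writing each frame's UIImage to the Documents directory and obtaining a file URL that could later be stored on a managed object (the file-naming scheme and compression quality are illustrative assumptions):

```objc
/* Illustrative sketch: write the captured UIImage as JPEG into the
   Documents directory and produce a file URL to persist later. */
NSData *jpegData = UIImageJPEGRepresentation(image, 0.8); // 0.8 = compression quality
NSString *docsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                         NSUserDomainMask, YES) objectAtIndex:0];
NSString *fileName = [NSString stringWithFormat:@"frame-%f.jpg",
                      [[NSDate date] timeIntervalSince1970]]; // illustrative naming
NSString *path = [docsDir stringByAppendingPathComponent:fileName];
NSURL *imageURL = nil;
if ([jpegData writeToFile:path atomically:YES]) {
    imageURL = [NSURL fileURLWithPath:path];
}
```

Note that encoding and writing a JPEG for every frame is expensive; in practice you would likely throttle how often this runs on the capture queue.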
The recommended solution is to create a new NSManagedObjectContext for each thread, each pointing to a single NSPersistentStoreCoordinator. You may also want to listen for NSManagedObjectContextDidSaveNotification to merge the changes into the main thread's context (using the aptly named mergeChangesFromContextDidSaveNotification:). Personally, I like to use an accessor like this in a central place to handle the per-thread contexts:
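The accessor itself did not survive in this copy of the answer; a minimal sketch of the usual per-thread pattern from that era, assuming a `persistentStoreCoordinator` property on the same object (the method name and dictionary key are illustrative):

```objc
/* Sketch: lazily create and cache one NSManagedObjectContext per thread,
   all sharing a single NSPersistentStoreCoordinator, using the current
   thread's threadDictionary as the cache. */
- (NSManagedObjectContext *)managedObjectContextForCurrentThread {
    NSMutableDictionary *threadDict = [[NSThread currentThread] threadDictionary];
    NSManagedObjectContext *context = [threadDict objectForKey:@"managedObjectContext"];
    if (context == nil) {
        context = [[[NSManagedObjectContext alloc] init] autorelease];
        [context setPersistentStoreCoordinator:self.persistentStoreCoordinator];
        [threadDict setObject:context forKey:@"managedObjectContext"];
    }
    return context;
}
```

Because a serial dispatch queue may service its blocks from more than one underlying thread, caching per thread (rather than per queue) is the conservative choice here.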
Do remember that you cannot pass NSManagedObjects between threads any more easily than you can pass contexts. Instead, you must pass an NSManagedObjectID (from the object's objectID property), and then, in the destination thread, use that thread's context's objectWithID: method to get back an equivalent object.
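Concretely, the objectID hand-off might look like the following sketch. The entity name "Frame", the "imageURL" attribute, the `imageURL` variable, and the `managedObjectContextForCurrentThread` accessor are all assumptions for illustration:

```objc
/* Sketch: create the managed object on the capture queue's context,
   save, and hand only the NSManagedObjectID to the main thread. */
NSManagedObjectContext *bgContext = [self managedObjectContextForCurrentThread];
NSManagedObject *frame = [NSEntityDescription insertNewObjectForEntityForName:@"Frame"
                                                       inManagedObjectContext:bgContext];
[frame setValue:[imageURL absoluteString] forKey:@"imageURL"];

NSError *error = nil;
if ([bgContext save:&error]) {
    NSManagedObjectID *objectID = [frame objectID];
    dispatch_async(dispatch_get_main_queue(), ^{
        /* Re-fetch an equivalent object in the main thread's context. */
        NSManagedObject *mainThreadFrame =
            [[self managedObjectContextForCurrentThread] objectWithID:objectID];
        /* ... use mainThreadFrame to update the UI ... */
    });
}
```

Saving before passing the objectID matters: an unsaved object has a temporary ID that may not resolve correctly in another context.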