How to convert a CVImageBufferRef to a UIImage

I am trying to capture video from the camera. I have gotten the captureOutput:didOutputSampleBuffer: callback to trigger, and it gives me a sample buffer that I then convert to a CVImageBufferRef. I then attempt to convert that image to a UIImage that I can view in my app.

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
    /*Lock the image buffer*/
    CVPixelBufferLockBaseAddress(imageBuffer,0); 
    /*Get information about the image*/
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer); 
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    size_t width = CVPixelBufferGetWidth(imageBuffer); 
    size_t height = CVPixelBufferGetHeight(imageBuffer); 

    /*We unlock the image buffer*/
    CVPixelBufferUnlockBaseAddress(imageBuffer,0);

    /*Create a CGImageRef from the CVImageBufferRef*/
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); 
    CGImageRef newImage = CGBitmapContextCreateImage(newContext); 

    /*We release some components*/
    CGContextRelease(newContext); 
     CGColorSpaceRelease(colorSpace);

     /*We display the result on the custom layer*/
    /*self.customLayer.contents = (id) newImage;*/

    /*We display the result on the image view (We need to change the orientation of the image so that the video is displayed correctly)*/
    UIImage *image = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight];
    self.capturedView.image = image;

    /*We release the CGImageRef*/
    CGImageRelease(newImage);
}

The code seems to work fine up until the call to CGBitmapContextCreate. It always returns a NULL pointer, so consequently none of the rest of the function works. No matter what I pass it, the function returns NULL. I have no idea why.

丑丑阿 2024-09-14 07:51:30

If you need to convert a CVImageBufferRef to a UIImage, it unfortunately seems to be much more difficult than it should be.

Essentially you need to first convert it to a CIImage, then a CGImage, and finally a UIImage. I wish I could tell you why. :)

-(void)screenshotOfVideoStream:(CVImageBufferRef)imageBuffer
{
    /*Wrap the pixel buffer in a CIImage (no pixel copy happens yet)*/
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
    CIContext *temporaryContext = [CIContext contextWithOptions:nil];
    /*Render the CIImage into a bitmap-backed CGImage*/
    CGImageRef videoImage = [temporaryContext
                                 createCGImage:ciImage
                                 fromRect:CGRectMake(0, 0,
                                 CVPixelBufferGetWidth(imageBuffer),
                                 CVPixelBufferGetHeight(imageBuffer))];

    UIImage *image = [[UIImage alloc] initWithCGImage:videoImage];
    [self doSomethingWithOurUIImage:image];
    /*We own the CGImage returned by createCGImage:fromRect:, so release it*/
    CGImageRelease(videoImage);
}

This particular method worked for me when I was converting H.264 video using the VTDecompressionSession callback to get the CVImageBufferRef (but it should work for any CVImageBufferRef). I was using iOS 8.1 and Xcode 6.2.
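
One performance note beyond the original answer: creating a CIContext is expensive, so if you convert every frame it is usually better to create the context once and reuse it. Below is a minimal sketch of that idea; the ciContext property and the method name are illustrative, not part of the original answer (declare ciContext yourself as a strong CIContext property):

#import <CoreImage/CoreImage.h>
#import <UIKit/UIKit.h>

- (UIImage *)imageFromPixelBuffer:(CVImageBufferRef)imageBuffer
{
    if (self.ciContext == nil) {
        /*Context creation is costly; do it once rather than per frame*/
        self.ciContext = [CIContext contextWithOptions:nil];
    }
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
    CGImageRef cgImage = [self.ciContext createCGImage:ciImage
                                              fromRect:CGRectMake(0, 0,
                                                  CVPixelBufferGetWidth(imageBuffer),
                                                  CVPixelBufferGetHeight(imageBuffer))];
    UIImage *image = [[UIImage alloc] initWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return image;
}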

后知后觉 2024-09-14 07:51:30

The way you are passing the baseAddress presumes that the image data is in the form

ACCC

(where C is some color component: R, G, or B).

If you've set up your AVCaptureSession to capture the video frames in native format, more than likely you're getting the video data back in planar YUV420 format (see: link text). In order to do what you're attempting here, probably the easiest thing would be to specify that you want the video frames captured in kCVPixelFormatType_32RGBA. Apple recommends that you capture the video frames in kCVPixelFormatType_32BGRA if you capture them in a non-planar format at all; the reasoning for this is not stated, but I can reasonably assume it is due to performance considerations.

Caveat: I've not done this, and I am assuming that accessing the CVPixelBufferRef contents like this is a reasonable way to build the image. I can't vouch for this actually working, but I /can/ tell you that the way you are doing things right now reliably will not work, due to the pixel format in which you are (probably) capturing the video frames.
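
To make that concrete, here is a minimal sketch (not from the original answer) of asking an AVCaptureVideoDataOutput for BGRA frames; session is assumed to be your existing AVCaptureSession, and self is assumed to implement AVCaptureVideoDataOutputSampleBufferDelegate:

#import <AVFoundation/AVFoundation.h>

AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
/*Ask for non-planar BGRA frames instead of the default planar YUV420*/
videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
[videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
if ([session canAddOutput:videoOutput]) {
    [session addOutput:videoOutput];
}

With kCVPixelFormatType_32BGRA, the kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst flags in the question's CGBitmapContextCreate call do describe the buffer layout. Separately, note that the question unlocks the pixel buffer before the bitmap context has been used; it is safer to keep the buffer locked until after CGBitmapContextCreateImage has been called.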

千寻… 2024-09-14 07:51:30

You can directly call:

self.yourImageView.image = [[UIImage alloc] initWithCIImage:[CIImage imageWithCVPixelBuffer:imageBuffer]];
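
One caveat worth adding (not in the original answer): a UIImage created this way is backed by a CIImage rather than a CGImage, so APIs that expect a bitmap-backed image (UIImagePNGRepresentation, for example, which returns nil when there is no underlying CGImage) will not work with it. If you need a bitmap, render through a CIContext as in the first answer.
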
忆梦 2024-09-14 07:51:30

Benjamin Loulier wrote a really good post comparing multiple approaches to outputting a CVImageBufferRef, with speed as a consideration.

You can also find a working example on github ;)

The original post is no longer online, so how about going back in time? ;)
Here you go: http://web.archive.org/web/20140426162537/http://www.benjaminloulier.com/posts/ios4-and-direct-access-to-the-camera
