UIImagePickerController: getting an image from a video



I'm trying to extract a frame from a video as an image. The video is recorded using UIImagePickerController.

When the video has been recorded I get its URL and load it using AVURLAsset. Then I create an AVAssetReader and an AVAssetReaderTrackOutput to get the individual frames.
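For reference, here is a minimal Swift sketch of that setup (videoURL stands in for the URL returned by the picker, and the nil outputSettings mirror the setup described here; both are assumptions, not code from the question):

    import AVFoundation

    // Load the recorded movie and set up a reader for its video track.
    let asset = AVURLAsset(url: videoURL)
    let reader = try! AVAssetReader(asset: asset)   // force-try only for brevity in this sketch
    let videoTrack = asset.tracks(withMediaType: .video).first!

    // outputSettings is nil here, matching the setup described in the question.
    let output = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: nil)
    reader.add(output)
    reader.startReading()

    while let sampleBuffer = output.copyNextSampleBuffer() {
        // Each CMSampleBuffer is one frame of the track.
    }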

When I get the frames as CMSampleBufferRefs, I pass them to Apple's imageFromSampleBuffer method, which should return a UIImage. This method worked fine when I was getting frames using an AVCaptureSession, but when I use a video recorded via UIImagePickerController, this line returns 0x0:

CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 

where sampleBuffer is the CMSampleBufferRef that I pass.

I checked the value of sampleBuffer in the debugger and it looked fine (it wasn't 0x0). Is there any reason why CMSampleBufferGetImageBuffer would return 0x0?

Or alternatively is there another way to extract a single video frame from a MOV file and save it as a UIImage?

Thanks.


2 Answers

战皆罪 2024-10-24 18:28:51


I found the solution. You have to set the pixel format in the settings passed when you initialize an AVAssetReaderTrackOutput.

I passed the following settings dictionary:

NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key]; 
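In Swift, the same fix amounts to passing a pixel-format entry as the outputSettings when creating the track output. A rough sketch (videoTrack is an assumed AVAssetTrack, as in the question's setup); without these settings the reader vends frames in their stored, compressed format, in which case the sample buffers carry no CVImageBuffer and CMSampleBufferGetImageBuffer returns NULL:

    // Ask the track output to decode frames into 32BGRA pixel buffers so that
    // CMSampleBufferGetImageBuffer returns a usable CVImageBuffer.
    let videoSettings: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ]
    let output = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: videoSettings)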

Hope this helps someone else.

晨曦÷微暖 2024-10-24 18:28:51


I did it like this some time ago.

    import AVFoundation
    import UIKit

    // Converts a decoded CMSampleBuffer (BGRA pixel format, as in the settings
    // from the answer above) into a UIImage.
    func getPicture(from sampleBuffer: CMSampleBuffer) -> UIImage {

        let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!

        let videoWidth  = CVPixelBufferGetWidth(imageBuffer)
        let videoHeight = CVPixelBufferGetHeight(imageBuffer)

        CVPixelBufferLockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))
        defer { CVPixelBufferUnlockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0)) }

        // Create an image from the imageBuffer, i.e. from one frame.
        let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
        let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
        let colorSpace  = CGColorSpaceCreateDeviceRGB()
        var bitmapInfo  = CGBitmapInfo.byteOrder32Little.rawValue
        bitmapInfo |= CGImageAlphaInfo.premultipliedFirst.rawValue

        let context = CGContext(data: baseAddress, width: videoWidth, height: videoHeight,
                                bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                                space: colorSpace, bitmapInfo: bitmapInfo)!

        let frameImage = context.makeImage()!   // This is a CGImage
        return UIImage(cgImage: frameImage)
    }
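Assuming the reader loop from the sketch under the question, a hypothetical way to drive this helper would be:

    // reader and output come from the AVAssetReader setup sketched earlier;
    // getPicture(from:) is the helper above.
    while reader.status == .reading, let buffer = output.copyNextSampleBuffer() {
        let image = getPicture(from: buffer)
        // ... use or save the UIImage (e.g. keep the first frame and stop)
    }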