Reading a video frame by frame on iOS

Posted 2024-11-25 16:16:34

I'm looking for a way to retrieve the individual frames of a video using the iOS APIs.
I tried using AVAssetImageGenerator, but it seems to only provide frames to the nearest second, which is a bit too coarse for my usage.

From what I understand of the documentation, with a pipeline of AVAssetReader, AVAssetReaderOutput and CMSampleBufferGetImageBuffer I should be able to do something, but I'm stuck with a CVImageBufferRef. From there I'm looking for a way to get a CGImageRef or a UIImage, but I haven't found it.
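
For reference, here is a minimal sketch of that pipeline, written in Swift with current API names (the same AVAssetReader → copyNextSampleBuffer → CMSampleBufferGetImageBuffer flow applies in Objective-C). The Core Image conversion at the end is one common way to turn the CVImageBufferRef into a UIImage, not necessarily the only one; the function name and the idea of collecting every frame into an array are illustrative assumptions and only reasonable for short clips:

import AVFoundation
import UIKit

// Hypothetical helper: decode every frame of a local video file into UIImages.
func extractFrames(from url: URL) throws -> [UIImage] {
    let asset = AVURLAsset(url: url)
    guard let track = asset.tracks(withMediaType: .video).first else { return [] }

    let reader = try AVAssetReader(asset: asset)
    // Ask for BGRA pixel buffers so Core Image can wrap them directly.
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ])
    reader.add(output)
    reader.startReading()

    let context = CIContext()
    var frames: [UIImage] = []
    // copyNextSampleBuffer() returns nil when the reader has no more samples (or fails).
    while let sample = output.copyNextSampleBuffer() {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sample) else { continue }
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        if let cgImage = context.createCGImage(ciImage, from: ciImage.extent) {
            frames.append(UIImage(cgImage: cgImage))
        }
    }
    return frames
}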

Real-time performance is not needed, and the more I can stick to the provided APIs the better.

Thanks a lot!

Edit:
Based on this site: http://www.7twenty7.com/blog/2010/11/video-processing-with-av-foundation and this question: how to convert a CVImageBufferRef to UIImage, I'm nearing a solution. Problem: the AVAssetReader stops reading after the first copyNextSampleBuffer without giving me anything (the sampleBuffer is NULL).

The video is readable by MPMoviePlayerController. I don't understand what's wrong.

Comments (3)

小矜持 2024-12-02 16:16:34

The two links above actually answer my question. The empty copyNextSampleBuffer is an issue with iOS SDK 5.0b3; it works on the device.

眼眸里的快感 2024-12-02 16:16:34

AVAssetImageGenerator has very loose default tolerances for the exact frame time that is grabbed. It has two properties that determine the tolerance: requestedTimeToleranceBefore and requestedTimeToleranceAfter. These tolerances default to kCMTimePositiveInfinity, so if you want exact times, set them to kCMTimeZero to get exact frames.

(It may take longer to grab the exact frames than approximate frames, but you state that realtime is not an issue.)
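
A minimal Swift sketch of this (kCMTimeZero is spelled .zero in Swift; the helper name, the URL parameter, and the example time are placeholders):

import AVFoundation

// Hypothetical helper: grab the exact frame at `time` from the video at `url`.
func exactFrame(from url: URL, at time: CMTime) throws -> CGImage {
    let generator = AVAssetImageGenerator(asset: AVURLAsset(url: url))
    generator.requestedTimeToleranceBefore = .zero  // kCMTimeZero
    generator.requestedTimeToleranceAfter = .zero   // kCMTimeZero
    return try generator.copyCGImage(at: time, actualTime: nil)
}

// Example: the frame at 1.0 s, with zero tolerance on either side.
// let frame = try exactFrame(from: videoURL, at: CMTime(seconds: 1.0, preferredTimescale: 600))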

笨死的猪 2024-12-02 16:16:34

Use AVReaderWriter. Though it's OS X Apple sample code, AVFoundation is available on both platforms with only minor changes.
