CIImage CVPixelBuffer is nil after scaling down

Posted 2025-02-02 13:54:24

I'm using the AVVideoComposition API to get CIImages from a local video, and after scaling down the CIImage I'm getting nil when trying to get the CVPixelBuffer.
Before scaling, I can get the original frame's CVPixelBuffer just fine.
Is there any reason the buffer is nil after scaling down?

Sample:

AVVideoComposition(asset: asset) { [weak self] request in
    let source = request.sourceImage
    let pixelBuffer = source.pixelBuffer // returns a value
    let scaledDown = source.transformed(by: .init(scaleX: 0.5, y: 0.5))
    let scaledPixelBuffer // returns nil
}

Comments (1)

土豪我们做朋友吧 2025-02-09 13:54:24

I think the last line in your sample is incomplete. Did you mean let scaledPixelBuffer = scaledDown.pixelBuffer? If so, then yes, this won't work. The reason is that the pixelBuffer property is only available if the CIImage was created directly from a CVPixelBuffer. From the docs:

If this image was created using the init(cvPixelBuffer:) initializer, this property's value is the CVPixelBuffer object that provides the image's underlying image data. […] Otherwise, this property's value is nil.
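
To make the distinction concrete, here is a minimal standalone sketch; the 64×64 BGRA buffer is just a throwaway placeholder for the demo:

import CoreImage
import CoreVideo

// Create a small BGRA pixel buffer purely for demonstration.
var buffer: CVPixelBuffer?
CVPixelBufferCreate(kCFAllocatorDefault, 64, 64, kCVPixelFormatType_32BGRA, nil, &buffer)

if let buffer = buffer {
    let image = CIImage(cvPixelBuffer: buffer)
    print(image.pixelBuffer != nil)  // true: created directly from a CVPixelBuffer

    let scaled = image.transformed(by: CGAffineTransform(scaleX: 0.5, y: 0.5))
    print(scaled.pixelBuffer != nil) // false: the transformed image is a new CIImage, not backed by that buffer
}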

The CIImage that is passed to the composition block was created from a pixel buffer provided by AVFoundation. But when you apply a filter or transform to it, you need to render the resulting image into a pixel buffer explicitly using a CIContext; otherwise you won't get a pixel buffer for the result.
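
A rough sketch of that explicit render step, assuming you allocate the destination buffer yourself; renderToPixelBuffer is a hypothetical helper name and 32BGRA is an assumed pixel format:

import CoreImage
import CoreVideo

let ciContext = CIContext() // reuse one context; creating it per frame is expensive

// Hypothetical helper: render a filtered or transformed CIImage into a newly allocated CVPixelBuffer.
func renderToPixelBuffer(_ image: CIImage, using context: CIContext) -> CVPixelBuffer? {
    var buffer: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault,
                        Int(image.extent.width),
                        Int(image.extent.height),
                        kCVPixelFormatType_32BGRA,
                        nil,
                        &buffer)
    guard let output = buffer else { return nil }
    context.render(image, to: output) // the explicit render step
    return output
}

// Inside the composition handler:
// let scaledDown = request.sourceImage.transformed(by: .init(scaleX: 0.5, y: 0.5))
// let scaledPixelBuffer = renderToPixelBuffer(scaledDown, using: ciContext)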

If you want to change the size of the video frames the composition is using, you can use an AVMutableVideoComposition instead and set its renderSize to your desired size after it is initialized:

let composition = AVMutableVideoComposition(asset: asset) { … }
composition.renderSize = CGSize(width: 1280, height: 720)