CIImage CVPixelBuffer is nil after scaling down
I'm using the AVVideoComposition API to get CIImages from a local video, and after scaling down the CIImage I'm getting nil when trying to get the CVPixelBuffer. Before scaling down the source frame, I'm getting the original frame's CVPixelBuffer. Is there any reason the buffer is nil after scaling down?
Sample:
AVVideoComposition(asset: asset) { [weak self] request in
    let source = request.sourceImage
    let pixelBuffer = source.pixelBuffer // returns a value
    let scaledDown = source.transformed(by: .init(scaleX: 0.5, y: 0.5))
    let scaledPixelBuffer // returns nil
}
Comments (1)
I think the last line in your sample is incomplete. Did you mean let scaledPixelBuffer = scaledDown.pixelBuffer? If so, then yes, this won't work. The reason is that the pixelBuffer property is only available if the CIImage was created directly from a CVPixelBuffer. From the docs: The CIImage that is passed to the composition block was created from a pixel buffer provided by AVFoundation. But when you apply a filter or transform to it, you need to render the resulting image into a pixel buffer explicitly using a CIContext, otherwise you won't get a result.
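The explicit render step described above might look like the sketch below. The makePixelBuffer helper and the 32BGRA pixel format are illustrative assumptions, not something from the original post:

import CoreImage
import CoreVideo

// Reusable context; creating one per frame would be wasteful.
let ciContext = CIContext()

// Renders a (possibly transformed) CIImage into a newly allocated pixel buffer.
func makePixelBuffer(from image: CIImage) -> CVPixelBuffer? {
    var buffer: CVPixelBuffer?
    // nil attributes keep the sketch simple; add IOSurface/compatibility keys as needed.
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     Int(image.extent.width),
                                     Int(image.extent.height),
                                     kCVPixelFormatType_32BGRA,
                                     nil,
                                     &buffer)
    guard status == kCVReturnSuccess, let pixelBuffer = buffer else { return nil }
    // The explicit render step: without it, scaledDown.pixelBuffer stays nil.
    ciContext.render(image, to: pixelBuffer)
    return pixelBuffer
}

// Usage inside the composition handler:
// let scaledDown = request.sourceImage.transformed(by: .init(scaleX: 0.5, y: 0.5))
// let scaledPixelBuffer = makePixelBuffer(from: scaledDown)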
If you want to change the size of the video frames the composition is using, you can use an AVMutableVideoComposition instead and set its renderSize to your desired size after it is initialized:
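A minimal sketch of that approach, assuming localVideoURL points at the same local video as in the question:

import AVFoundation

let asset = AVURLAsset(url: localVideoURL)

// Build the composition with the CIImage handler as before...
let composition = AVMutableVideoComposition(asset: asset) { request in
    request.finish(with: request.sourceImage, context: nil)
}

// ...then shrink renderSize after initialization so the composed frames
// are rendered at the smaller size.
composition.renderSize = CGSize(width: composition.renderSize.width / 2,
                                height: composition.renderSize.height / 2)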