Convert a UIImage to a CVPixelBufferRef
I want to convert a UIImage object to a CVPixelBufferRef object, but I have absolutely no idea how. And I can't find any example code that does anything like this.
Can someone please help me? Thanks in advance!
C YA
Comments (5)
There are different ways to do that; these functions convert a CGImage into a pixel buffer. UIImage is a wrapper around CGImage, so to get a CGImage you just need to access its .CGImage property. The other ways are to create a CIImage from the buffer (already posted) or to use the Accelerate framework, which is probably the fastest but also the hardest.
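The functions mentioned above are not reproduced in this post. As a rough sketch of the plain Core Graphics route (not the Accelerate one), something like the following should work in Swift; the function name, the BGRA pixel format, and the compatibility attributes are my own choices, not from the original answer:

```swift
import CoreGraphics
import CoreVideo

// Hypothetical helper (name and BGRA format chosen here): renders a CGImage
// into a newly created CVPixelBuffer via a CGContext backed by the buffer.
func pixelBuffer(from cgImage: CGImage) -> CVPixelBuffer? {
    let width = cgImage.width
    let height = cgImage.height
    let attributes: [String: Any] = [
        kCVPixelBufferCGImageCompatibilityKey as String: true,
        kCVPixelBufferCGBitmapContextCompatibilityKey as String: true
    ]

    var buffer: CVPixelBuffer?
    guard CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                              kCVPixelFormatType_32BGRA,
                              attributes as CFDictionary,
                              &buffer) == kCVReturnSuccess,
          let pixelBuffer = buffer else { return nil }

    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    // Wrap the pixel buffer's memory in a bitmap context and draw into it.
    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
                                    | CGBitmapInfo.byteOrder32Little.rawValue)
    else { return nil }

    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    return pixelBuffer
}
```

You would call it as pixelBuffer(from: image.cgImage!); note that going through .cgImage drops the UIImage's orientation metadata, so a rotated photo may come out sideways.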
You can use Core Image to create a CVPixelBuffer from a UIImage.
AVFoundation provides classes that read video files (called assets) into pixel buffers, and that do the same for the output of other AVFoundation objects that handle (or have already read) assets. If that is your only concern, you'll find what you're looking for in the Sample Photo Editing Extension sample code.
If your source is generated from a series of UIImage objects (perhaps there was no source file, and you are creating a new file from user-generated content), then the sample code provided above will suffice.
NOTE: It is not the most efficient means nor the only means to convert a UIImage into a CVPixelBuffer; but, it is BY FAR the easiest means. Using Core Graphics to convert a UIImage into a CVPixelBuffer requires a lot more code to set up attributes, such as pixel buffer size and colorspace, which Core Image takes care of for you.
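For reference, a minimal sketch of this Core Image route might look like the following; the helper name and the BGRA/compatibility attributes are assumptions of mine, not taken from the Sample Photo Editing Extension code:

```swift
import UIKit
import CoreImage
import CoreVideo

// Hypothetical helper: lets Core Image do the rendering into the pixel buffer.
func coreImagePixelBuffer(from image: UIImage) -> CVPixelBuffer? {
    guard let ciImage = CIImage(image: image) else { return nil }

    let attributes: [String: Any] = [
        kCVPixelBufferCGImageCompatibilityKey as String: true,
        kCVPixelBufferCGBitmapContextCompatibilityKey as String: true
    ]

    var buffer: CVPixelBuffer?
    guard CVPixelBufferCreate(kCFAllocatorDefault,
                              Int(ciImage.extent.width),
                              Int(ciImage.extent.height),
                              kCVPixelFormatType_32BGRA,
                              attributes as CFDictionary,
                              &buffer) == kCVReturnSuccess,
          let pixelBuffer = buffer else { return nil }

    // Core Image handles the color space and row stride for us.
    CIContext().render(ciImage, to: pixelBuffer)
    return pixelBuffer
}
```

As the answer notes, CIContext.render(_:to:) takes care of the color space and row-stride details that the Core Graphics route makes you set up by hand.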
Google is always your friend. Searching for "CVPixelBufferRef", the first result leads to this snippet from snipplr:
No idea if this works at all, though. (Your mileage may vary :)
Very late, but for anyone who still needs it, here is a method that does the conversion:
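The original snippet is not included above, so below is a minimal sketch of such a conversion in Swift; the UIImage extension, the toPixelBuffer() name, and the ARGB format are hypothetical choices of mine:

```swift
import UIKit
import CoreVideo

// A sketch of the sort of conversion method the answer refers to; the
// extension and method name are hypothetical, not from the original post.
extension UIImage {
    func toPixelBuffer() -> CVPixelBuffer? {
        // size is in points; multiply by `scale` if you need full pixel resolution.
        let width = Int(size.width)
        let height = Int(size.height)
        let attributes: [String: Any] = [
            kCVPixelBufferCGImageCompatibilityKey as String: true,
            kCVPixelBufferCGBitmapContextCompatibilityKey as String: true
        ]

        var buffer: CVPixelBuffer?
        guard CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                  kCVPixelFormatType_32ARGB,
                                  attributes as CFDictionary,
                                  &buffer) == kCVReturnSuccess,
              let pixelBuffer = buffer else { return nil }

        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

        guard let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
        else { return nil }

        // Flip to UIKit's top-left origin, then let UIKit draw the image.
        context.translateBy(x: 0, y: CGFloat(height))
        context.scaleBy(x: 1, y: -1)
        UIGraphicsPushContext(context)
        draw(in: CGRect(x: 0, y: 0, width: width, height: height))
        UIGraphicsPopContext()

        return pixelBuffer
    }
}
```

Unlike drawing the bare CGImage, going through UIImage.draw(in:) also respects the image's orientation.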
A CVPixelBufferRef is what Core Video uses for camera input.
You can create similar pixel bitmaps from images by using CGBitmapContextCreate and then drawing the image into the bitmap context.