Converting a UIImage to a CVPixelBufferRef

I want to convert a UIImage object to a CVPixelBufferRef object, but I have absolutely no idea how. And I can't find any example code doing anything like this.

Can someone please help me? THX in advance!

C YA


5 Answers

看春风乍起 2024-10-02 20:39:42

There are different ways to do that; the function below converts a CGImage into a pixel buffer. UIImage is a wrapper around CGImage, so to get a CGImage you just need to call its .CGImage method.
The other ways are to render a CIImage into the buffer (already posted in another answer) or to use the Accelerate framework, which is probably the fastest but also the hardest.

- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
    NSDictionary *options = @{
                              (NSString *)kCVPixelBufferCGImageCompatibilityKey : @YES,
                              (NSString *)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES,
                              };

    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, CGImageGetWidth(image),
                        CGImageGetHeight(image), kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options,
                        &pxbuffer);
    if (status != kCVReturnSuccess) {
        NSLog(@"CVPixelBufferCreate failed: %d", status);
    }
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

    // Use the buffer's own bytes-per-row: Core Video may pad rows,
    // so 4 * width is not guaranteed to match.
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, CGImageGetWidth(image),
                                                 CGImageGetHeight(image), 8,
                                                 CVPixelBufferGetBytesPerRow(pxbuffer), rgbColorSpace,
                                                 kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);

    // Flipping vertically and then horizontally rotates the drawing 180°;
    // drop these two transforms if your output comes out the wrong way up.
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, CGImageGetHeight(image));
    CGContextConcatCTM(context, flipVertical);
    CGAffineTransform flipHorizontal = CGAffineTransformMake(-1.0, 0.0, 0.0, 1.0, CGImageGetWidth(image), 0.0);
    CGContextConcatCTM(context, flipHorizontal);

    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer; // caller owns the buffer and must CVPixelBufferRelease() it
}
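
Note that the returned pixel buffer is a Core Foundation object the caller owns, so it must be released. A minimal call-site sketch (myImage is a placeholder for whatever UIImage you already have):

// Hypothetical call site: 'self' implements pixelBufferFromCGImage: above.
CVPixelBufferRef buffer = [self pixelBufferFromCGImage:myImage.CGImage];
if (buffer) {
    // ... hand the buffer to e.g. an AVAssetWriterInputPixelBufferAdaptor ...
    CVPixelBufferRelease(buffer); // balances the CVPixelBufferCreate inside the method
}
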
鹊巢 2024-10-02 20:39:42

You can use Core Image to create a CVPixelBuffer from a UIImage.

// 1. Create a CIImage with the underlying CGImage encapsulated by the UIImage (referred to as 'image'):

CIImage *inputImage = [CIImage imageWithCGImage:image.CGImage];

// 2. Create a CIContext:

CIContext *ciContext = [CIContext contextWithOptions:nil];

// 3. Render the CIImage to a CVPixelBuffer (referred to as 'outputBuffer'):

[ciContext render:inputImage toCVPixelBuffer:outputBuffer];

AVFoundation provides classes that read video files (called assets), and classes that turn the output of other AVFoundation objects which process (or have already read) assets into pixel buffers. If that is your only concern, you'll find what you're looking for in the Sample Photo Editing Extension sample code.

If your source is generated from a series of UIImage objects (perhaps there was no source file, and you are creating a new file from user-generated content), then the sample code provided above will suffice.

NOTE: This is neither the most efficient nor the only way to convert a UIImage into a CVPixelBuffer, but it is BY FAR the easiest. Using Core Graphics requires a lot more code to set up attributes, such as pixel buffer size and color space, which Core Image takes care of for you.
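
The steps above assume outputBuffer already exists. A minimal sketch of creating one with Core Video, sized to the source image, might look like this (names match the snippet above; picking kCVPixelFormatType_32BGRA is an assumption, chosen because it is a format Core Image renders to comfortably):

NSDictionary *attrs = @{ (NSString *)kCVPixelBufferCGImageCompatibilityKey : @YES,
                         (NSString *)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES };
CVPixelBufferRef outputBuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                      CGImageGetWidth(image.CGImage),
                                      CGImageGetHeight(image.CGImage),
                                      kCVPixelFormatType_32BGRA,
                                      (__bridge CFDictionaryRef)attrs,
                                      &outputBuffer);
if (status == kCVReturnSuccess) {
    [ciContext render:inputImage toCVPixelBuffer:outputBuffer];
    // ... use outputBuffer, then balance the create with a release:
    CVPixelBufferRelease(outputBuffer);
}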

要走就滚别墨迹 2024-10-02 20:39:42

Google is always your friend. Searching for "CVPixelBufferRef", the first result leads to this snippet from Snipplr:

- (CVPixelBufferRef)fastImageFromNSImage:(NSImage *)image {
    CVPixelBufferRef buffer = NULL;

    // config
    size_t width = [image size].width;
    size_t height = [image size].height;
    size_t bitsPerComponent = 8; // *not* CGImageGetBitsPerComponent(image);
    CGColorSpaceRef cs = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    CGBitmapInfo bi = kCGImageAlphaNoneSkipFirst; // *not* CGImageGetBitmapInfo(image);
    NSDictionary *d = [NSDictionary dictionaryWithObjectsAndKeys:
                       [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                       [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey, nil];

    // create pixel buffer
    CVPixelBufferCreate(kCFAllocatorDefault, width, height, k32ARGBPixelFormat, (CFDictionaryRef)d, &buffer);
    CVPixelBufferLockBaseAddress(buffer, 0);
    void *rasterData = CVPixelBufferGetBaseAddress(buffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(buffer);

    // context to draw in, set to pixel buffer's address
    CGContextRef ctxt = CGBitmapContextCreate(rasterData, width, height, bitsPerComponent, bytesPerRow, cs, bi);
    CGColorSpaceRelease(cs); // the context retains the color space
    if (ctxt == NULL) {
        NSLog(@"could not create context");
        CVPixelBufferUnlockBaseAddress(buffer, 0);
        CVPixelBufferRelease(buffer); // don't leak the buffer on failure
        return NULL;
    }

    // draw
    NSGraphicsContext *nsctxt = [NSGraphicsContext graphicsContextWithGraphicsPort:ctxt flipped:NO];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:nsctxt];
    [image compositeToPoint:NSMakePoint(0.0, 0.0) operation:NSCompositeCopy];
    [NSGraphicsContext restoreGraphicsState];

    CVPixelBufferUnlockBaseAddress(buffer, 0);
    CFRelease(ctxt);

    return buffer;
}

No idea if this works at all, though. (Your mileage may vary :)
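
For what it's worth, that snippet is AppKit (NSImage) code rather than iOS, and some of its calls have since been deprecated: k32ARGBPixelFormat is the old QuickTime constant (kCVPixelFormatType_32ARGB is the Core Video equivalent), and on a modern macOS SDK (assuming 10.12 or later) the drawing section would look more like:

// Wrap the CGContext and draw the NSImage with the non-deprecated APIs.
NSGraphicsContext *nsctxt = [NSGraphicsContext graphicsContextWithCGContext:ctxt flipped:NO];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:nsctxt];
[image drawAtPoint:NSMakePoint(0.0, 0.0)
          fromRect:NSZeroRect
         operation:NSCompositingOperationCopy
          fraction:1.0];
[NSGraphicsContext restoreGraphicsState];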

香草可樂 2024-10-02 20:39:42

Very late, but for anyone who needs it.

// call like this
CVPixelBufferRef videobuffer = [self pixelBufferFromCGImage:yourImage.CGImage];

// method that converts
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image {

    // 'videoSize' is assumed to be a CGSize instance variable holding the
    // target buffer dimensions (for example, the output video's frame size).
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey, nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, videoSize.width, videoSize.height,
                                          kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options, &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    // Use the buffer's own bytes-per-row: Core Video may pad rows, so
    // 4 * width is not guaranteed to match.
    CGContextRef context = CGBitmapContextCreate(pxdata, videoSize.width, videoSize.height, 8,
                                                 CVPixelBufferGetBytesPerRow(pxbuffer),
                                                 rgbColorSpace, kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}
莫多说 2024-10-02 20:39:42

A CVPixelBufferRef is what Core Video uses for camera input.

You can create similar pixel bitmaps from images using CGBitmapContextCreate and then drawing the image into the bitmap context.
