iPhone RGBA to ARGB

Published 2024-09-27 14:25:30


I'm using glReadPixels to grab screenshots of my OpenGL scene and then turning them into a video using AVAssetWriter on iOS 4. My problem is that I need to pass the alpha channel to the video, which only accepts kCVPixelFormatType_32ARGB, and glReadPixels only retrieves RGBA. So basically I need a way to convert my RGBA to ARGB, in other words put the alpha bytes first.

int depth = 4;
unsigned char buffer[width * height * depth];  
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, width*height*depth, NULL);

CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast;

CGImageRef image = CGImageCreate(width, height, 8, 32, width*depth, CGColorSpaceCreateDeviceRGB(), bitmapInfo, ref, NULL, true, kCGRenderingIntentDefault);

UIWindow* parentWindow = [self window];

NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey, [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey, nil];

CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32ARGB, (CFDictionaryRef) options, &pxbuffer);

NSParameterAssert(status == kCVReturnSuccess);
NSParameterAssert(pxbuffer != NULL);

CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);

CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, width, height, 8, depth*width, rgbColorSpace, kCGImageAlphaPremultipliedFirst);

NSParameterAssert(context);

CGContextConcatCTM(context, parentWindow.transform);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);

CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);

CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

return pxbuffer; // chuck pixel buffer into AVAssetWriter

Thought I would post the whole code as it may help someone else.

Cheers


Comments (6)

思慕 2024-10-04 14:25:30


Note: I'm assuming 8 bits per channel. Adjust accordingly if this is not the case.

To move the alpha bits from last to first, you need to perform a rotation. This is most easily expressed through bit shifting.

In this case, you want to shift the RGB bits 8 bits to the right and the A bits 24 bits to the left, then combine the two values with a bitwise OR, which gives argb = (rgba >> 8) | (rgba << 24).
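
As a minimal sketch (my own illustration, not the answerer's code): assuming 8 bits per channel and a 4-byte-aligned buffer, each pixel is loaded as a big-endian 32-bit value so that R occupies the most significant byte, rotated, and stored back:

#import <CoreFoundation/CoreFoundation.h>  // for CFSwapInt32BigToHost / CFSwapInt32HostToBig
#include <stdint.h>

// Rotate every pixel of an RGBA byte buffer (as filled by glReadPixels) into ARGB byte order.
// pixelCount = width * height; each pixel is 4 bytes. Assumes the buffer is 4-byte aligned.
static void RotateRGBAToARGB(unsigned char *bytes, size_t pixelCount)
{
    uint32_t *pixels = (uint32_t *)bytes;
    for (size_t i = 0; i < pixelCount; i++)
    {
        // Read the pixel as a big-endian value so R ends up in the most significant byte
        uint32_t rgba = CFSwapInt32BigToHost(pixels[i]);
        uint32_t argb = (rgba >> 8) | (rgba << 24);   // rotate A from the bottom byte to the top
        pixels[i] = CFSwapInt32HostToBig(argb);
    }
}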

爱给你人给你 2024-10-04 14:25:30


Even better, don't encode your video using ARGB, send your AVAssetWriter BGRA frames. As I describe in this answer, doing so lets you encode 640x480 video at 30 FPS on an iPhone 4, and up to 20 FPS for 720p video. An iPhone 4S can go all the way up to 1080p video at 30 FPS using this.

Also, you'll want to make sure you use a pixel buffer pool instead of recreating a pixel buffer each time. Copying the code from that answer, you configure the AVAssetWriter using this:

NSError *error = nil;

assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL fileType:AVFileTypeAppleM4V error:&error];
if (error != nil)
{
    NSLog(@"Error: %@", error);
}


NSMutableDictionary * outputSettings = [[NSMutableDictionary alloc] init];
[outputSettings setObject: AVVideoCodecH264 forKey: AVVideoCodecKey];
[outputSettings setObject: [NSNumber numberWithInt: videoSize.width] forKey: AVVideoWidthKey];
[outputSettings setObject: [NSNumber numberWithInt: videoSize.height] forKey: AVVideoHeightKey];


assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
assetWriterVideoInput.expectsMediaDataInRealTime = YES;

// You need to use BGRA for the video in order to get realtime encoding. I use a color-swizzling shader to line up glReadPixels' normal RGBA output with the movie input's BGRA.
NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
                                                       [NSNumber numberWithInt:videoSize.width], kCVPixelBufferWidthKey,
                                                       [NSNumber numberWithInt:videoSize.height], kCVPixelBufferHeightKey,
                                                       nil];

assetWriterPixelBufferInput = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];

[assetWriter addInput:assetWriterVideoInput];

then use this code to grab each rendered frame using glReadPixels():

CVPixelBufferRef pixel_buffer = NULL;

CVReturn status = CVPixelBufferPoolCreatePixelBuffer (NULL, [assetWriterPixelBufferInput pixelBufferPool], &pixel_buffer);
if ((pixel_buffer == NULL) || (status != kCVReturnSuccess))
{
    return;
}
else
{
    CVPixelBufferLockBaseAddress(pixel_buffer, 0);
    GLubyte *pixelBufferData = (GLubyte *)CVPixelBufferGetBaseAddress(pixel_buffer);
    glReadPixels(0, 0, videoSize.width, videoSize.height, GL_RGBA, GL_UNSIGNED_BYTE, pixelBufferData);
}

// May need to add a check here, because if two consecutive times with the same value are added to the movie, it aborts recording
CMTime currentTime = CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:startTime],120);

if(![assetWriterPixelBufferInput appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) 
{
    NSLog(@"Problem appending pixel buffer at time: %lld", currentTime.value);
} 
else 
{
//        NSLog(@"Recorded pixel buffer at time: %lld", currentTime.value);
}
CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);

CVPixelBufferRelease(pixel_buffer);

When using glReadPixels(), you need to swizzle the colors of your frame, so I've employed an offscreen FBO and a fragment shader with the following code to do this:

 varying highp vec2 textureCoordinate;

 uniform sampler2D inputImageTexture;

 void main()
 {
     gl_FragColor = texture2D(inputImageTexture, textureCoordinate).bgra;
 }

However, there is an even faster route on iOS 5.0 to grab OpenGL ES content than glReadPixels(), which I describe in this answer. The nice thing about that process is that the textures already store content in BGRA pixel format, so you can just feed the encapsulating pixel buffers right to an AVAssetWriter without any color conversion and still see great encoding speeds.
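
As a rough, illustrative sketch of that iOS 5.0 texture-cache route (not the answerer's exact code): it reuses assetWriterPixelBufferInput, videoSize and currentTime from the snippets above; videoTextureCache, renderTarget and context are assumed placeholders, error handling is omitted, and the adaptor's sourcePixelBufferAttributes would also need kCVPixelBufferIOSurfacePropertiesKey so the cache can accept the pool's buffers:

#import <CoreVideo/CVOpenGLESTextureCache.h>

// One-time setup: a texture cache tied to the EAGLContext used for rendering
CVOpenGLESTextureCacheRef videoTextureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, context, NULL, &videoTextureCache);

// Per frame: pull a BGRA pixel buffer from the adaptor's pool and wrap it in a texture
CVPixelBufferRef renderTarget = NULL;
CVPixelBufferPoolCreatePixelBuffer(NULL, [assetWriterPixelBufferInput pixelBufferPool], &renderTarget);

CVOpenGLESTextureRef renderTexture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, videoTextureCache, renderTarget, NULL,
                                             GL_TEXTURE_2D, GL_RGBA,
                                             (GLsizei)videoSize.width, (GLsizei)videoSize.height,
                                             GL_BGRA, GL_UNSIGNED_BYTE, 0, &renderTexture);

// Attach the texture as the color target of an (already bound) FBO and render the scene into it
glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                       CVOpenGLESTextureGetName(renderTexture), 0);

// ... draw the scene ...

// The pixel buffer now holds the rendered BGRA frame; append it without any readback or conversion
[assetWriterPixelBufferInput appendPixelBuffer:renderTarget withPresentationTime:currentTime];

CFRelease(renderTexture);
CVPixelBufferRelease(renderTarget);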

超可爱的懒熊 2024-10-04 14:25:30


I realize this question has been answered, but I wanted to make sure folks are aware of vImage, part of the Accelerate framework and available in iOS and OSX. My understanding is that vImage is used by Core Graphics to do CPU-bound vector operations on bitmaps.

The specific API you want to convert RGBA to ARGB is vImagePermuteChannels_ARGB8888. There are also APIs to convert RGB to ARGB/XRGB, to flip an image, to overwrite a channel, and much more. It's kind of a hidden gem!

Update: Brad Larson wrote a great answer to essentially the same question here.
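
For illustration, a minimal sketch of that call for an RGBA-to-ARGB permute (rgbaBytes, argbBytes, width and height are placeholders here; the permute map says which source channel feeds each destination channel, so destination channel 0 (A) takes source channel 3):

#import <Accelerate/Accelerate.h>

// rgbaBytes: width*height*4 bytes in RGBA order (e.g. straight from glReadPixels)
// argbBytes: a destination buffer of the same size
vImage_Buffer src  = { .data = rgbaBytes, .height = height, .width = width, .rowBytes = width * 4 };
vImage_Buffer dest = { .data = argbBytes, .height = height, .width = width, .rowBytes = width * 4 };

// dest[i] <- src[permuteMap[i]]: A<-3, R<-0, G<-1, B<-2
const uint8_t permuteMap[4] = { 3, 0, 1, 2 };
vImage_Error err = vImagePermuteChannels_ARGB8888(&src, &dest, permuteMap, kvImageNoFlags);
if (err != kvImageNoError) {
    NSLog(@"vImagePermuteChannels_ARGB8888 failed: %ld", (long)err);
}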

我早已燃尽 2024-10-04 14:25:30


Yep, it's 8 bits per channel, so is it something like:

int depth = 4;
int width = 320;
int height = 480;

unsigned char buffer[width * height * depth]; 

glReadPixels(0,0,width, height, GL_RGBA, GL_UNSIGNED_BYTE, &buffer);

for(int i = 0; i < width; i++){
   for(int j = 0; j < height; j++){     
    buffer[i*j] = (buffer[i*j] >> 8) | (buffer[i*j] << 24);
    }
}

I can't seem to get it working.
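
For what it's worth, the loop above indexes single bytes with i*j, so the 32-bit shift never sees a whole pixel. A byte-wise sketch of the same reordering (purely illustrative) would be:

// Treat the buffer as groups of 4 bytes (R,G,B,A) and rotate each group to (A,R,G,B)
for (int i = 0; i < width * height; i++) {
    unsigned char r = buffer[i * 4 + 0];
    unsigned char g = buffer[i * 4 + 1];
    unsigned char b = buffer[i * 4 + 2];
    unsigned char a = buffer[i * 4 + 3];

    buffer[i * 4 + 0] = a;
    buffer[i * 4 + 1] = r;
    buffer[i * 4 + 2] = g;
    buffer[i * 4 + 3] = b;
}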

但可醉心 2024-10-04 14:25:30


I'm sure that the alpha values can be ignored, so you can just do a memcpy with the pixel-buffer array shifted by one byte:

void *buffer = malloc(width*height*4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
…
memcpy(pxdata + 1, buffer, width*height*4 - 1);

策马西风 2024-10-04 14:25:30

+ (UIImage *) createARGBImageFromRGBAImage: (UIImage *)image {
    CGSize dimensions = [image size];

    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * dimensions.width;
    NSUInteger bitsPerComponent = 8;

    unsigned char *rgba = malloc(bytesPerPixel * dimensions.width * dimensions.height);
    unsigned char *argb = malloc(bytesPerPixel * dimensions.width * dimensions.height);

    CGColorSpaceRef colorSpace = NULL;
    CGContextRef context = NULL;

    // Render the source image into a raw RGBA byte buffer
    colorSpace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate(rgba, dimensions.width, dimensions.height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault); // kCGBitmapByteOrder32Big
    CGContextDrawImage(context, CGRectMake(0, 0, dimensions.width, dimensions.height), [image CGImage]);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Swizzle each pixel from RGBA byte order to ARGB byte order
    for (int x = 0; x < dimensions.width; x++) {
        for (int y = 0; y < dimensions.height; y++) {
            NSUInteger offset = ((dimensions.width * y) + x) * bytesPerPixel;
            argb[offset + 0] = rgba[offset + 3];
            argb[offset + 1] = rgba[offset + 0];
            argb[offset + 2] = rgba[offset + 1];
            argb[offset + 3] = rgba[offset + 2];
        }
    }

    // Wrap the ARGB buffer in a new CGImage and UIImage
    colorSpace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate(argb, dimensions.width, dimensions.height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrderDefault); // kCGBitmapByteOrder32Big
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    image = [UIImage imageWithCGImage: imageRef];
    CGImageRelease(imageRef);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    free(rgba);
    free(argb);

    return image;
}