OpenGL Texture from a CALayer (AVPlayerLayer)

Published 2024-11-03 07:05:29

I have an AVPlayerLayer which I would like to create an OpenGL texture out of. I'm comfortable with OpenGL textures, and even with converting a CGImageRef into an OpenGL texture. It seems to me the code below should work, but all I get is plain black. What am I doing wrong? Do I need to set any properties on the CALayer / AVPlayerLayer first?

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

int width = (int)[layer bounds].size.width;
int height = (int)[layer bounds].size.height;

CGContextRef context = CGBitmapContextCreate(NULL,
                                 width,
                                 height,
                                 8,
                                 width * 4,
                                 colorSpace,
                                 kCGImageAlphaPremultipliedLast);

CGColorSpaceRelease(colorSpace);

if(context == NULL) {
    ofLog(OF_LOG_ERROR, "getTextureFromLayer: failed to create context 1");
    return;
}

[[layer presentationLayer] renderInContext:context];

CGImageRef cgImage = CGBitmapContextCreateImage(context);

int bytesPerPixel   = CGImageGetBitsPerPixel(cgImage)/8;
if(bytesPerPixel == 3) bytesPerPixel = 4;

GLubyte *pixels     = (GLubyte *) malloc(width * height * bytesPerPixel);

CGContextRelease(context);
context = CGBitmapContextCreate(pixels,
                                width,
                                height,
                                CGImageGetBitsPerComponent(cgImage),
                                width * bytesPerPixel,
                                CGImageGetColorSpace(cgImage),
                                kCGImageAlphaPremultipliedLast);

if(context == NULL) {
    ofLog(OF_LOG_ERROR, "getTextureFromLayer: failed to create context 2");
    free(pixels);
    return;
}

CGContextDrawImage(context, CGRectMake(0.0, 0.0, width, height), cgImage);

int glMode;
switch(bytesPerPixel) {
    case 1:
        glMode = GL_LUMINANCE;
        break;
    case 3: 
        glMode = GL_RGB;
        break;
    case 4: 
    default:
        glMode = GL_RGBA; break;
}

if(texture.bAllocated() == false || texture.getWidth() != width || texture.getHeight() != height) {
    NSLog(@"getTextureFromLayer: allocating texture %i, %i\n", width, height);
    texture.allocate(width, height, glMode, true);
}

// test texture
//        for(int i=0; i<width*height*4; i++) pixels[i] = ofRandomuf() * 255;

texture.loadData(pixels, width, height, glMode);

CGContextRelease(context);
CFRelease(cgImage);
free(pixels);

P.S. The variable 'texture' is a C++ OpenGL (ES-compatible) texture object which I know works. If I uncomment the 'test texture' for-loop that fills the buffer with random noise, I can see the noise, so the problem is definitely somewhere earlier in the pipeline.

UPDATE

In response to Nick Weaver's reply I tried a different approach, and now I'm always getting NULL back from copyNextSampleBuffer with status == 3 (AVAssetReaderStatusFailed). Am I missing something?

variables

AVPlayer                *videoPlayer;
AVPlayerLayer           *videoLayer;
AVAssetReader           *videoReader;
AVAssetReaderTrackOutput *videoOutput;

init

    videoPlayer = [[AVPlayer alloc] initWithURL:[NSURL fileURLWithPath:[NSString stringWithUTF8String:videoPath.c_str()]]];

    if(videoPlayer == nil) {
        NSLog(@"videoPlayer == nil ERROR LOADING %s\n", videoPath.c_str());
    } else {
        NSLog(@"videoPlayer: %@", videoPlayer);
        videoLayer = [[AVPlayerLayer playerLayerWithPlayer:videoPlayer] retain];
        videoLayer.frame = [ThreeDView instance].bounds;
        // [[ThreeDView instance].layer addSublayer:videoLayer]; // test to see if it's loading and running

        AVAsset *asset = videoPlayer.currentItem.asset;
        NSArray *tracks = [asset tracksWithMediaType:AVMediaTypeVideo];
        NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA], (NSString*)kCVPixelBufferPixelFormatTypeKey, nil];

        videoReader = [[AVAssetReader alloc] initWithAsset:asset error:nil];
        videoOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:[tracks objectAtIndex:0] outputSettings:settings];
        [videoReader addOutput:videoOutput];
        [videoReader startReading];
    }

draw loop

    if(videoPlayer == 0) {
        ofLog(OF_LOG_WARNING, "Shot::drawVideo: videoPlayer == 0");
        return;
    }


    if(videoOutput == 0) {
        ofLog(OF_LOG_WARNING, "Shot::drawVideo: videoOutput == 0");
        return;
    }

    CMSampleBufferRef sampleBuffer = [videoOutput copyNextSampleBuffer];

    if(sampleBuffer == 0) {
        ofLog(OF_LOG_ERROR, "Shot::drawVideo: sampleBuffer == 0, status: %i", videoReader.status);
        return;
    }


    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    unsigned char *pixels = (unsigned char *)CVPixelBufferGetBaseAddress(imageBuffer);

    int width  = (int)CVPixelBufferGetWidth(imageBuffer);
    int height = (int)CVPixelBufferGetHeight(imageBuffer);

    if(videoTexture.bAllocated() == false || videoTexture.getWidth() != width || videoTexture.getHeight() != height) {
        NSLog(@"Shot::drawVideo() allocating texture %i, %i\n", width, height);
        videoTexture.allocate(width, height, GL_RGBA, true);
    }

    videoTexture.loadData(pixels, width, height, GL_BGRA);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    // release the sample buffer only after its image buffer is no longer needed;
    // CMSampleBufferGetImageBuffer does not retain on the caller's behalf
    CFRelease(sampleBuffer);


Comments (1)

何必那么矫情 2024-11-10 07:05:30

I think "iOS4: how do I use video file as an OpenGL texture?" will be helpful for your question.
