Why is my QTKit-based image encoding application so slow?

Posted on 2024-11-07 11:18:13

In a Cocoa application I'm currently coding, I'm getting snapshot images (NSImage objects) from a Quartz Composer renderer, and I would like to encode them into a QTMovie at 720×480, 25 fps, with the H.264 codec, using the addImage: method. Here is the corresponding piece of code:

qRenderer = [[QCRenderer alloc] initOffScreenWithSize:NSMakeSize(720,480) colorSpace:CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB) composition:[QCComposition compositionWithFile:qcPatchPath]]; // define an "offscreen" Quartz composition renderer with the right image size


imageAttrs = [NSDictionary dictionaryWithObjectsAndKeys: @"avc1", // use the H264 codec
              QTAddImageCodecType, nil];

qtMovie = [[QTMovie alloc] initToWritableFile: outputVideoFile error:NULL]; // initialize the output QT movie object

long fps = 25;
frameNum = 0;

NSTimeInterval renderingTime = 0;
NSTimeInterval frameInc = (1./fps);
NSTimeInterval myMovieDuration = 70;
NSImage * myImage;
while (renderingTime <= myMovieDuration){
    if(![qRenderer renderAtTime: renderingTime arguments:NULL])
        NSLog(@"Rendering failed at time %.3fs", renderingTime);
    myImage = [qRenderer snapshotImage];
    [qtMovie addImage:myImage forDuration: QTMakeTimeWithTimeInterval(frameInc) withAttributes:imageAttrs];
    [myImage release];
    frameNum ++;
    renderingTime = frameNum * frameInc;
}
[qtMovie updateMovieFile];
[qRenderer release];
[qtMovie release]; 

It works; however, my application is not able to do that in real time on my new MacBook Pro, while I know that QuickTime Broadcaster can encode images in real time in H.264, at an even higher quality than the one I use, on the same computer.

So why? What's the issue here? Is this a hardware management issue (multi-core threading, GPU, ...) or am I missing something? Let me preface this by saying that I'm new (2 weeks of practice) to the Apple development world: Objective-C, Cocoa, Xcode, QuickTime, the Quartz Composer libraries, etc.

Thanks for any help

Comments (1)

披肩女神 2024-11-14 11:18:13

AVFoundation is a more efficient way to render a Quartz Composer animation to an H.264 video stream.


size_t width = 640;
size_t height = 480;

const char *outputFile = "/tmp/Arabesque.mp4";

// Off-screen Quartz Composer renderer at the desired output size.
QCComposition *composition = [QCComposition compositionWithFile:@"/System/Library/Screen Savers/Arabesque.qtz"];
QCRenderer *renderer = [[QCRenderer alloc] initOffScreenWithSize:NSMakeSize(width, height)
                                                      colorSpace:CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB) composition:composition];

// AVAssetWriter will not overwrite an existing file, so remove any previous output first.
unlink(outputFile);
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:@(outputFile)] fileType:AVFileTypeMPEG4 error:NULL];

// One H.264 video input; the pixel buffer adaptor lets us append CVPixelBuffers directly.
NSDictionary *videoSettings = @{ AVVideoCodecKey : AVVideoCodecH264, AVVideoWidthKey : @(width), AVVideoHeightKey : @(height) };
AVAssetWriterInput* writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];

[videoWriter addInput:writerInput];
[writerInput release];

AVAssetWriterInputPixelBufferAdaptor *pixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput sourcePixelBufferAttributes:NULL];

int framesPerSecond = 30;
int totalDuration = 30;
int totalFrameCount = framesPerSecond * totalDuration;

[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];

__block long frameNumber = 0;

dispatch_queue_t workQueue = dispatch_queue_create("com.example.work-queue", DISPATCH_QUEUE_SERIAL);

NSLog(@"Starting.");
// The writer pulls frames on its own queue whenever it is ready for more data.
[writerInput requestMediaDataWhenReadyOnQueue:workQueue usingBlock:^{
    while ([writerInput isReadyForMoreMediaData]) {
        NSTimeInterval frameTime = (float)frameNumber / framesPerSecond;
        if (![renderer renderAtTime:frameTime arguments:NULL]) {
            NSLog(@"Rendering failed at time %.3fs", frameTime);
            break;
        }

        // Grab the rendered frame as a CVPixelBuffer (caller owns it) and hand it to the writer.
        CVPixelBufferRef frame = (CVPixelBufferRef)[renderer createSnapshotImageOfType:@"CVPixelBuffer"];
        [pixelBufferAdaptor appendPixelBuffer:frame withPresentationTime:CMTimeMake(frameNumber, framesPerSecond)];
        CFRelease(frame);

        frameNumber++;
        if (frameNumber >= totalFrameCount) {
            [writerInput markAsFinished];
            [videoWriter finishWriting];
            [videoWriter release];
            [renderer release];
            NSLog(@"Rendered %ld frames.", frameNumber);
            break;
        }
    }
}];

In my testing this is around twice as fast as your posted code that uses QTKit. The biggest improvement appears to come from the H.264 encoding being handed off to the GPU rather than being performed in software. From a quick glance at a profile, it appears that the remaining bottlenecks are the rendering of the composition itself and reading the rendered data back from the GPU into a pixel buffer. Obviously the complexity of your composition will have some impact on this.
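If you want to see where the remaining time goes for your own composition, one rough way to check (a diagnostic sketch only, reusing the variable names from the loop above; CACurrentMediaTime() comes from QuartzCore) is to time each stage of the loop separately:

// Per-stage timing inside the encoding loop (diagnostic only).
CFTimeInterval t0 = CACurrentMediaTime();
BOOL rendered = [renderer renderAtTime:frameTime arguments:NULL];   // composition rendering
CFTimeInterval t1 = CACurrentMediaTime();
CVPixelBufferRef frame = (CVPixelBufferRef)[renderer createSnapshotImageOfType:@"CVPixelBuffer"];   // GPU-to-CPU readback
CFTimeInterval t2 = CACurrentMediaTime();
if (rendered && frame != NULL) {
    [pixelBufferAdaptor appendPixelBuffer:frame withPresentationTime:CMTimeMake(frameNumber, framesPerSecond)];   // hand-off to the encoder
}
CFTimeInterval t3 = CACurrentMediaTime();
if (frame != NULL) CFRelease(frame);
NSLog(@"render %.1f ms, readback %.1f ms, append %.1f ms",
      (t1 - t0) * 1000.0, (t2 - t1) * 1000.0, (t3 - t2) * 1000.0);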

It may be possible to optimize this further by using QCRenderer's ability to provide snapshots as CVOpenGLBufferRefs, which may keep the frame's data on the GPU rather than reading it back to hand it off to the encoder. I didn't look too far into that, though.
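For what it's worth, the starting point would look roughly like this (a sketch only: createSnapshotImageOfType: does accept @"CVOpenGLBuffer", but appendPixelBuffer:withPresentationTime: still requires a CVPixelBufferRef, so the conversion helper below is a hypothetical placeholder you would have to write yourself, not an existing API):

// Inside the encoding loop, instead of the CVPixelBuffer snapshot:
CVOpenGLBufferRef glFrame = (CVOpenGLBufferRef)[renderer createSnapshotImageOfType:@"CVOpenGLBuffer"];
if (glFrame != NULL) {
    // PixelBufferFromOpenGLBuffer() is hypothetical: some readback or conversion step is
    // still needed before the frame can be appended to the AVAssetWriterInputPixelBufferAdaptor.
    CVPixelBufferRef frame = PixelBufferFromOpenGLBuffer(glFrame);
    if (frame != NULL) {
        [pixelBufferAdaptor appendPixelBuffer:frame withPresentationTime:CMTimeMake(frameNumber, framesPerSecond)];
        CVPixelBufferRelease(frame);
    }
    CVOpenGLBufferRelease(glFrame);
}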
