How can I modify the GLCameraRipple sample to process video frames on a background thread?

Posted on 2025-01-02 08:24:20


I'm trying to modify the GLCameraRipple sample application from Apple to process video frames on a background thread. In this example, it handles each frame on the main thread using the following code:

// Set dispatch to be on the main thread so OpenGL can do things with the data
[dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

If I change this code to process on a background thread:

dispatch_queue_t videoQueue = dispatch_queue_create("com.test.queue", NULL);
[dataOutput setSampleBufferDelegate:self queue:videoQueue];

then the program crashes.

When I try to create a second EAGLContext with a shared sharegroup, as described in Apple's documentation, I only see a green or black screen.
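
For reference, the shared-context setup I'm attempting looks roughly like this (a sketch assuming _context is the main EAGLContext, as in the sample, and videoQueue is the queue created above):

// Create a second context in the same sharegroup so textures and
// buffers are visible to both contexts.
EAGLContext *backgroundContext =
    [[EAGLContext alloc] initWithAPI:[_context API]
                          sharegroup:[_context sharegroup]];

dispatch_async(videoQueue, ^{
    // Make the shared context current on the background queue before
    // issuing any OpenGL ES calls there.
    [EAGLContext setCurrentContext:backgroundContext];
});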

How can I modify this sample application to run on a background thread?


Comments (1)

梦魇绽荼蘼 2025-01-09 08:24:20


This was actually fairly interesting, after I tinkered with the sample. The problem here is with the CVOpenGLESTextureCacheCreateTextureFromImage() function. If you look at the console when you get the green texture, you'll see something like the following being logged:

Error at CVOpenGLESTextureCacheCreateTextureFromImage -6661

-6661, according to the headers (the only place I could find documentation on these new functions currently), is a kCVReturnInvalidArgument error. Something's obviously wrong with one of the arguments to this function.
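
As a quick sanity check, you can capture and log the CVReturn code yourself. A rough sketch of how that call is typically wrapped (the texture cache, dimensions, and pixel format here are illustrative, not the exact values from the sample):

CVOpenGLESTextureRef texture = NULL;
CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                            _videoTextureCache,
                                                            pixelBuffer,
                                                            NULL,            // texture attributes
                                                            GL_TEXTURE_2D,
                                                            GL_RGBA,
                                                            (GLsizei)CVPixelBufferGetWidth(pixelBuffer),
                                                            (GLsizei)CVPixelBufferGetHeight(pixelBuffer),
                                                            GL_BGRA,
                                                            GL_UNSIGNED_BYTE,
                                                            0,               // plane index
                                                            &texture);
if (err != kCVReturnSuccess)
{
    // -6661 here is kCVReturnInvalidArgument
    NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
}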

It turns out that it is the CVImageBufferRef that is the problem here. It looks like it is being deallocated or otherwise changed while the block that handles this texture cache update is running.

I tried a few ways of solving this, and ended up using a dispatch queue and dispatch semaphore like I describe in this answer, having the delegate still call back on the main thread, and within the delegate do something like the following:

- (void)captureOutput:(AVCaptureOutput *)captureOutput 
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
       fromConnection:(AVCaptureConnection *)connection
{
    // Drop this frame if the previous one is still being processed,
    // rather than letting frames queue up behind a slow renderer.
    if (dispatch_semaphore_wait(frameRenderingSemaphore, DISPATCH_TIME_NOW) != 0)
    {
        return;
    }

    // Lock and retain the pixel buffer on the main thread so it remains
    // valid until the asynchronous block has finished with it.
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    CFRetain(pixelBuffer);

    dispatch_async(openGLESContextQueue, ^{
        [EAGLContext setCurrentContext:_context];

        // Rest of your processing

        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
        CFRelease(pixelBuffer);

        // Let the next frame through.
        dispatch_semaphore_signal(frameRenderingSemaphore);
    });
}
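
This assumes frameRenderingSemaphore and openGLESContextQueue were created once during setup; a minimal sketch (the queue label is illustrative):

// One-time setup, e.g. in -viewDidLoad. A starting count of 1 lets the
// first frame through; each in-flight frame takes the semaphore to 0.
openGLESContextQueue = dispatch_queue_create("com.example.openGLESContextQueue", NULL);
frameRenderingSemaphore = dispatch_semaphore_create(1);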

Grabbing the CVImageBufferRef on the main thread, locking the bytes it points to, and retaining it before handing it off to the asynchronous block seems to fix this error. A full project that shows this modification can be downloaded from here.

I should say one thing here: this doesn't appear to gain you anything. If you look at the way that the GLCameraRipple sample is set up, the heaviest operation in the application, the calculation of the ripple effect, is already dispatched to a background queue. This is also using the new fast upload path for providing camera data to OpenGL ES, so that's not a bottleneck here when run on the main thread.

In my Instruments profiling on a dual-core iPhone 4S, I see no significant difference in rendering speed or CPU usage between the stock version of this sample application and my modified one that runs the frame upload on a background queue. Still, it was an interesting problem to diagnose.
