Should I use NSOperation or NSRunLoop?

Posted 2024-10-22


I am trying to monitor a stream of video output from a FireWire camera. I have created an Interface Builder interface with buttons and an NSImageView. While image monitoring is occurring within an endless loop, I want to:

  • change some camera parameters on the fly (gain, gamma, etc.)
  • tell the monitoring to stop so I can save an image to a file (set a flag that stops the while loop)

Using the button features, I have been unable to loop the video frame monitor while still looking for a button press (much like using the keypressed feature from C). Two options present themselves:

  1. Initiate a new run loop (for which I cannot get an autorelease pool to function...)
  2. Initiate an NSOperation - how do I do this in a way that allows me to connect it to an Xcode button push?

The documentation is very obtuse about the creation of such objects. If I create an NSOperation as per the examples I've found, there seems to be no way to communicate with it from an object in Interface Builder. When I create an NSRunLoop, I get an object-leak error, and I can find no example of how to create an autorelease pool that actually responds to the NSRunLoop I've created. Never mind that I haven't even attempted to choose which objects get sampled by the secondary run loop...
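
For concreteness, here is roughly what I have in mind for option 2 (a sketch only: CameraMonitorOperation and its captureFrame method are placeholder names I've made up, and I don't know whether this is the right way to wire it to the buttons):

@interface CameraMonitorOperation : NSOperation
- (void)captureFrame;
@end

@implementation CameraMonitorOperation
- (void)main
{
    // Loop until the controller cancels this operation.
    while (![self isCancelled])
    {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        [self captureFrame];
        [pool drain];
    }
}

- (void)captureFrame
{
    // Placeholder: poll the FireWire camera and hand the frame to the NSImageView.
}
@end

// In the window controller, wired to the Interface Builder buttons:
- (IBAction)startMonitoring:(id)sender
{
    monitorOperation = [[CameraMonitorOperation alloc] init];
    [operationQueue addOperation:monitorOperation]; // operationQueue is an NSOperationQueue ivar
}

- (IBAction)stopMonitoring:(id)sender
{
    [monitorOperation cancel]; // isCancelled becomes YES and the while loop exits
}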

Because Objective-C is (obviously!) not my native tongue, I am looking for solutions with baby steps, sorry to say...
Thanks in advance


2 Answers

最笨的告白 2024-10-29 00:00:50


I've needed to do almost exactly the same thing as you, only with a continuous video display from the FireWire camera. In my case, I used the libdc1394 library to perform the frame capture and camera property adjustment for our FireWire cameras. I know you can also do this using some of the Carbon QuickTime functions, but I found libdc1394 to be a little easier to understand.

For the video capture loop, I tried a number of different approaches, from a separate thread that polls the camera and has locks around shared resources, to using one NSOperationQueue for interaction with the camera, and finally settled on using a CVDisplayLink to poll the camera in a way that matches the refresh rate of the screen.

The CVDisplayLink is configured using the following code:

// Create a display link tied to the main display.
CGDirectDisplayID displayID = CGMainDisplayID();
CVReturn error = kCVReturnSuccess;
error = CVDisplayLinkCreateWithCGDisplay(displayID, &displayLink);
if (error != kCVReturnSuccess)
{
    NSLog(@"CVDisplayLink creation failed with error: %d", error);
    displayLink = NULL;
}
else
{
    // self is passed through as the context pointer for the C callback below.
    CVDisplayLinkSetOutputCallback(displayLink, renderCallback, self);
}

and it calls the following function to trigger the retrieval of a new camera frame:

// Display link callback; runs on a dedicated CVDisplayLink thread at each
// screen refresh and forwards to the view object passed in as the context.
static CVReturn renderCallback(CVDisplayLinkRef displayLink,
                               const CVTimeStamp *inNow,
                               const CVTimeStamp *inOutputTime,
                               CVOptionFlags flagsIn,
                               CVOptionFlags *flagsOut,
                               void *displayLinkContext)
{
    return [(SPVideoView *)displayLinkContext renderTime:inOutputTime];
}

The CVDisplayLink is started and stopped using the following:

- (void)startRequestingFrames
{
    CVDisplayLinkStart(displayLink);
}

- (void)stopRequestingFrames
{
    CVDisplayLinkStop(displayLink);
}

Rather than taking a lock around the FireWire camera communications, whenever I need to adjust the exposure, gain, etc., I change the corresponding instance variables and set the appropriate bits within a flag variable to indicate which settings to change. On the next retrieval of a frame, the callback method from the CVDisplayLink changes the appropriate settings on the camera to match the locally stored instance variables and clears that flag.
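
As a rough illustration of that scheme (a sketch under assumed names: the bitmask constants, the ivars, and applyPendingCameraSettings are hypothetical, the OSAtomic calls are one lock-free way to maintain the flag rather than necessarily what our code does, and the actual libdc1394 call sits where the comment is):

#import <libkern/OSAtomic.h>

// Assumed ivars: volatile uint32_t pendingSettingChanges; uint32_t gain;
enum {
    kCameraSettingGain  = 1 << 0,
    kCameraSettingGamma = 1 << 1,
};

// Runs on the main thread, e.g. from a slider's IBAction.
- (void)setCameraGain:(uint32_t)newGain
{
    gain = newGain;
    OSAtomicOr32Barrier(kCameraSettingGain, &pendingSettingChanges);
}

// Runs in the display link callback, just before grabbing the next frame.
- (void)applyPendingCameraSettings
{
    if (pendingSettingChanges & kCameraSettingGain)
    {
        // e.g. dc1394_feature_set_value(camera, DC1394_FEATURE_GAIN, gain);
        OSAtomicAnd32Barrier((uint32_t)~kCameraSettingGain, &pendingSettingChanges);
    }
    // ... same pattern for gamma, exposure, and the rest ...
}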

Display to the screen is handled through an NSOpenGLView (CAOpenGLLayer introduced too many visual artifacts when updating at this rate, and its update callbacks ran on the main thread). Apple has some extensions you can use to provide these frames as textures using DMA for better performance.
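
The extensions in question are, I believe, GL_APPLE_client_storage and GL_APPLE_texture_range; the upload then looks something like this (a sketch: cameraTexture, frameWidth, frameHeight, and frameBuffer are assumed, a GL context must be current, and frameBuffer has to stay valid while the texture is in use):

// Tell GL to texture directly out of our buffer rather than copying it.
glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_TRUE);

glBindTexture(GL_TEXTURE_RECTANGLE_ARB, cameraTexture);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB,
                GL_TEXTURE_STORAGE_HINT_APPLE, GL_STORAGE_SHARED_APPLE);

// BGRA with this packed type is the DMA-friendly fast path on the Mac.
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA,
             frameWidth, frameHeight, 0,
             GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, frameBuffer);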

Unfortunately, nothing that I've described here is introductory-level stuff. I have about 2,000 lines of code for these camera-handling functions in our software and this took a long time to puzzle out. If Apple could add the manual camera settings adjustments to the QTKit Capture APIs, I could remove almost all of this.

白况 2024-10-29 00:00:50


If all you're trying to do is see/grab the output of a connected camera, the answer is probably neither.

Use QTKit's QTCaptureView. Problem solved. Want to grab a frame? Also no problem. Don't try to roll your own - QTKit's stuff is optimized and part of the OS. I'm pretty sure you can affect camera properties as you wanted but if not, plan B should work.
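
For reference, the basic QTKit wiring is only a few lines (a sketch: captureSession is an ivar, captureView is a QTCaptureView outlet from Interface Builder, and error handling is trimmed):

#import <QTKit/QTKit.h>

// Wire the default video camera into a QTCaptureView.
- (void)startCapture
{
    NSError *error = nil;
    captureSession = [[QTCaptureSession alloc] init];

    QTCaptureDevice *device =
        [QTCaptureDevice defaultInputDeviceWithMediaType:QTMediaTypeVideo];
    [device open:&error];

    QTCaptureDeviceInput *input =
        [[[QTCaptureDeviceInput alloc] initWithDevice:device] autorelease];
    [captureSession addInput:input error:&error];

    [captureView setCaptureSession:captureSession];
    [captureSession startRunning];
}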

Plan B: use a scheduled, repeating NSTimer to ask QTKit to grab a frame every so often ("how" linked above) and apply your image manipulations to the frame (maybe with Core Image) before displaying it in your NSImageView.
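
The timer itself is the easy part (a sketch: frameTimer is an ivar, and grabAndDisplayFrame: is a hypothetical method where the QTKit grab and Core Image work would go):

// Poll for a new frame ten times a second.
- (void)startFrameTimer
{
    frameTimer = [NSTimer scheduledTimerWithTimeInterval:0.1
                                                  target:self
                                                selector:@selector(grabAndDisplayFrame:)
                                                userInfo:nil
                                                 repeats:YES];
}

- (void)grabAndDisplayFrame:(NSTimer *)timer
{
    // Grab the current frame via QTKit, apply Core Image filters,
    // then hand the result to the NSImageView.
}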
