Camera synchronization with OpenCV
I have a stereo camera system.
In my program I capture the images from each camera in two threads (one thread per camera).
After I receive the images from each camera, I want to process them with OpenCV. How can I tell my program that both camera threads have received their images, so that I can go on to process them?
I have another question. Every frame received from a camera carries a timestamp assigned by the camera. How can I match the timestamps so that I get images from the two cameras that were captured at the same time?
Comments (1)
Have you ever written an application using OpenCV to display the frames captured by the camera? Start from there. The application below does that and converts each frame to its grayscale version:
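A minimal sketch of such an application, written against the legacy OpenCV C API that cvQueryFrame() belongs to (the camera index 0 and the window name are assumptions, adjust them for your setup):

#include <cv.h>
#include <highgui.h>
#include <cstdio>

int main()
{
    // Open the default camera (index 0 is an assumption).
    CvCapture* capture = cvCaptureFromCAM(0);
    if (!capture)
    {
        std::fprintf(stderr, "Failed to open the camera\n");
        return -1;
    }

    cvNamedWindow("grayscale", CV_WINDOW_AUTOSIZE);
    IplImage* gray = NULL;

    while (true)
    {
        // Grab the next frame; the returned image is owned by the capture,
        // so it must not be released manually.
        IplImage* frame = cvQueryFrame(capture);
        if (!frame)
            break;

        // Allocate the grayscale buffer once, matching the frame size.
        if (!gray)
            gray = cvCreateImage(cvGetSize(frame), frame->depth, 1);

        cvCvtColor(frame, gray, CV_BGR2GRAY);
        cvShowImage("grayscale", gray);

        // Quit when ESC is pressed.
        if ((cvWaitKey(10) & 0xFF) == 27)
            break;
    }

    cvReleaseImage(&gray);
    cvReleaseCapture(&capture);
    cvDestroyWindow("grayscale");
    return 0;
}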
Keep in mind that the frames are being retrieved inside a loop, and if you quit the loop you'll stop receiving data from the camera. It makes sense, right? This leaves you with 2 options:
Process the frame right away. But if this processing is slow, you will probably miss a few frames from the camera before the next cvQueryFrame() call.
Store the frame using some buffer mechanism so you can do the processing on another thread. This is a good approach if your processing technique is demanding on the CPU and you don't want to lose any frames (a rough sketch of such a buffer follows below).
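One way to sketch such a buffer mechanism, assuming C++11 threads on top of the same legacy C API; the frame queue, the processFrame() stub and the use of cvCloneImage() are illustrative choices, not the only possible design:

#include <cv.h>
#include <highgui.h>

#include <atomic>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// Frames cloned by the capture loop wait here until the worker picks them up.
std::queue<IplImage*> frameBuffer;
std::mutex bufferMutex;
std::condition_variable frameAvailable;
std::atomic<bool> capturing(true);

// Placeholder for whatever (possibly slow) processing you want to run.
void processFrame(IplImage* frame)
{
    // ... heavy CPU work goes here ...
    cvReleaseImage(&frame);   // the worker owns the clone, so it releases it
}

void captureLoop()
{
    CvCapture* capture = cvCaptureFromCAM(0);   // camera index 0 is an assumption
    while (capturing)
    {
        IplImage* frame = cvQueryFrame(capture);
        if (!frame)
            break;

        // cvQueryFrame() reuses its internal buffer, so store a copy.
        IplImage* copy = cvCloneImage(frame);
        {
            std::lock_guard<std::mutex> lock(bufferMutex);
            frameBuffer.push(copy);
        }
        frameAvailable.notify_one();
    }
    capturing = false;
    frameAvailable.notify_all();
    cvReleaseCapture(&capture);
}

void processingLoop()
{
    while (true)
    {
        IplImage* frame = NULL;
        {
            std::unique_lock<std::mutex> lock(bufferMutex);
            frameAvailable.wait(lock, [] { return !frameBuffer.empty() || !capturing; });
            if (frameBuffer.empty())
                break;               // capture finished and the buffer is drained
            frame = frameBuffer.front();
            frameBuffer.pop();
        }
        processFrame(frame);
    }
}

int main()
{
    std::thread producer(captureLoop);
    std::thread consumer(processingLoop);
    producer.join();
    consumer.join();
    return 0;
}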
About your second question, it's not clear to me what you mean. Please elaborate further.