How do I compute the rotation and translation between two cameras?

Posted 2024-10-30 08:18:42

I am aware of the chessboard camera calibration technique, and have implemented it.

If I have 2 cameras viewing the same scene, and I calibrate both simultaneously with the chessboard technique, can I compute the rotation matrix and translation vector between them? How?

Comments (4)

欢烬 2024-11-06 08:18:42

If you have the 3D camera coordinates of corresponding points, you can compute the optimal rotation matrix and translation vector via a rigid body transformation.
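
A minimal sketch of that idea, assuming the matched 3D points are already available as N x 3 arrays P (in camera 1's frame) and Q (in camera 2's frame); this is the standard SVD-based Kabsch/Umeyama solution, not code from the answer:

```python
import numpy as np

def rigid_transform(P, Q):
    """Find R, t such that q_i ≈ R @ p_i + t for paired rows of P and Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)   # centroids of each point set
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # optimal rotation
    t = cQ - R @ cP                           # translation between frames
    return R, t
```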

饭团 2024-11-06 08:18:42

If you are already using OpenCV, why not use cv::stereoCalibrate?

It returns the rotation matrix and translation vector between the cameras. The only thing you have to do is make sure the calibration chessboard is visible to both cameras.

The exact procedure is shown in the .cpp samples provided with the OpenCV library (I have version 2.2, and the samples were installed by default in /usr/local/share/opencv/samples).

The code sample is called stereo_calib.cpp. Although it doesn't clearly explain what is being done (for that you might want to look at "Learning OpenCV"), it's something you can build on.
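
For readers on a newer OpenCV, here is a rough sketch of the same procedure through the Python bindings. The intrinsics loader, image file names, and square size are placeholders; CALIB_FIX_INTRINSIC assumes the per-camera calibrations you already have are good, so only the extrinsics are estimated:

```python
import cv2
import numpy as np

# Assumed inputs: per-camera intrinsics from your single-camera chessboard
# calibration, plus synchronized image pairs (file names are placeholders).
K1, d1, K2, d2 = load_intrinsics()   # hypothetical helper
image_pairs = [(f"left_{i:02d}.png", f"right_{i:02d}.png") for i in range(13)]

pattern = (9, 6)    # inner-corner count of the chessboard
square = 0.025      # square size in meters (assumed)

# Chessboard corner coordinates in the board's own frame.
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts1, img_pts2 = [], [], []
for f1, f2 in image_pairs:
    g1 = cv2.imread(f1, cv2.IMREAD_GRAYSCALE)
    g2 = cv2.imread(f2, cv2.IMREAD_GRAYSCALE)
    ok1, c1 = cv2.findChessboardCorners(g1, pattern)
    ok2, c2 = cv2.findChessboardCorners(g2, pattern)
    if ok1 and ok2:   # the board must be visible in BOTH views
        obj_pts.append(objp)
        img_pts1.append(c1)
        img_pts2.append(c2)

# Keep the already-calibrated intrinsics fixed and solve only for R and T,
# the pose of camera 2 relative to camera 1 (E and F come back as a bonus).
ret, _, _, _, _, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, img_pts1, img_pts2, K1, d1, K2, d2, g1.shape[::-1],
    flags=cv2.CALIB_FIX_INTRINSIC)
```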

月棠 2024-11-06 08:18:42

If I understood you correctly, you have two calibrated cameras observing a common scene, and you wish to recover their spatial arrangement. This is possible (provided you find enough image correspondences), but only up to an unknown scale factor on the translation. That is, we can recover the rotation (3 degrees of freedom, DOF) and only the direction of the translation (2 DOF). This is because we have no way to tell whether the projected scene is big and the cameras far apart, or the scene small and the cameras near. In the literature, this 5 DOF arrangement is termed relative pose or relative orientation (Google is your friend).
If your measurements are accurate and in general position, 6 point correspondences may be enough to recover a unique solution. A relatively recent algorithm does exactly that.

Nistér, D., "An efficient solution to the five-point relative pose problem," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 6, pp. 756–770, June 2004. doi:10.1109/TPAMI.2004.17
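
OpenCV's cv::findEssentialMat implements Nistér's five-point solver inside RANSAC, so a minimal sketch is possible without writing the solver yourself. Here pts1, pts2, and K are assumed to come from your own matching and calibration steps:

```python
import cv2
import numpy as np

# Assumed inputs: pts1, pts2 are N x 2 float arrays of matched pixel
# coordinates in the two images, and K is the camera intrinsic matrix
# (taken to be shared by both cameras for simplicity).
E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                  method=cv2.RANSAC, threshold=1.0)

# Decompose E and pick the physically valid (R, t) by cheirality check.
# t is returned as a unit vector: the scale ambiguity described above.
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
```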

谜兔 2024-11-06 08:18:42

Update:

Use a structure-from-motion / bundle-adjustment package like Bundler to solve simultaneously for the 3D structure of the scene and the relative camera parameters.

Any such package requires several inputs:

  1. The camera calibrations that you already have.
  2. 2D pixel locations of points of interest in each camera (use an interest point detector such as Harris or DoG, the first stage of SIFT).
  3. Correspondences between the points of interest from each camera (use a descriptor such as SIFT or SURF, or a similarity measure such as SSD, to do the matching); see the sketch after this list.
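
A minimal sketch of steps 2 and 3 with OpenCV's SIFT and a brute-force matcher; the image file names are placeholders:

```python
import cv2

img1 = cv2.imread("cam1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("cam2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()              # DoG detector + SIFT descriptor
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test keeps only distinctive, unambiguous matches.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
```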

Note that the solution is recovered only up to a scale ambiguity. You'll thus need to supply a distance measurement, either between the cameras or between a pair of objects in the scene.

Original answer (applies primarily to uncalibrated cameras as the comments kindly point out):

This camera calibration toolbox from Caltech can solve for and visualize both the intrinsics (lens parameters, etc.) and the extrinsics (where the camera was positioned when each photo was taken). The latter is what you're interested in.

The Hartley and Zisserman blue book is also a great reference. In particular, you may want to look at the chapter on epipolar lines and the fundamental matrix, which is available free online at the link.
