OpenCV calibration parameters and transforming 3D points between stereo cameras
I have 4 PS3 Eye cameras, and I've calibrated camera1 and camera2 using the cvStereoCalibrate() function of the OpenCV library
with a chessboard pattern, by finding the corners and passing their 3D coordinates into this function.
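For reference, here is a minimal sketch of what one such pair-wise calibration can look like with the C++ API (cv::stereoCalibrate is the C++ counterpart of cvStereoCalibrate). The function and variable names are placeholders of mine, and the per-camera intrinsics are assumed to come from a prior cv::calibrateCamera run, since the default CALIB_FIX_INTRINSIC flag keeps them fixed:

    #include <opencv2/calib3d.hpp>
    #include <opencv2/core.hpp>
    #include <vector>

    // Intrinsics (camera matrix + distortion) for each camera, assumed to be
    // precomputed with cv::calibrateCamera; the default CALIB_FIX_INTRINSIC
    // flag of cv::stereoCalibrate leaves them untouched.
    cv::Mat K1, D1, K2, D2;
    // Pair-wise extrinsics estimated by the call below:
    // R12, T12 satisfy x_cam2 = R12 * x_cam1 + T12.
    cv::Mat R12, T12, E, F;

    double calibratePair(const std::vector<std::vector<cv::Point3f>>& objectPoints,
                         const std::vector<std::vector<cv::Point2f>>& imgPoints1,
                         const std::vector<std::vector<cv::Point2f>>& imgPoints2,
                         cv::Size imageSize)
    {
        // Returns the RMS reprojection error of the stereo calibration.
        return cv::stereoCalibrate(objectPoints, imgPoints1, imgPoints2,
                                   K1, D1, K2, D2, imageSize,
                                   R12, T12, E, F);
    }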
I've also calibrated camera2 and camera3, using another set of chessboard images viewed by those two cameras.
Using the same method, I've calibrated camera3 and camera4.
So now I have the extrinsic and intrinsic parameters of camera1 and camera2,
the extrinsic and intrinsic parameters of camera2 and camera3,
and the extrinsic and intrinsic parameters of camera3 and camera4,
where the extrinsic parameters are the rotation and translation matrices, and the intrinsic parameters are the matrices of focal length and principal point.
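For concreteness, this is the shape of those matrices in OpenCV's convention; the numbers below are made up, for illustration only:

    #include <opencv2/core.hpp>

    // Hypothetical focal lengths and principal point.
    const double fx = 540.0, fy = 540.0, cx = 320.0, cy = 240.0;

    // 3x3 intrinsic camera matrix: focal lengths on the diagonal,
    // principal point in the last column.
    const cv::Mat K = (cv::Mat_<double>(3, 3) << fx,  0.0, cx,
                                                 0.0, fy,  cy,
                                                 0.0, 0.0, 1.0);

    // The pair-wise extrinsics R (3x3) and T (3x1) relate the two camera
    // frames in OpenCV's convention:  x_cam2 = R * x_cam1 + T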
Now suppose there is a 3D point (in world coordinates) that is viewed by camera3 and camera4 but not by camera1 and camera2 (and I know how to find 3D coordinates from a stereo pair).
My question is: how do you take this 3D world-coordinate point viewed by camera3 and camera4 and transform it into camera1 and camera2's
world coordinate system, using the rotation, translation, focal length, and principal point parameters?
1 Answer
OpenCV's stereo calibration gives you only the relative extrinsic matrix between two cameras.
According to the cv::stereoCalibrate documentation, you don't get the transformations in world coordinates (i.e. in relation to the calibration pattern). It does suggest, though, running a regular camera calibration on one of the images so that at least that camera's transformation is known.
If the calibrations were perfect, you could use your daisy-chain setup to derive the world transformation of any of the cameras.
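If the chain holds, getting a point from camera3's frame into camera1's frame is just two rigid-transform inversions composed. A minimal sketch, assuming OpenCV's x2 = R*x1 + T convention for each calibrated pair and that the triangulated point is expressed in camera3's coordinate frame (the helper names are mine):

    #include <opencv2/core.hpp>

    // Invert one rigid transform: if x_b = R * x_a + T, then
    // x_a = R^T * (x_b - T).
    cv::Point3d backward(const cv::Mat& R, const cv::Mat& T, const cv::Point3d& xb)
    {
        cv::Mat x = (cv::Mat_<double>(3, 1) << xb.x, xb.y, xb.z);
        cv::Mat xa = R.t() * (x - T);
        return cv::Point3d(xa.at<double>(0), xa.at<double>(1), xa.at<double>(2));
    }

    // Walk the chain camera3 -> camera2 -> camera1. R12/T12 and R23/T23 are
    // the extrinsics from the camera1-camera2 and camera2-camera3
    // calibrations. No intrinsics are needed for this step; focal length and
    // principal point only matter when projecting back into an image.
    cv::Point3d cam3ToCam1(const cv::Mat& R12, const cv::Mat& T12,
                           const cv::Mat& R23, const cv::Mat& T23,
                           const cv::Point3d& xCam3)
    {
        cv::Point3d xCam2 = backward(R23, T23, xCam3);  // camera3 -> camera2
        return backward(R12, T12, xCam2);               // camera2 -> camera1
    }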
As far as I know this is not very stable, because the calibration should really take into account the fact that you have multiple cameras.
Multi-camera calibration is not the most trivial of problems. Have a look at:
I'm also looking for a solution to this, so if you find out more regarding this and OpenCV, let me know.