Getting 3D coordinates from 2D image pixels when the extrinsic and intrinsic parameters are known


I am doing camera calibration with the Tsai algorithm. I have the intrinsic and extrinsic matrices, but how can I reconstruct the 3D coordinates from that information?


I now have two ways to find X, Y, Z:

  1. I can use Gaussian elimination to find X, Y, Z, W; since the system is homogeneous, the point is then (X/W, Y/W, Z/W).

  2. I can use the OpenCV documentation approach:

    s * [u, v, 1]^T = K * [R | t] * [X, Y, Z, 1]^T

    Since I know u, v, R, and t, I can compute X, Y, Z (see the sketch after this list).
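
For concreteness, here is a minimal NumPy sketch of the projection in method 2; every numeric value is a hypothetical stand-in for the calibration output:

import numpy as np

K = np.array([[800.,   0., 320.],
              [  0., 800., 240.],
              [  0.,   0.,   1.]])        # hypothetical intrinsic matrix
R, t = np.eye(3), np.array([0., 0., 5.])  # hypothetical extrinsic parameters
P = K @ np.hstack([R, t.reshape(3, 1)])   # 3x4 projection matrix K[R|t]

Xw = np.array([1., 2., 10., 1.])          # homogeneous world point [X, Y, Z, 1]
x  = P @ Xw                               # equals s * [u, v, 1]
u, v = x[0] / x[2], x[1] / x[2]           # dehomogenize by the unknown scale s

Note that inverting this for a single pixel only pins down a ray: the scale s (the depth) drops out, which is exactly the missing information the answers below deal with.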

However, the two methods end up with different results, and neither is correct.

What am I doing wrong?


Jav_Rock · 2024-12-18 01:00:27


If you have the extrinsic parameters, then you have everything. That means you can obtain a homography from the extrinsics (also called the camera pose). The pose is a 3x4 matrix; the homography is a 3x3 matrix H, defined as

H = K*[r1, r2, t],       //eqn 8.1, Hartley and Zisserman

with K being the camera intrinsic matrix, r1 and r2 being the first two columns of the rotation matrix R, and t the translation vector.

Then normalize by dividing everything by t3 (the third component of t).

What happens to column r3; don't we use it? No, because it is redundant: it is the cross product of the first two columns of the rotation, so it can always be recovered from r1 and r2.
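
A minimal NumPy sketch of this construction, with hypothetical K, R, t standing in for your calibration output:

import numpy as np

K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])  # hypothetical intrinsics
R, t = np.eye(3), np.array([0., 0., 5.])                          # hypothetical extrinsics
r1, r2, r3 = R[:, 0], R[:, 1], R[:, 2]

H = K @ np.column_stack([r1, r2, t])      # eqn 8.1: H = K * [r1, r2, t]
H = H / t[2]                              # normalize by dividing everything by t3

assert np.allclose(r3, np.cross(r1, r2))  # r3 is redundant: r1 x r2 recovers it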

Now that you have the homography, project the points. Your 2D points are (x, y); append z = 1 so they become 3-vectors, then project them as follows:

p          = [x y 1];                        //homogeneous 2D point
projection = H * p;                          //project
projnorm   = projection / projection(3);     //normalize by the third component
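
For the direction the question actually asks about (pixel to 3D), the same homography can be inverted, assuming the scene points lie on the world plane Z = 0; a minimal NumPy sketch with hypothetical values:

import numpy as np

K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])  # hypothetical intrinsics
R, t = np.eye(3), np.array([0., 0., 5.])                          # hypothetical extrinsics
H = K @ np.column_stack([R[:, 0], R[:, 1], t])                    # H = K * [r1, r2, t]

p = np.array([400., 300., 1.])        # hypothetical pixel (u, v, 1)
q = np.linalg.inv(H) @ p              # back-project through the homography
X, Y = q[0] / q[2], q[1] / q[2]       # recovered point on the plane Z = 0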
往日 · 2024-12-18 01:00:27


As nicely stated in the comments above, projecting 2D image coordinates into 3D "camera space" inherently requires making up the z coordinate, as this information is totally lost in the image. One solution is to assign a dummy value (z = 1) to each of the 2D image-space points before projection, as in Jav_Rock's answer:

p          = [x y 1];                        //homogeneous 2D point
projection = H * p;                          //project
projnorm   = projection / projection(3);     //normalize by the third component

One interesting alternative to this dummy solution is to train a model to predict the depth of each point prior to reprojection into 3D camera space. I tried this method with a high degree of success, using a PyTorch CNN trained on the 3D bounding boxes in the KITTI dataset. I'd be happy to provide code, but it would be a bit lengthy to post here.
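
As a rough illustration of that reprojection step (not the answerer's actual code, which is omitted above): once a depth z is available for a pixel, the point can be back-projected through the inverse intrinsics; all values here are hypothetical:

import numpy as np

K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])  # hypothetical intrinsics
u, v = 400., 300.                     # pixel coordinates
z = 12.5                              # hypothetical predicted depth for this pixel

ray = np.linalg.inv(K) @ np.array([u, v, 1.])  # ray through the pixel at unit depth
X, Y, Z = z * ray                              # 3D point in camera space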
