How to use the POSIT algorithm to calibrate a projector and a camera

Posted 2025-01-08 09:12:35


I'm trying to calibrate my Kinect to a projector. I've read a few papers from Microsoft Research on how they do this:

"four points must be correctly identified both by the depth cameras and located in the projector image, after which we use the POSIT algorithm [6] to find the position and orientation of the projector. This process requires the focal length and center of projection of the projector."

(this will give the position of the projector)

But I'm really not familiar with the POSIT algorithm, and certainly not with how it is used here. The result of the POSIT algorithm is a translation vector and a rotation matrix. Now my question is how this can be used for interaction.

For example, if I track a hand with the Kinect I get some coordinates (x, y). How can I use said translation and rotation matrix to find the corresponding (x, y) coordinates in the projection?


离线来电— 2025-01-15 09:12:35


Basically, the POSIT algorithm estimates the position of an object relative to the camera from at least four non-coplanar corresponding points. On the other hand, a projector can be seen as a camera, so if you identify the known points of the real object in the projected image, and the projector's focal length is known, it should be possible to compute the relative position.
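
To make that concrete (my own summary of the pinhole model, not part of the original answer): if R and t are the rotation and translation POSIT returns, then a 3D point P measured by the Kinect maps into the projector image roughly as

P0 = reference point given to POSIT
Q  = R * (P - P0) + t
u  = f * Q.x / Q.z
v  = f * Q.y / Q.z

where f is the projector's focal length; the code at the end of this answer spells out exactly this chain.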

So what you should do is something like:

  1. Identify at least four points on some object placed in front of the projector. You can calculate the point coordinates using the Kinect.

  2. Then identify those points in the projected image, in the image coordinate system, in the same order as the 3D points.

  3. Then you can use the cvPOSIT function from OpenCV, which will calculate the pose of the object relative to the camera (see the sketch after this list).

  4. Then, given some object in 3D space that you measure with the Kinect, you can calculate its image coordinates by applying the transformation computed by cvPOSIT.
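
As a rough sketch of steps 1-3 (my own example, not from the original answer): the legacy OpenCV C API linked below exposes cvCreatePOSITObject / cvPOSIT / cvReleasePOSITObject. All point values, the focal length, and the header path here are placeholders to replace with your own measurements; this assumes a C++11 compiler and an OpenCV build that still ships the old C API.

#include <opencv/cv.h>   // legacy OpenCV C API; header location varies by version
#include <vector>

int main()
{
    // Step 1: at least four non-coplanar 3D points measured with the Kinect.
    // POSIT uses the first point as the object's reference point (kOrigin).
    std::vector<CvPoint3D32f> objectPoints = {
        cvPoint3D32f(0.00f, 0.00f, 0.00f),   // placeholder values (metres)
        cvPoint3D32f(0.25f, 0.00f, 0.00f),
        cvPoint3D32f(0.00f, 0.25f, 0.00f),
        cvPoint3D32f(0.00f, 0.00f, 0.25f)
    };

    // Step 2: the same points located in the projector image, in the same
    // order. Placeholder coordinates, expressed relative to the projector's
    // center of projection (principal point).
    std::vector<CvPoint2D32f> imagePoints = {
        cvPoint2D32f(-120.0f,  35.0f),
        cvPoint2D32f(  90.0f,  40.0f),
        cvPoint2D32f(-110.0f, -150.0f),
        cvPoint2D32f( -95.0f,  10.0f)
    };

    // Step 3: run POSIT. focalLength is the projector's focal length in pixels.
    const double focalLength = 1000.0;       // placeholder
    float rotation[9];                       // 3x3 rotation matrix, row-major
    float translation[3];                    // translation vector

    CvPOSITObject* positObject =
        cvCreatePOSITObject(objectPoints.data(), (int)objectPoints.size());
    CvTermCriteria criteria =
        cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 100, 1.0e-5);

    cvPOSIT(positObject, imagePoints.data(), focalLength,
            criteria, rotation, translation);

    cvReleasePOSITObject(&positObject);
    return 0;
}

With the rotation and translation in hand, the step 4 clarification below shows how to map any new Kinect measurement into the projector image.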

There may be some specific conditions that the points used by the algorithm have to satisfy, so please see the following for a deeper explanation of POSIT:
http://www.cfar.umd.edu/~daniel/daniel_papersfordownload/Pose25Lines.pdf

The following are links to the OpenCV POSIT-related documentation:
http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html#posit

http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html#createpositobject

http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html#releasepositobject

Step 4 clarification:

Quote from the original POSIT paper:
"The POSIT algorithm finds the translation vector and the transformation matrix that transform the object onto the camera coordinate system so that its feature points fall on the lines of sight of the image points."

Assume we have n 3D points (kPoints) in the Kinect coordinate system, we have the rotation (r[3][3]) and translation (t[3]) from POSIT, we know the focal length of the projector image plane, and finally we know the coordinates of the first 3D point (kOrigin) that we used with POSIT.
Then we need to transform our points into the POSIT (projector) coordinate system:

// Express the point relative to the POSIT reference point, move it into the
// projector frame, then project with the projector focal length.
kPoints[i] = kPoints[i] - kOrigin;
kPoints[i] = Rotate(kPoints[i], r);
kPoints[i] = kPoints[i] + t;
imagePoint[i].x = focalLength * kPoints[i].x / kPoints[i].z;
imagePoint[i].y = focalLength * kPoints[i].y / kPoints[i].z;
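
For step 4 in actual code, here is a minimal self-contained sketch of the same chain (a hypothetical helper of mine, not from the answer). It assumes the row-major 3x3 rotation and the translation returned by cvPOSIT, and it adds an assumed center of projection (cx, cy), roughly half the projector resolution, so the result lands in pixel coordinates instead of being centered on the principal point.

#include <opencv2/core/core.hpp>

// Hypothetical helper: map a Kinect-frame 3D point into the projector image
// using the POSIT result. 'r' is the 3x3 rotation (assumed row-major) and
// 't' the translation from cvPOSIT; kOrigin is the first object point given
// to POSIT; (cx, cy) is the assumed center of projection of the projector.
cv::Point2f projectToProjector(const cv::Point3f& kPoint,
                               const cv::Point3f& kOrigin,
                               const float r[9], const float t[3],
                               float focalLength, float cx, float cy)
{
    // Express the point relative to the POSIT reference point.
    const cv::Point3f p = kPoint - kOrigin;

    // Rotate and translate into the projector (camera) coordinate frame.
    const cv::Point3f q(r[0] * p.x + r[1] * p.y + r[2] * p.z + t[0],
                        r[3] * p.x + r[4] * p.y + r[5] * p.z + t[1],
                        r[6] * p.x + r[7] * p.y + r[8] * p.z + t[2]);

    // Pinhole projection with the projector focal length, shifted by the
    // center of projection to obtain pixel coordinates.
    return cv::Point2f(cx + focalLength * q.x / q.z,
                       cy + focalLength * q.y / q.z);
}

So, for the hand-tracking example in the question: feed the Kinect's 3D hand position through this transformation and you get the (x, y) pixel in the projector image that corresponds to the hand.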