Coordinate Transformation in C++

I have a webcam pointed at a table at a slant and with it I track markers.
I have a transformationMatrix in OpenSceneGraph, and its translation part contains the relative coordinates from the tracked object to the camera.
Because the camera is pointed at a slant, when I move the marker across the table both the Y and Z values are updated, although all I want updated is the Z value, because the height of the marker doesn't change, only its distance to the camera.
This has the effect that when I project a model on the marker in OpenSceneGraph, the model is slightly off, and when I move the marker around, the Y and Z values are updated incorrectly.

So my guess is I need a transformation matrix by which I multiply each point so that I have a new coordinate system that is orthogonal to the table surface.
Something like this: A * v1 = v2, with v1 being the camera coordinates and v2 being my "table coordinates".
So what I did was choose 4 points to "calibrate" my system: I placed the marker at the top left corner of the screen, defined v1 as the current camera coordinates and v2 as (0,0,0), and repeated that for 4 different points.
Then, from the linear equations obtained by combining the unknown matrix with each pair of known vectors, I solved for the matrix.
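
For reference, here is one way such a system can be set up and solved in C++ (a minimal sketch using OpenCV; the names `cam`/`table` are illustrative, and it assumes A is treated as a 3x4 affine matrix acting on homogeneous points, which is not necessarily how the original equations were arranged):

```cpp
#include <opencv2/core.hpp>
#include <vector>

// Solve for a 3x4 affine matrix A with A * [v1; 1] = v2, given n >= 4
// correspondences between camera coordinates (v1) and table coordinates (v2).
cv::Mat estimateCameraToTable(const std::vector<cv::Point3d>& cam,
                              const std::vector<cv::Point3d>& table)
{
    const int n = static_cast<int>(cam.size());
    cv::Mat M = cv::Mat::zeros(3 * n, 12, CV_64F);  // coefficient matrix
    cv::Mat b(3 * n, 1, CV_64F);                    // right-hand side

    for (int i = 0; i < n; ++i) {
        const double v1[4] = { cam[i].x, cam[i].y, cam[i].z, 1.0 };
        for (int r = 0; r < 3; ++r)                 // row r of A
            for (int c = 0; c < 4; ++c)
                M.at<double>(3 * i + r, 4 * r + c) = v1[c];
        b.at<double>(3 * i + 0) = table[i].x;
        b.at<double>(3 * i + 1) = table[i].y;
        b.at<double>(3 * i + 2) = table[i].z;
    }

    cv::Mat x;                                      // the 12 entries of A, row-major
    cv::solve(M, b, x, cv::DECOMP_SVD);             // least-squares solve
    return x.reshape(1, 3);                         // 3 rows x 4 columns
}
```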

I thought the values I would get for the matrix would be the ones I needed to multiply the camera coordinates by so that the model would update correctly on the marker.
But when I multiply the known camera coordinates I gathered earlier by the matrix, I don't get anything close to what my "table coordinates" were supposed to be.

Is my approach completely wrong, or did I just mess something up in the equations? (I solved them with the help of wolframalpha.com.) Is there an easier or better way of doing this?
Any help would be greatly appreciated, as I am kind of lost and under some time pressure :-/
Thanks,
David

Comments (2)

执手闯天涯 2024-08-11 11:54:21

when I move the marker across the table both the Y and Z values are updated, although all I want updated is the Z value, because the height of the marker doesn't change, only its distance to the camera.

Only true when your camera's view direction is aligned with your Y axis (or Z axis). If the camera is not aligned with Y, it means the transform will apply a rotation around the X axis, hence modifying both the Y and Z coordinates of the marker.
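
To make that concrete (a rough illustration, up to sign conventions, with θ standing for the camera's tilt about the X axis), the rotation mixes the two components:

    y_camera = y_table * cos(θ) - z_table * sin(θ)
    z_camera = y_table * sin(θ) + z_table * cos(θ)

so sliding the marker along the table (changing only the table's Y) changes both the camera's Y and Z readings unless θ is zero.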

So my guess is I need a transformation matrix by which I multiply each point so that I have a new coordinate system that is orthogonal to the table surface.

Yes, that's right. After that, you will have two transforms:

  1. T_table to express the marker's coordinates in the table frame,
  2. T_camera to express table coordinates in the camera frame.

Finding T_camera from a single 2d image is hard because there's no depth information.
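
As a small illustration of how those two transforms relate (an assumed sketch using OpenSceneGraph conventions, not code from the answer): once T_camera, which maps table coordinates into camera coordinates, is known, a marker position reported in camera coordinates can be brought back into the table frame with the inverse transform. OSG treats points as row vectors, so they are post-multiplied by matrices.

```cpp
#include <osg/Matrixd>
#include <osg/Vec3d>

// Express a point given in camera coordinates in the table frame, assuming
// T_camera maps table coordinates into camera coordinates (p_cam = p_table * T_camera).
osg::Vec3d cameraToTable(const osg::Vec3d& p_camera, const osg::Matrixd& T_camera)
{
    return p_camera * osg::Matrixd::inverse(T_camera);
}
```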

This is known as the pose problem -- it has been studied by, among others, Daniel DeMenthon. He developed a fast and robust algorithm to find the pose of an object:

  • articles available on his research homepage, section 4 "Model Based Object Pose" (and particularly "Model-Based Object Pose in 25 Lines of Code", 1995);
  • code at the same place, section "POSIT (C and Matlab)".

Note that the OpenCv library offers an implementation of DeMenthon's algorithm. This library also offers a convenient and easy-to-use interface for grabbing images from a webcam. It's worth a try: OpenCv homepage
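
For what it's worth, here is a hedged sketch of that pose estimation with OpenCV's cv::solvePnP, the modern successor to the old POSIT interface (the marker geometry, detected pixel positions, and camera intrinsics below are placeholder values, not data from the question):

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

int main()
{
    // Marker corner positions in the marker's own (table-aligned) frame,
    // e.g. a 4 cm square lying flat on the table (placeholder geometry).
    std::vector<cv::Point3f> objectPoints = {
        {0.00f, 0.00f, 0.0f}, {0.04f, 0.00f, 0.0f},
        {0.04f, 0.04f, 0.0f}, {0.00f, 0.04f, 0.0f}
    };
    // Where those corners were detected in the webcam image (placeholder pixels).
    std::vector<cv::Point2f> imagePoints = {
        {320.f, 240.f}, {400.f, 238.f}, {402.f, 320.f}, {322.f, 322.f}
    };
    // Camera intrinsics from a prior calibration (placeholder values).
    cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320,
                                             0, 800, 240,
                                             0,   0,   1);
    cv::Mat dist = cv::Mat::zeros(5, 1, CV_64F);   // assume no lens distortion

    cv::Mat rvec, tvec;
    cv::solvePnP(objectPoints, imagePoints, K, dist, rvec, tvec);

    cv::Mat R;
    cv::Rodrigues(rvec, R);   // rotation vector -> 3x3 rotation matrix
    // [R | tvec] now maps marker/table coordinates into camera coordinates,
    // i.e. it plays the role of T_camera above.
    return 0;
}
```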

稳稳的幸福 2024-08-11 11:54:21

If you know the location in the physical world of your four markers and you've recorded the positions as they appear on the camera, you ought to be able to derive some sort of transform.
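
One possible way to derive such a transform, assuming OpenCV is available (an illustration, not something this answer prescribes), is cv::estimateAffine3D, which fits a 3x4 affine transform between two 3D point sets:

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Fit a 3x4 affine transform that maps camera-space points onto the
// corresponding physically measured table-space points.
cv::Mat tableTransform(const std::vector<cv::Point3f>& cameraPts,
                       const std::vector<cv::Point3f>& tablePts)
{
    cv::Mat affine;               // 3x4 result: rotation/scale part plus translation
    std::vector<uchar> inliers;   // per-point inlier mask from the RANSAC fit
    cv::estimateAffine3D(cameraPts, tablePts, affine, inliers);
    return affine;
}
```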

When you do the calibration, surely you'd want to put the marker at the four corners of the table, not the screen? If you're just doing the corners of the screen, I imagine you're probably not taking into account the slant of the table.

Is the table literally just slanted relative to the camera or is it also rotated at all?
