Obstacle avoidance using 2 fixed cameras mounted on a robot

Posted 2024-12-06 20:43:07

I will be starting work on a robotics project involving a mobile robot that has 2 cameras (1.3 MP) mounted at a fixed distance of 0.5 m from each other. I also have a few ultrasonic sensors, but they have only a 10 m range, and my environment is rather large (as an example, take a large warehouse with many pillars, boxes, walls, etc.). My main task is to identify obstacles and also find a roughly "best" route that the robot must take in order to navigate a "rough" environment (the ground floor is not smooth at all). None of the image processing is done on the robot; it all runs on a computer with an NVIDIA GT425 with 2 GB RAM.

My questions are:

  1. Should I mount the cameras on a rotating support, so that they can take pictures over a wider angle?

  2. Is it possible to create a reasonable 3D reconstruction based on only 2 views at such a small distance apart? If so, to what degree can I use it for obstacle avoidance and constructing a best route?

  3. If a roughly accurate 3D representation of the environment can be made, how can it be used to create a map of the environment? (Consider the following example: the robot must sweep a fairly large area, and it would be energy-efficient if it did not pass through the same place (or course) twice; however, when a 3D reconstruction is made from one direction, how can the robot tell whether it has already been there if it approaches from the opposite direction?)

I have found this response on a similar question, but I am still concerned about the accuracy of the 3D reconstruction (for example, a couple of boxes situated at 100 m, considering the small resolution and the distance between the cameras).

I am just starting to gather information for this project, so if you have worked on something similar, please give me some guidelines (and some links :D) on how I should approach this specific task.

Thanks in advance,
Tamash



Comments (2)

枫以 2024-12-13 20:43:07

If you want to do obstacle avoidance, it is probably easiest to use the ultrasonic sensors. If the robot is moving at speeds suitable for a human environment, their 10 m range gives you ample time to stop the robot. Keep in mind that no system will guarantee that you never accidentally hit something.
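As a rough sanity check on that claim, stopping distance is the distance covered during sensing latency plus the braking distance v²/(2a). The speed, deceleration and latency figures below are illustrative assumptions, not values from the question:

```python
# Rough stopping-distance check against the ultrasonic sensors' 10 m range.
# Speed, deceleration and latency values are illustrative assumptions.

def stopping_distance(speed_mps, decel_mps2, latency_s):
    """Distance travelled during sensing latency plus braking: v*t + v^2/(2a)."""
    return speed_mps * latency_s + speed_mps**2 / (2 * decel_mps2)

# A robot at walking pace (1.5 m/s), braking at 1 m/s^2,
# with 0.2 s of sensing + processing latency:
d = stopping_distance(1.5, 1.0, 0.2)
print(f"stopping distance: {d:.2f} m")  # comfortably under the 10 m sensor range
```

Even doubling the speed and latency leaves a wide margin below 10 m, which is why the ultrasonic sensors alone are enough for the stop-before-impact case.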

(2) Is it possible to create a reasonable 3D reconstruction based on only 2 views at such a small distance apart? If so, to what degree can I use it for obstacle avoidance and constructing a best route?

Yes, this is possible. Have a look at ROS and their vSLAM. http://www.ros.org/wiki/vslam and http://www.ros.org/wiki/slam_gmapping would be two of many possible resources.

however, when a 3D reconstruction is made from one direction, how can the robot tell whether it has already been there if it approaches from the opposite direction

Well, you are trying to find your position given a measurement and a map. That should be possible, and it would not matter from which direction the map was created. However, there is the loop-closure problem: because you are creating the 3D map at the same time as you are trying to find your way around, you do not know whether you are in a new place or somewhere you have seen before.

CONCLUSION
This is a difficult task!

Actually, it is more than one. First you have simple obstacle avoidance (i.e. don't drive into things). Then you want to do simultaneous localisation and mapping (SLAM, read Wikipedia on that), and finally you want to do path planning (i.e. sweeping the floor without covering any area twice).

I hope that helps?


债姬 2024-12-13 20:43:07
  1. I'd say no if you mean each eye rotating independently. You won't get the accuracy you need for stereo correspondence, and it will make calibration a nightmare. But if you want the whole "head" of the robot to pivot, that may be doable. You should, however, have some good encoders on the joints.

  2. If you use ROS, there are some tools which help you turn the two stereo images into a 3D point cloud: http://www.ros.org/wiki/stereo_image_proc. There is a trade-off between your baseline (the distance between the cameras) and your resolution at different ranges: a large baseline gives greater resolution at large distances, but it also has a large minimum distance. I wouldn't expect more than a few centimetres of accuracy from a static stereo rig, and this accuracy only gets worse when you compound the robot's own location uncertainty.
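To put numbers on that trade-off: the standard stereo approximation is dZ ≈ Z² · Δd / (f · B), where Z is range, B the baseline, f the focal length in pixels and Δd the disparity-matching error. Only the 0.5 m baseline comes from the question; the focal length and the 0.5 px matching error below are assumed figures for a 1.3 MP camera:

```python
# Stereo depth-error estimate: dZ = Z^2 * disparity_error / (f * B).
# focal_px and the 0.5 px matching error are assumptions for a 1.3 MP camera;
# only the 0.5 m baseline comes from the question.

def depth_error(z_m, baseline_m=0.5, focal_px=1000.0, disp_err_px=0.5):
    """Approximate depth uncertainty at range z_m for a static stereo rig."""
    return z_m**2 * disp_err_px / (focal_px * baseline_m)

for z in (5, 20, 100):
    print(f"range {z:3d} m -> depth error ~ {depth_error(z):.2f} m")
```

Under these assumptions the error at 100 m is on the order of 10 m, i.e. the same order as the boxes themselves, which supports the concern in the question about reconstructing distant obstacles with this rig.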

    2.5. For mapping and obstacle avoidance, the first thing I would try is to segment out the ground plane: the ground plane goes to mapping, and everything above it is an obstacle. Check out PCL for some point-cloud operating functions: http://pointclouds.org/
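PCL ships this as part of its sample-consensus segmentation module; the core idea is just a RANSAC plane fit. A minimal NumPy sketch (the distance threshold, iteration count and toy cloud are all illustrative assumptions, not tuned values):

```python
import numpy as np

def segment_ground_plane(points, iters=200, dist_thresh=0.05, seed=None):
    """RANSAC plane fit: return (inlier_mask, (normal, d)) for the best plane.

    points: (N, 3) array. dist_thresh is the inlier distance in metres
    (illustrative value; tune it to your sensor noise).
    """
    rng = np.random.default_rng(seed)
    best_count, best_mask, best_model = 0, None, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        # Plane normal from the three sampled points.
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:  # degenerate (collinear) sample, try again
            continue
        n /= norm
        d = -n @ sample[0]
        mask = np.abs(points @ n + d) < dist_thresh
        count = mask.sum()
        if count > best_count:
            best_count, best_mask, best_model = count, mask, (n, d)
    return best_mask, best_model

# Toy cloud: a flat, slightly noisy floor plus one box-like obstacle above it.
rng = np.random.default_rng(0)
floor = np.column_stack([rng.uniform(-5, 5, 500),
                         rng.uniform(-5, 5, 500),
                         rng.normal(0, 0.01, 500)])
box = np.column_stack([rng.uniform(1, 2, 100),
                       rng.uniform(1, 2, 100),
                       rng.uniform(0.3, 1.0, 100)])
cloud = np.vstack([floor, box])
ground, _ = segment_ground_plane(cloud, seed=0)
obstacles = cloud[~ground]  # everything off the plane is treated as an obstacle
print(len(obstacles))       # roughly the 100 box points
```

The ground inliers feed the traversability map; everything else is rasterised into an obstacle grid for avoidance.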

  3. If you can't simply put a planar laser (such as a SICK or Hokuyo) on the robot, then I might try to convert the 3D point cloud into a pseudo-laser-scan and use some off-the-shelf SLAM instead of trying to do visual SLAM. I think you'll get better results.
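The core of that conversion is binning obstacle points by bearing angle and keeping the nearest range per bin (this is essentially what ROS's pointcloud_to_laserscan does). A sketch, where the bin count and the height band are illustrative assumptions:

```python
import math
import numpy as np

def cloud_to_pseudo_scan(points, n_bins=360, z_min=0.05, z_max=1.5):
    """Collapse a 3D cloud into a 2D 'laser scan': for each bearing bin,
    keep the nearest obstacle range. The height band (illustrative values)
    drops floor returns and overhead points the robot can pass under.
    """
    ranges = np.full(n_bins, np.inf)
    # Keep only points in the obstacle height band.
    band = points[(points[:, 2] > z_min) & (points[:, 2] < z_max)]
    angles = np.arctan2(band[:, 1], band[:, 0])            # bearing in [-pi, pi)
    bins = ((angles + math.pi) / (2 * math.pi) * n_bins).astype(int) % n_bins
    dists = np.hypot(band[:, 0], band[:, 1])
    np.minimum.at(ranges, bins, dists)                     # nearest hit per bin
    return ranges

# A single obstacle point 2 m straight ahead (+x), 0.5 m above the floor:
scan = cloud_to_pseudo_scan(np.array([[2.0, 0.0, 0.5]]))
```

The resulting range array can be published as a standard 2D scan message and fed to an off-the-shelf 2D SLAM package such as gmapping.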

Other thoughts:
Now that the Microsoft Kinect has been released, it is usually easier (and cheaper) to simply use that to get a 3D point cloud instead of doing actual stereo.

This project sounds a lot like the DARPA LAGR program (Learning Applied to Ground Robots). That program is over, but you may be able to track down papers published from it.

