Looking for a way to localize a robot in a house
I am hacking a vacuum cleaner robot to control it with a microcontroller (Arduino). I want to make it more efficient when cleaning a room. For now, it just goes straight and turns when it hits something.
But I have trouble finding the best algorithm or method to use to know its position in the room. I am looking for an idea that stays cheap (less than $100) and not too complex (one that doesn't require a PhD thesis in computer vision). I can add some discrete markers in the room if necessary.
Right now, my robot has:
- One webcam
- Three proximity sensors (around 1 meter range)
- Compass (not used for now)
- Wi-Fi
- Its speed can vary depending on whether the battery is full or nearly empty
- An Eee PC netbook embedded on the robot
Do you have any ideas for doing this? Do any standard methods exist for this kind of problem?
Note: if this question belongs on another website, please move it, I couldn't find a better place than Stack Overflow.
The problem of figuring out a robot's position in its environment is called localization. Computer science researchers have been trying to solve this problem for many years, with limited success. One problem is that you need reasonably good sensory input to figure out where you are, and sensory input from webcams (i.e. computer vision) is far from a solved problem.
If that didn't scare you off: one of the approaches to localization that I find easiest to understand is particle filtering. The idea goes something like this: you keep a large set of guesses ("particles") about where the robot might be. Every time the robot moves, you move each particle the same way; every time it senses something, you weight each particle by how well the reading fits that guess and then resample, so the cloud of particles gradually collapses onto the places the robot could plausibly be.
If you search around a bit, you'll find plenty of examples: e.g. a video of a robot using particle filtering to determine its location in a small room.
Particle filtering is nice because it's pretty easy to understand. That makes implementing and tweaking it a little less difficult. There are other similar techniques (like Kalman filters) that are arguably more theoretically sound but can be harder to get your head around.
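For a sense of what that loop looks like in code, here is a minimal particle-filter sketch in Python that could run on the embedded Eee PC. It assumes a known rectangular room and a hypothetical expected_range() helper that ray-casts into a map of it; the room size and all noise values are made-up numbers.

```python
# Minimal particle-filter localization sketch (hypothetical expected_range()
# ray-casts into a known map; room size and noise values are assumptions).
import math
import random

NUM_PARTICLES = 500
ROOM_W, ROOM_H = 5.0, 4.0                      # assumed room size in metres

# Each particle is one guess: (x, y, heading, weight), initially spread everywhere.
particles = [(random.uniform(0, ROOM_W), random.uniform(0, ROOM_H),
              random.uniform(0, 2 * math.pi), 1.0) for _ in range(NUM_PARTICLES)]

def motion_update(p, dist, dtheta):
    """Move a particle the way the robot moved, plus a little noise."""
    x, y, th, w = p
    th = (th + dtheta + random.gauss(0, 0.05)) % (2 * math.pi)
    d = dist + random.gauss(0, 0.02)
    return (x + d * math.cos(th), y + d * math.sin(th), th, w)

def measurement_weight(p, measured_range, expected_range, sigma=0.1):
    """Weight a particle by how well a range reading matches the map."""
    x, y, th, _ = p
    err = measured_range - expected_range(x, y, th)
    return math.exp(-err * err / (2 * sigma * sigma))

def resample(ps):
    """Draw a new particle set, favouring high-weight particles."""
    weights = [p[3] for p in ps]
    return random.choices(ps, weights=weights, k=len(ps))

# One control cycle (odometry values and expected_range() are hypothetical):
#   particles = [motion_update(p, dist, dtheta) for p in particles]
#   particles = [(x, y, th, measurement_weight((x, y, th, w), z, expected_range))
#                for (x, y, th, w) in particles]
#   particles = resample(particles)
```

Each cycle you move every particle by the odometry, reweight it against the range readings, resample, and take the weighted mean of the cloud as the position estimate.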
A QR Code poster in each room would not only make an interesting Modern art piece, but would be relatively easy to spot with the camera!
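If you go that route, OpenCV's built-in QR detector on the netbook can do the spotting. A rough sketch follows; the camera index and the idea of encoding the room name in the code are assumptions.

```python
# Rough sketch: spotting a QR poster with OpenCV's built-in detector
# (needs OpenCV >= 4.x; camera index 0 and room-name payloads are assumptions).
import cv2

detector = cv2.QRCodeDetector()
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

if ok:
    data, points, _ = detector.detectAndDecode(frame)
    if data:
        # The decoded text could simply name the room, e.g. "kitchen".
        print("Saw QR code:", data)
        # 'points' holds the four corner coordinates, so the apparent size and
        # skew of the code also hint at distance and viewing angle.
```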
If you can place some markers in the room, using the camera could be an option. If 2 known markers have an angular displacement (left to right) then the camera and the markers lie on a circle whose radius is related to the measured angle between the markers. I don't recall the formula right off, but the arc segment (on that circle) between the markers will be twice the angle you see. If you have the markers at known height and the camera is at a fixed angle of inclination, you can compute the distance to the markers. Either of these methods alone can nail down your position given enough markers. Using both will help do it with fewer markers.
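To make the two measurements concrete, here is a small Python sketch of the geometry described above; the "twice the angle" remark is the inscribed angle theorem, which gives the radius of that circle directly. All the numbers are just example values.

```python
# Sketch of the two camera measurements described above (all numbers are examples).
import math

# (a) Two markers a known distance apart, separated by a measured angle in the
# image: by the inscribed angle theorem the camera lies on a circle through both
# markers with radius R = d / (2 * sin(theta)).
def circle_radius(marker_separation_m, angle_between_markers_rad):
    return marker_separation_m / (2.0 * math.sin(angle_between_markers_rad))

# (b) A marker of known height seen at a known camera tilt: simple trigonometry
# gives the horizontal distance to it.
def distance_from_height(marker_height_m, camera_height_m, elevation_angle_rad):
    return (marker_height_m - camera_height_m) / math.tan(elevation_angle_rad)

print(circle_radius(1.0, math.radians(30)))              # -> 1.0 (metres)
print(distance_from_height(2.0, 0.1, math.radians(25)))  # -> about 4.1 metres
```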
Unfortunately, those methods are imperfect due to measurement errors. You get around this by using a Kalman estimator to incorporate multiple noisy measurements to arrive at a good position estimate - you can then feed in some dead reckoning information (which is also imperfect) to refine it further. This part goes pretty deep into math, but I'd say it's a requirement to do a great job at what you're attempting. You can do OK without it, but if you want an optimal solution (in terms of the best position estimate for a given input) there is no better way. If you actually want a career in autonomous robotics, this will play a large part in your future.
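The full estimator is well beyond a snippet, but a toy one-dimensional Kalman update shows the predict/correct cycle being described; the variances below are invented.

```python
# Toy one-dimensional Kalman update, just to show the predict/correct cycle
# (all variances are made-up numbers).
x, P = 0.0, 1.0                     # position estimate and its variance

def predict(x, P, moved, motion_var=0.05):
    """Dead reckoning step: shift the estimate, grow the uncertainty."""
    return x + moved, P + motion_var

def correct(x, P, measured, meas_var=0.2):
    """Measurement step (e.g. from markers): blend it in, shrink the uncertainty."""
    K = P / (P + meas_var)          # Kalman gain
    return x + K * (measured - x), (1 - K) * P

x, P = predict(x, P, moved=0.30)    # odometry says we moved 30 cm
x, P = correct(x, P, measured=0.27) # the camera thinks we are at 27 cm
```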
Once you can determine your position you can cover the room in any pattern you'd like. Keep using the bump sensor to help construct a map of obstacles, and then you'll need to devise a scan pattern that incorporates the obstacles.
Not sure if you've got the math background yet, but here is the book:
http://books.google.com/books/about/Applied_optimal_estimation.html?id=KlFrn8lpPP0C
This doesn't replace the accepted answer (which is great, thanks!), but I might recommend getting a Kinect and using it instead of your webcam, either through Microsoft's recently released official drivers or through the hacked drivers if your EeePC doesn't have Windows 7 (presumably it does not).
That way the positioning will be improved by the 3D vision. Observing landmarks will now tell you how far away the landmark is, and not just where in the visual field that landmark is located.
Regardless, the accepted answer doesn't really address how to pick out landmarks in the visual field, and simply assumes that you can. While the Kinect drivers may already have feature detection included (I'm not sure) you can also use OpenCV for detecting features in the image.
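As a starting point for that, here is a rough OpenCV feature-detection sketch using ORB on the netbook; the camera index is an assumption, and matching the descriptors against stored landmark features is left out.

```python
# Rough sketch: picking out visual features with OpenCV's ORB detector
# (camera index 0 is an assumption; matching against stored landmarks is omitted).
import cv2

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    # Matching these descriptors (e.g. with cv2.BFMatcher) against features
    # saved for known landmarks is one way to recognise a landmark again.
    print(len(keypoints), "features found")
```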
One solution would be to use a strategy similar to "flood fill" (wikipedia). To get the controller to accurately perform sweeps, it needs a sense of distance. You can calibrate your bot using the proximity sensors: e.g. run motor for 1 sec = xx change in proximity. With that info, you can move your bot for an exact distance, and continue sweeping the room using flood fill.
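A rough Python sketch of that idea on a coarse grid, where one cell is about one robot width; the map contents, the cell size and the move_to() motion routine are all assumptions.

```python
# Sketch of flood-fill coverage on a coarse grid (the map contents, the cell
# size and the move_to() motion routine are all assumptions).
from collections import deque

FREE, WALL, CLEANED = 0, 1, 2
grid = [[FREE] * 10 for _ in range(8)]         # toy 8x10 map, 1 cell ~ robot width

def flood_clean(grid, start, move_to):
    """Visit every reachable FREE cell once, breadth-first from start."""
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < len(grid) and 0 <= c < len(grid[0])):
            continue
        if grid[r][c] != FREE:                 # wall, or already cleaned
            continue
        move_to(r, c)                          # drive there using the calibration
        grid[r][c] = CLEANED
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])

flood_clean(grid, (0, 0), move_to=lambda r, c: None)
```

Note that plain breadth-first order can jump between non-adjacent cells, so in practice you would plan a short path between consecutive cells rather than teleporting.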
Assuming you are not looking for a generalised solution, you may actually know the room's shape, size, potential obstacle locations, etc. When the bot leaves the factory there is no info about its future operating environment, which kind of forces it to be inefficient from the outset.
If that's your case, you can hardcode that info, and then use basic measurements (i.e. rotary encoders on the wheels + compass) to precisely figure out its location in the room/house. No need for Wi-Fi triangulation or crazy sensor setups, in my opinion. At least for a start.
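A minimal dead-reckoning sketch along those lines, assuming wheel encoders plus the compass; the ticks-per-metre constant and the example readings are made up.

```python
# Dead-reckoning sketch: wheel encoder ticks + compass heading -> (x, y).
# The ticks-per-metre constant and the example readings are assumptions.
import math

TICKS_PER_METRE = 2000.0

def update_pose(x, y, encoder_ticks, compass_heading_rad):
    """Advance the pose by the distance rolled since the last update."""
    d = encoder_ticks / TICKS_PER_METRE
    return x + d * math.cos(compass_heading_rad), y + d * math.sin(compass_heading_rad)

x, y = 0.0, 0.0
x, y = update_pose(x, y, encoder_ticks=150, compass_heading_rad=math.radians(90))
```

The estimate will drift over time, which is where the hardcoded map and an occasional bump against a known wall can help pull it back.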
Ever considered GPS? Every position on Earth has unique GPS coordinates - with a resolution of 1 to 3 metres, and with differential GPS you can get down to the sub-10 cm range - more info here:
http://en.wikipedia.org/wiki/Global_Positioning_System
And the Arduino does have lots of GPS module options:
http://www.arduino.cc/playground/Tutorials/GPS
After you have collected all the key coordinate points of the house, you can then write a routine for the Arduino to move the robot from point to point (as collected above) - assuming it will handle all the obstacle avoidance.
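A small sketch of the point-to-point part, assuming the receiver still gives a usable fix indoors: distance and bearing from the current fix to the next waypoint, using a flat-earth approximation (fine over the few metres of a house). The coordinates are example values.

```python
# Sketch: distance and bearing from the current GPS fix to the next waypoint,
# using a flat-earth approximation (the coordinates below are example values).
import math

def to_waypoint(lat, lon, wp_lat, wp_lon):
    # metres per degree of latitude, approximately; longitude shrinks with cos(lat)
    dy = (wp_lat - lat) * 111_320.0
    dx = (wp_lon - lon) * 111_320.0 * math.cos(math.radians(lat))
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0   # 0 = north, 90 = east
    return distance, bearing

print(to_waypoint(52.00000, 4.00000, 52.00002, 4.00003))  # a few metres away
```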
More information can be found here:
http://www.google.com/search?q=GPS+localization+robots&num=100
And inside the list I found this - specifically for your case: Arduino + GPS + localization:
http://www.youtube.com/watch?v=u7evnfTAVyM
I was thinking about this problem too. But I don't understand why you can't just triangulate? Have two or three beacons (e.g. IR LEDs of different frequencies) and a rotating IR sensor 'eye' on a servo. You could then get an almost constant fix on your position. I expect the accuracy would be in the low-centimetre range and it would be cheap. You could then easily map anything you bump into.
Maybe you could also use any interruption in the beacon beams to plot objects that are quite far from the robot too.
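For the triangulation step described above, here is a small Python sketch: two beacons at known positions plus the bearings measured by the rotating sensor give the robot position as the intersection of the two bearing rays. The beacon positions and angles are example values.

```python
# Bearing-only triangulation sketch: two beacons at known positions plus the two
# measured bearings give the robot position (positions/angles are example values).
import math

def triangulate(b1, th1, b2, th2):
    """b1, b2: (x, y) beacon positions; th1, th2: bearings to them in radians."""
    (x1, y1), (x2, y2) = b1, b2
    # Solve t1*u1 - t2*u2 = b1 - b2 for the ranges t1, t2 along each bearing.
    a, b = math.cos(th1), -math.cos(th2)
    c, d = math.sin(th1), -math.sin(th2)
    det = a * d - b * c
    if abs(det) < 1e-9:
        return None                            # bearings nearly parallel: no fix
    ex, ey = x1 - x2, y1 - y2
    t1 = (ex * d - b * ey) / det
    return (x1 - t1 * math.cos(th1), y1 - t1 * math.sin(th1))

# Robot at (2, 2), beacons at (0, 0) and (4, 0):
print(triangulate((0.0, 0.0), math.radians(-135), (4.0, 0.0), math.radians(-45)))
# -> approximately (2.0, 2.0)
```

A third beacon gives redundancy, which helps when one beacon is occluded or the bearing lines are nearly parallel.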
You have a camera, you said? Did you consider looking at the ceiling? There is little chance that two rooms have identical dimensions, so you can identify which room you are in; the position in the room can be computed from the angular distance to the borders of the ceiling, and the direction can probably be extracted from the position of the doors.
This will require some image processing, but since the vacuum cleaner moves slowly in order to clean efficiently, it will have enough time to compute.
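A very rough OpenCV sketch of pulling the ceiling outline out of an upward-looking frame; the file name and Canny thresholds are assumptions, and a real ceiling would need tuning and decent lighting.

```python
# Very rough sketch: pull the ceiling outline out of an upward-looking frame
# (file name and Canny thresholds are assumptions; real scenes need tuning).
import cv2

frame = cv2.imread("ceiling.jpg")              # assumed upward-facing capture
if frame is not None:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        ceiling = max(contours, key=cv2.contourArea)   # largest outline ~ ceiling border
        x, y, w, h = cv2.boundingRect(ceiling)
        # The w/h aspect ratio helps tell rooms apart; where the border sits in
        # the frame hints at how far the robot is from each wall.
        print(w, h)
```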
Good luck!
Use an ultrasonic sensor such as the HC-SR04 or similar.
As mentioned above, sense the distance from the walls with the sensors, and identify which part of the room you are in with QR codes.
When you are near a wall, turn 90 degrees, move forward by the width of your robot, then turn 90 degrees again (i.e. a 90-degree left turn) and keep going. I think it will help :)
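Expressed as a sketch in Python, with hypothetical read_distance_cm(), drive() and turn() helpers, and alternating the turn direction so the sweep goes back and forth across the room; the thresholds and robot width are assumptions.

```python
# Sweep sketch with hypothetical read_distance_cm(), drive() and turn() helpers;
# the thresholds and robot width are assumptions.
WALL_THRESHOLD_CM = 20
ROBOT_WIDTH_CM = 30
turn_direction = "left"

def sweep_step(read_distance_cm, drive, turn):
    """One step of a back-and-forth sweep: go straight until a wall, then shift over."""
    global turn_direction
    if read_distance_cm() > WALL_THRESHOLD_CM:
        drive(forward_cm=5)                    # keep going down the current lane
    else:
        turn(90, turn_direction)               # reached a wall: shift one robot width
        drive(forward_cm=ROBOT_WIDTH_CM)
        turn(90, turn_direction)
        # alternate left/right so successive lanes run in opposite directions
        turn_direction = "right" if turn_direction == "left" else "left"
```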