How to make a robot follow a route using a camera

Published 2024-12-20 09:51:15


Comments (4)

别在捏我脸啦 2024-12-27 09:51:15

Here's one approach, suitable for refinement.

Through a combination of zooming, a pixellation filter, and thresholding, reduce the camera input to a 3 by 3 grid of white or black squares. With suitable adjustment, the reduction should be able to blow up the line such that it takes up exactly three of the reduced pixels. The robot's logic then consists of moving in one of eight directions to keep the center pixel black.

The image after reduction:

☐ ■ ☐
☐ ■ ☐ ↑ move forward one unit
☐ ■ ☐

What a left turn looks like:

☐ ☐ ☐
■ ■ ☐ ← turn 90 degrees left
☐ ■ ☐

This is a very simple scheme, and converting the video input to a clean 3 by 3 grid isn't a trivial task, but it should be enough to get you moving in the right direction.
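
Below is a minimal Python/OpenCV sketch of that reduction and decision step, assuming a dark line on a light floor; the function names, blur kernel, and threshold value are illustrative guesses rather than anything stated in this answer.

    import cv2
    import numpy as np

    def reduce_to_grid(frame, thresh=96):
        # Pixelate: blur, then shrink the whole frame to a 3 x 3 grid of averages.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        blur = cv2.GaussianBlur(gray, (9, 9), 0)
        small = cv2.resize(blur, (3, 3), interpolation=cv2.INTER_AREA)
        return small < thresh                       # True where a cell is mostly line

    def decide(grid):
        # Keep the centre cell on the line; otherwise step toward the nearest line cell.
        if grid[1, 1]:
            return (0, 0)                           # centre already on the line: go straight
        ys, xs = np.nonzero(grid)
        if xs.size == 0:
            return None                             # line lost: stop and search
        k = int(np.argmin(np.hypot(ys - 1, xs - 1)))
        return (int(np.sign(xs[k] - 1)), int(np.sign(ys[k] - 1)))  # one of eight directions

The returned pair is a unit step in image coordinates; mapping it to wheel commands, including the 90-degree turn case shown above, is left to the robot's controller.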

写下不归期 2024-12-27 09:51:15

One option is to use OpenCV or a similar image processing / vision library together with a camera looking forward and downward to do the following:

  1. Place a tape of a known, unusual, and bright color such as yellow or orange. If you're willing to mount a light source on your robot, you might even use retroreflective tape.
  2. Use OpenCV to break the color video image into three HSV planes - hue (color), saturation, and value (intensity).
  3. Use a Stroke Width Transform to identify the line. The following StackOverflow post has a link to a video about the SWT:
    Stroke Width Transform (SWT) implementation (Java, C#...)
  4. Apply a thinning algorithm (Stentiford or Zhang-Suen) to reduce the segmented tape line to a single pixel width.
  5. Treat the single pixels as input points for a curve fit. Or, more simply, calculate the angle from end point to end point for every group of N successive points in your line.
  6. Apply a little 3D geometry as well as information about your robot's current direction and speed to calculate the time at which it must turn, and how sharply, in order to follow the line.

If your robot moves slowly, then a camera looking downward might be more suitable. The calculations are easier, but the robot wouldn't be able to look ahead as far.
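
As a rough sketch of steps 2, 4, and 5, here is one way this could look in Python, assuming orange tape and the opencv-contrib package (which provides cv2.ximgproc.thinning, a Zhang-Suen-style thinning); the HSV bounds are placeholders that would need tuning for your camera and lighting.

    import cv2
    import numpy as np

    def line_angle(frame_bgr):
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)             # step 2: hue/saturation/value planes
        mask = cv2.inRange(hsv, (5, 120, 120), (25, 255, 255))       # keep orange-ish pixels
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
        skeleton = cv2.ximgproc.thinning(mask)                       # step 4: reduce to single-pixel width
        ys, xs = np.nonzero(skeleton)
        if xs.size < 2:
            return None                                              # no line visible
        pts = np.column_stack((xs, ys)).astype(np.float32)
        vx, vy, _, _ = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()  # step 5: overall direction
        return float(np.degrees(np.arctan2(vy, vx)))                 # angle of the line in the image

Step 6 then sits on top of this: the angle and the line's position in the frame feed whatever steering controller the robot uses.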

梦途 2024-12-27 09:51:15

How fast does your robot need to move? There are AI (Artificial Intelligence) options (which will be slower than making the simple decisions described in the other answer).

In the AI field, you could investigate:

Self-organising maps (SOM) to try and do reasoning on the black line. You can teach it to identify shapes (I have used it to identify letters before). I think this can be calculated fairly quickly on a modern computer, but it depends on your robot's hardware (I can't entirely remember).

AI techniques take quite a while to train, and quite a while for you to learn. The other answer is also a good option if you want a fixed, coded way of doing it.
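
For illustration only, here is a tiny self-organising map written from scratch in NumPy (the class name and parameters are hypothetical, not from any particular library); it maps flattened binary patches onto a small grid of prototype units, so that similar line shapes land on neighbouring units which can then be labelled "straight", "left turn", and so on.

    import numpy as np

    class TinySOM:
        def __init__(self, grid=(4, 4), dim=9, lr=0.5, sigma=1.0, seed=0):
            rng = np.random.default_rng(seed)
            self.w = rng.random((grid[0], grid[1], dim))    # one prototype vector per map unit
            self.lr, self.sigma = lr, sigma

        def winner(self, x):
            d = np.linalg.norm(self.w - x, axis=2)          # distance from x to every unit
            return np.unravel_index(np.argmin(d), d.shape)  # best-matching unit

        def train(self, samples, epochs=50):
            rows, cols = np.indices(self.w.shape[:2])
            for _ in range(epochs):
                for x in samples:
                    bi, bj = self.winner(x)
                    # pull the winner and its neighbours toward the sample
                    dist2 = (rows - bi) ** 2 + (cols - bj) ** 2
                    h = np.exp(-dist2 / (2 * self.sigma ** 2))[..., None]
                    self.w += self.lr * h * (x - self.w)

After training on flattened binary patches, each new frame is classified by looking up which labelled unit wins for that frame's patch.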

秋心╮凉 2024-12-27 09:51:15

Well, there are a number of things you can do. I would start by reading about union-find algorithms, if you don't know them. Depending on your environment, you might be able to get away with 1) contrast-normalizing or performing histogram equalization on the image, then 2) adding all pixels roughly meeting your line color to a union find data structure. For robustness (if you have a color camera), I would pick something that shows up really strongly, like bright orange, instead of black, which contains no hue information and will be detected everywhere.
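
As a rough sketch of that segmentation step (OpenCV's connected-components labelling produces the same pixel grouping that a union-find over like-coloured pixels would), with placeholder HSV bounds standing in for "bright orange":

    import cv2
    import numpy as np

    def line_mask(frame_bgr):
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        h, s, v = cv2.split(hsv)
        hsv = cv2.merge((h, s, cv2.equalizeHist(v)))                  # rough contrast normalisation on the value plane
        mask = cv2.inRange(hsv, (5, 120, 120), (25, 255, 255))        # pixels roughly matching the line colour
        n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)  # group them into segments
        if n < 2:
            return None                                               # nothing but background
        biggest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))     # guess: the largest segment is the line
        return (labels == biggest).astype(np.uint8) * 255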

From there, you need some way to make decisions based on the processed data. You can imagine writing a simple (though surely imperfect) algorithm to guess which of the image segments is the line, calculate its orientation, and compute a turning direction for your controller. Perhaps you scan a few horizontal lines in the image and fit a line to the maximal responses from your filter. This might get you most of what you want. Maybe you can come up with something more clever based on your hardware, or, for example, enforce that the line always runs from the bottom of the image to some height.
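
And a simple sketch of the row-scanning decision, assuming a binary mask in which line pixels are non-zero (for example, the output of the segmentation sketch above); the row fractions and the normalised-offset output are arbitrary choices.

    import numpy as np

    def steering_offset(mask):
        h, w = mask.shape
        centres = []
        for frac in (0.6, 0.75, 0.9):                 # a few rows in the lower part of the image
            xs = np.nonzero(mask[int(frac * h)])[0]
            if xs.size:
                centres.append(np.mean(xs))           # centre of the line on this row
        if not centres:
            return None                               # line not visible
        # > 0 means the line is to the right of the image centre, so steer right
        return float((np.mean(centres) - w / 2.0) / (w / 2.0))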

If you're really ambitious and skilled with math, you should read up on (camera) lens calibration. Essentially, by fitting a mathematical model to your camera lens and assuming everything in the image is on the ground beneath the robot, you could actually figure out where the line runs in 3D with respect to your robot. This last step will require a significant amount of knowledge, though, so don't expect it to be easy!
