Line following robot at high speeds

Posted 2025-02-07 12:08:22

How can one design a line-following bot using only a camera sensor, but for high speeds? I am currently using the OpenCV library to process frames and calculate the steering angle from them. But at higher speeds, since the path changes rapidly, this approach does not work.

P.S. Is there a particular camera that works well for this application?

葬花如无物 2025-02-14 12:08:23

This would properly be a complicated system, and high-speed line following is worth a good scientific publication. So here's a very simplified answer.

When you think about the algorithm, it's important to understand that it is always a trade-off. You can get a camera with a low FPS and poor resolution, but then you can only follow smooth lines with large turn radii.

Or, if you need to follow some crazy curve with sharp turns, you should get a good camera, or maybe several cameras.

First of all, let's assume the following conditions:

  1. You need a solution with a single camera or line sensor.
  2. The line is a dark colour and the background is light coloured.
  3. The robot's speed is 25 mph.
  4. We also have two parameters:
    • thickness - we want to maximize the line-thickness variation the robot can tolerate.
    • turn radius - we want to minimize the turn radius the robot can follow.

There we go.

At the top level, your system looks like a block with negative feedback.

In your case:

  • the feedback is the line's declination from the center... and the robot's velocity.
  • the output signals are the steering and the desired robot velocity. I assume it is better to slow the robot down than to fail and lose the line.

Usually, in order to achieve a smooth reaction, you should use PID controllers and sometimes (in the space industry) Bellman equations.
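
For concreteness, here is a minimal discrete PID controller in Python. This is a sketch only; the class name, gains, and output limits are untuned placeholders of mine, not values from the original answer:

    # Minimal discrete PID controller (sketch only; gains and output
    # limits are untuned placeholders).
    class PID:
        def __init__(self, kp, ki, kd, out_min=-1.0, out_max=1.0):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.out_min, self.out_max = out_min, out_max
            self.integral = 0.0
            self.prev_error = None

        def update(self, error, dt):
            # Integrate the error and differentiate it numerically.
            self.integral += error * dt
            derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
            self.prev_error = error
            out = self.kp * error + self.ki * self.integral + self.kd * derivative
            # Clamp the output to the actuator range.
            return max(self.out_min, min(self.out_max, out))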

Robot

Schematically, your robot might work like this (a code sketch of the loop follows the list):

  1. It accepts the line declination.
  2. It calculates the desired steering via a Steering PID.
  3. It might happen that you should reduce velocity; this is why you might need something like a max-velocity calculator.
  4. Then, knowing the max velocity and the current velocity, you calculate the velocity error and send it to a Thrust PID.
  5. On its output, the Thrust PID forms a positive or negative signal which tells your motors (or rather the hardware motor controller) to accelerate or brake, respectively.
  6. The steering signal also goes to the steering servos.
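
Wiring those steps together with the PID class above gives something like the sketch below. Here get_declination, get_velocity, set_steering, and set_thrust are hypothetical interfaces to your sensor and motor controller, and every gain and constant is a made-up placeholder:

    # Hedged sketch of the control loop described above; all hardware
    # interfaces (get_declination, get_velocity, set_steering, set_thrust)
    # are hypothetical, and all constants are placeholders to tune.
    import time

    steering_pid = PID(kp=0.8, ki=0.0, kd=0.1)
    thrust_pid = PID(kp=0.5, ki=0.1, kd=0.0)

    ABS_MAX_VELOCITY = 11.0  # roughly 25 mph in m/s

    def max_velocity(steering):
        # The sharper the commanded steering, the lower the allowed speed.
        return ABS_MAX_VELOCITY * (1.0 - 0.8 * abs(steering))

    prev = time.monotonic()
    while True:
        now = time.monotonic()
        dt = max(now - prev, 1e-3)             # guard against zero dt
        prev = now

        declination = get_declination()        # from the camera filters
        steering = steering_pid.update(declination, dt)

        velocity_error = max_velocity(steering) - get_velocity()
        thrust = thrust_pid.update(velocity_error, dt)

        set_steering(steering)                 # to the steering servos
        set_thrust(thrust)                     # to the motor controller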

Now, having this schema at hand, we can talk a little bit about the algorithm.

As was said, a fast camera will allow you to follow sharper turns.
BUT the turn radius also depends on other physical properties of your robot: mass, tires.

Camera sensor

If my understanding of your robot is correct, then the camera sensor stage is just a few filters:

  1. Thresholding. It might be Otsu thresholding, or maybe even adaptive thresholding. Both methods can cope with varying light conditions.

  2. Find the center of the line with a moments filter. Since the line is dark, you should invert the frame. This can be achieved by passing THRESH_BINARY_INV instead of THRESH_BINARY in the previous step.

  3. Pick the x coordinate of the moment center and compare it with the frame midline:

    declination = x - frame_width/2

That's it.
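
Put together, these three filters are only a few lines of Python/OpenCV. This is a hedged sketch (and one possible implementation of the hypothetical get_declination used in the control loop above), not a tuned solution:

    # Minimal sketch of the camera-sensor filters: Otsu threshold with
    # inversion, image moments, and declination from the frame midline.
    import cv2

    def get_declination_from_frame(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # THRESH_BINARY_INV makes the dark line the white foreground;
        # THRESH_OTSU picks the threshold value automatically.
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        m = cv2.moments(binary)
        if m["m00"] == 0:
            return None                # line lost
        x = m["m10"] / m["m00"]        # x coordinate of the moment center
        return x - frame.shape[1] / 2  # declination from the midline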

This set of filters uses very efficient convolution algorithms and should work with minimal delay even on the oldest RPi versions.

If you need to improve the FPS, you can crop the frame at the top and bottom.
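
For example, a NumPy slice keeps only a horizontal band of the frame before filtering; the row fractions here are arbitrary placeholders that depend on the camera mount and look-ahead distance:

    # Keep only a horizontal band of the frame; the row fractions are
    # arbitrary placeholders, not recommended values.
    h = frame.shape[0]
    frame = frame[int(0.4 * h):int(0.8 * h), :]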

Very sharp turns

This solution might deal even with right-angled turns like this:

    ------
   |
   |
   ^
<robot>

All you need is to adjust the Steering PID parameters.

But it might not work well with U-turns, where the camera captures both directions of the path at once:

    --
   |  |
   |  | 
   ^
<robot>

In this case, you should reduce the camera's angle of view.

Further improvements

In fact, the last case, where your camera can see the whole U-turn, might be an advantage. The more of the curve you can recognize, the better you can plan the robot's movements. But it requires more robust and expensive algorithms.

You can also embed a simple LSTM model to recognize some dangerous cases.
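
As a rough illustration only (PyTorch assumed; the architecture, window size, and labels are all my own assumptions, and the model would still need training data), such a model could classify a short window of recent declination readings:

    # Rough illustration: a tiny LSTM that classifies a window of recent
    # declination readings as safe vs. dangerous (e.g. an approaching
    # U-turn). Untrained; every size and label here is an assumption.
    import torch
    import torch.nn as nn

    class DangerClassifier(nn.Module):
        def __init__(self, hidden=16):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 2)   # logits: [safe, dangerous]

        def forward(self, x):                  # x: (batch, window, 1)
            _, (h, _) = self.lstm(x)
            return self.head(h[-1])

    model = DangerClassifier()
    window = torch.randn(1, 30, 1)             # last 30 declination readings
    logits = model(window)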
