Line following for a high-speed robot
How can I design a line-following robot that uses only a camera sensor but runs at high speed? I am currently using the OpenCV library to process frames and compute a steering angle from them. However, at higher speeds this approach stops working because the path changes too quickly.
P.S. Is there a specific camera well suited to this application?
This is really supposed to be a complicated system, and high-speed line following is worth a good science publication. So here's a very simplified answer.
When you think about the algorithm, it's important to understand that it is always a trade-off. You can get a camera with low FPS and poor resolution, but then you can only follow smoother lines with large turn radii.
Or, if you need to follow some crazy curve with sharp turns, you should get a good camera, or maybe even several cameras.
First of all, let's assume the following conditions:
There we go.
At the top level, your system looks like a block with negative feedback.
In your case:
Usually, in order to achieve a smooth reaction, you should use PID controllers, and sometimes (in the space industry) Bellman equations.
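A PID loop of this kind can be sketched in a few lines. This is a minimal illustrative implementation, not the poster's actual controller; the class and parameter names are my own, and the gains you'd use in practice have to be tuned on the real robot.

```python
class PID:
    """Minimal PID controller: error in, correction out (illustrative sketch)."""

    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None  # no derivative term on the first sample

    def update(self, error: float, dt: float) -> float:
        # Accumulate the integral term over time.
        self.integral += error * dt
        # Derivative of the error; zero until we have two samples.
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In the line-following case, `error` would be the horizontal offset of the line from the frame center, and the output drives the steering; something like `steering = pid.update(offset_px, 1 / fps)` per frame.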
Robot
Schematically, your robot might look like this:

Now, having this schema at hand, we can talk a little bit about the algorithm.
As was said, a faster camera will allow you to follow sharper turns.
BUT the turn radius also depends on other physical properties of your robot: its mass and its tires.
Camera sensor
If my understanding of your robot is correct, then the camera sensor is just a few filters:
1. Thresholding. It might be Otsu thresholding or even adaptive thresholding; both methods let it work under different lighting conditions.
2. Finding the center of the line with an image-moments filter. As long as the line is black, you should invert the frame first, which can be achieved by passing THRESH_BINARY_INV instead of THRESH_BINARY in the previous step.
3. Picking the x coordinate of the moment centroid and comparing it with the frame midline.
That's it.
This set of filters uses very efficient convolution algorithms and should work with minimal delay even on the oldest RPi versions.
If you need to improve FPS, you can crop your frame at the top and bottom.
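Cropping is just array slicing before the filters run; fewer rows means fewer pixels per frame, hence higher throughput. The band fractions below are illustrative guesses, not values from the original post.

```python
import numpy as np


def crop_band(frame: np.ndarray, top: float = 0.4, bottom: float = 0.9) -> np.ndarray:
    """Keep only a horizontal band of the frame (fractions of its height)."""
    h = frame.shape[0]
    # Slicing creates a view, so this costs almost nothing per frame.
    return frame[int(h * top):int(h * bottom)]
```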
Very sharp turns
This solution might deal with even right-angled turns like this:
All you need to do is tune the steering PID parameters.
But it might not work well with U-turns, where the camera captures both directions at once:
In that case, you should reduce the camera's angle of view.
Further improvements
In fact, the last case, where your camera can see the whole U-turn, might be an advantage. The more of the curve you can recognize, the better you can plan the robot's movements. But that requires more robust and expensive algorithms.
You could also embed a simple LSTM model to recognize some of the dangerous cases.