pyserial/Python and real-time data acquisition

Posted 2024-12-07 13:27:51

I have an infrared camera/tracker with which I am communicating via the serial port. I'm using the pyserial module to do this at the moment. The camera updates the position of a tracked object at a rate of 60 Hz. In order to get the position of the tracked object I execute one pyserial.write() and then listen for an incoming reply with pyserial.read(serialObj.inWaiting()). Once the reply/position has been received, the while loop is re-entered, and so on. My question has to do with the reliability and speed of this approach. I need the computer to receive the position at a rate of at least 60 Hz (and the position will then be sent via UDP to a real-time OS). Is this something that Pyserial/Python are capable of, or should I look into alternative C-based approaches?

Thanks,
Luke
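
For reference, here is a minimal sketch of the polling loop described above, using pyserial and a UDP socket. The port name, baud rate, request byte, and RTOS address are all hypothetical placeholders for whatever the camera's actual protocol requires; `in_waiting` is the pyserial 3.x spelling of `inWaiting()`.

```python
# Minimal sketch of the write/poll/read loop described in the question.
# Port name, baud rate, request byte, and RTOS address are placeholders.
import socket
import serial

ser = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1.0)  # assumed settings
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
RTOS_ADDR = ("192.168.1.10", 5005)       # assumed address of the real-time OS

GET_POSITION = b"\x02"                   # placeholder request byte for the camera

while True:
    ser.write(GET_POSITION)              # ask the camera for one position sample
    # in_waiting is the newer name for inWaiting(); "or 1" makes the read block
    # (up to the 1 s timeout) instead of spinning when no byte has arrived yet.
    reply = ser.read(ser.in_waiting or 1)
    if reply:
        udp.sendto(reply, RTOS_ADDR)     # forward the raw reply to the RTOS
```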


Comments (3)

〆凄凉。 2024-12-14 13:27:51

This is more a matter of latency than speed.

Python always performs memory allocation and release, but if the data is reused, the same memory will be reused by the C library.
So the OS (C library / UDP/IP stack) will have more impact than Python itself.

I really think you should use a serial port on your RTOS machine and use C code and pre-allocated buffers.
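
If you do stay in Python, the buffer-reuse idea can at least be approximated: the sketch below reads into one pre-allocated bytearray instead of letting every read() return a fresh bytes object. The reply size, port settings, request byte, and RTOS address are assumptions, and readinto() relies on pyserial's Serial implementing the io.RawIOBase interface (pyserial 3.x).

```python
# Rough Python approximation of the pre-allocated-buffer idea: one bytearray
# is allocated up front and refilled on every cycle. All constants are assumed.
import socket
import serial

REPLY_SIZE = 16                          # assumed fixed reply length from the camera
buf = bytearray(REPLY_SIZE)              # allocated once, reused for every read
view = memoryview(buf)

ser = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1.0)  # assumed settings
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
RTOS_ADDR = ("192.168.1.10", 5005)       # assumed RTOS endpoint

while True:
    ser.write(b"\x02")                   # placeholder request byte
    n = ser.readinto(buf)                # fill the pre-allocated buffer in place
    if n:
        udp.sendto(view[:n], RTOS_ADDR)  # send only the bytes actually received
```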

故人如初 2024-12-14 13:27:51

Python should keep up fine, but the best thing to do is make sure you monitor how many reads per second you are getting. Count how many times the read completed each second, and if this number is too low, write to a performance log or similar. You should also consider decoupling the I/O part from the rest of your python program (if there is one) as pyserial read calls are blocking.
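
A minimal sketch of both suggestions, assuming a hypothetical port name, request byte, and a 50-reads-per-second warning threshold: the blocking serial I/O runs in its own thread, hands results to the rest of the program through a queue, and logs a warning when the per-second read count drops.

```python
# Sketch: blocking serial reads decoupled into a worker thread, with a simple
# reads-per-second counter. Port name, request byte, and the threshold of 50
# reads/s are illustrative assumptions.
import queue
import threading
import time
import serial

positions = queue.Queue()                # hand-off point to the rest of the program

def reader(port="/dev/ttyUSB0"):
    ser = serial.Serial(port, baudrate=115200, timeout=1.0)  # assumed settings
    reads = 0
    window_start = time.monotonic()
    while True:
        ser.write(b"\x02")               # placeholder request byte
        reply = ser.read(ser.in_waiting or 1)
        if reply:
            positions.put(reply)
            reads += 1
        now = time.monotonic()
        if now - window_start >= 1.0:    # once a second, check the read rate
            if reads < 50:               # well below the 60 Hz target
                print("WARNING: only %d reads in the last second" % reads)
            reads = 0
            window_start = now

threading.Thread(target=reader, daemon=True).start()

# The main thread (UDP sender, logging, etc.) consumes from `positions`
# without ever being stalled by the blocking read() calls.
```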

谁把谁当真 2024-12-14 13:27:51

I would suspect that Python will keep up with the data just fine. My advice would be to try it, and if Python appears to lag, then try PyPy instead, an implementation of Python that compiles most of your inner loops down to machine code for speeds close to that of C.

http://pypy.org/
