How do I efficiently store an incoming data stream on Android?

Published 2024-11-14 02:07:38


I'm connecting an Android device to an embedded Data Acquisition (DAQ) system via Bluetooth. The DAQ system will take data samples at rates from 50 Hz up to potentially 880 Hz (possibly more in the future) and push them to the Android device either as the data is collected, or in bundles at the higher sample rates.

There are plenty of examples of how to manage the Bluetooth connection, but not so much on what to do with the data.

I need to persist the data to some kind of long term storage and be able to do this continually at these higher sample rates for an extended period of time.

I know to do this off the UI thread, so no need to harp on that. What storage medium on Android can respond fast enough to keep up with this incoming data? Would an SQLite database be fast enough? It seems like it would bog down fairly quickly.
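For a sense of scale, a back-of-envelope calculation of the raw data rate is useful. The sample size below is an assumption (a timestamp plus one reading packed into 8 bytes); the 880 Hz figure is the upper rate quoted above.

```python
sample_rate_hz = 880       # upper end of the sample rates quoted in the question
bytes_per_sample = 8       # assumption: timestamp + one reading packed into 8 bytes

# Raw throughput the storage layer must sustain, in bytes per second.
throughput = sample_rate_hz * bytes_per_sample
print(throughput)          # 7040 bytes/s, roughly 7 KB/s
```

Even at 880 Hz this is only a few kilobytes per second, which is far below what flash storage can sustain. The real risk is not raw bandwidth but per-write overhead (one transaction and fsync per sample), which is what batching addresses.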


Comments (1)

别挽留 2024-11-21 02:07:38


I know this is a very old question, but I thought I'd throw up an answer anyway. SQLite should work. It would be a good idea to buffer the data into a byte array of a certain length, depending on what data exactly you are storing. Once that array is full, insert its contents into the SQLite database and accept all new data into a second array. Once that one is full, store it to the database, and so on. In this way you achieve a sort of double buffering. Database modification involves disk I/O; you use the buffer because it's more efficient to write one big chunk of data to disk than many small chunks.
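The buffer-then-batch idea above can be sketched as follows. On Android you would use `SQLiteDatabase` with `beginTransaction()`/`endTransaction()`; since that API only exists on-device, this sketch uses Python's built-in `sqlite3` module to illustrate the same technique. The `BATCH_SIZE`, table schema, and class name are illustrative assumptions, not from the original answer.

```python
import sqlite3

BATCH_SIZE = 256  # assumption: flush once this many samples have accumulated (tunable)

class BufferedSampleWriter:
    """Accumulates samples in memory and flushes them to SQLite in one transaction."""

    def __init__(self, conn):
        self.conn = conn
        self.conn.execute("CREATE TABLE IF NOT EXISTS samples (ts INTEGER, value REAL)")
        self.buf = []

    def add(self, ts, value):
        self.buf.append((ts, value))
        if len(self.buf) >= BATCH_SIZE:
            self.flush()

    def flush(self):
        if not self.buf:
            return
        # Swap in a fresh list so new samples can accumulate while this
        # batch is written -- the "double buffering" the answer describes.
        batch, self.buf = self.buf, []
        with self.conn:  # one transaction = one commit/fsync for the whole batch
            self.conn.executemany("INSERT INTO samples VALUES (?, ?)", batch)

# Usage: feed 1000 simulated samples through the writer, then flush the remainder.
conn = sqlite3.connect(":memory:")
writer = BufferedSampleWriter(conn)
for i in range(1000):
    writer.add(i, i * 0.5)
writer.flush()
count = conn.execute("SELECT COUNT(*) FROM samples").fetchone()[0]
```

The key point is that each `INSERT` outside a transaction pays the full commit cost; grouping `BATCH_SIZE` rows into one transaction amortizes that cost across the whole batch, which is what keeps SQLite able to keep up at these sample rates.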
