Android USB host - bulkTransfer() is losing data

Posted 2025-01-01 10:29:31


I'm trying to receive data from a custom device based on an FTDI 2232H chip.

I am using a simple Async FIFO mode, and the incoming data rate is 3.2MB/sec.

Everything works perfectly with test code on my PC, but I'm having problems receiving data on my Toshiba Thrive.

FTDI's Android driver fails, so I am coding it in Java.

I can receive 95%+ of the data perfectly, but every once in a while the data 'sputters' and I get portions of the same 4-5K of data two or three times, then back to good data.

I am not going too fast for the Thrive or Android, because I previously had the data coming in at double (6.4MB/sec) and it got about 95% of that as well. (So it should have no problem at half the rate.)

It seems like there is some sort of bug in the buffering (or double-buffering) that happens within Android. (It is not the buffer within the FTDI 2232H because the repeated data is larger than the chip's 4K internal buffer.)

The setup code is simple, and again it's working ~almost~ perfectly.

The loop where the data grab occurs is very simple:

while(!fStop)
  if(totalLen < BIG_BUFF_LEN-IN_BUFF_LEN)
  {
    // Blocking read from the bulk IN endpoint (timeout 0 = wait indefinitely)
    len=conn.bulkTransfer(epIN, inBuff, IN_BUFF_LEN, 0);
    // Append the new chunk to the large accumulation buffer
    System.arraycopy(inBuff, 0, bigBuff, totalLen, len);
    totalLen+=len;
  }

In case you think it's the time delay for the arraycopy - I still lose the data even if I comment that line out.

The IN_BUFF_LEN is 16384 (bulkTransfer won't return more than that even if I increase the size of the inBuff).

The bigBuff is several megabytes.

As a secondary question - does anyone know how to pass a pointer to bulkTransfer so that it populates bigBuff directly, at an offset (not starting at position '0')?
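For reference, later Android versions (API level 18 and up) added a bulkTransfer overload that accepts an offset into the destination array, which would let each transfer land in bigBuff directly. A minimal sketch, assuming that overload is available on the device and reusing the variable names from the loop above:

// Sketch only: relies on bulkTransfer(endpoint, buffer, offset, length, timeout),
// which was added in API level 18 and may not exist on older devices.
while (!fStop)
  if (totalLen < BIG_BUFF_LEN - IN_BUFF_LEN)
  {
    // Read straight into bigBuff at offset totalLen, no arraycopy needed
    len = conn.bulkTransfer(epIN, bigBuff, totalLen, IN_BUFF_LEN, 0);
    if (len > 0)
      totalLen += len;
  }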


Comments (4)

放下 2025-01-08 10:29:31


UsbDeviceConnection.bulkTransfer(...) is buggy. Use the UsbRequest.queue(...) API instead.

Many people have reported that using bulkTransfer directly fails on roughly 1% or 2% of input transfers.
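A minimal sketch of that pattern, assuming the same UsbDeviceConnection (conn) and bulk IN endpoint (epIN) as in the question; the surrounding class and method names are just for illustration:

import android.hardware.usb.UsbDeviceConnection;
import android.hardware.usb.UsbEndpoint;
import android.hardware.usb.UsbRequest;
import java.nio.ByteBuffer;

class UsbRequestReader {
    // Sketch only: 'conn' and 'epIN' are the connection and bulk IN endpoint
    // from the question; 16384 matches its IN_BUFF_LEN.
    void readLoop(UsbDeviceConnection conn, UsbEndpoint epIN) {
        UsbRequest request = new UsbRequest();
        request.initialize(conn, epIN);
        ByteBuffer buffer = ByteBuffer.allocate(16384);
        while (true) {
            buffer.clear();
            // Queue an asynchronous read on the IN endpoint...
            if (!request.queue(buffer, 16384))
                break;                      // queueing failed
            // ...then block until a queued request on this connection completes.
            UsbRequest completed = conn.requestWait();
            if (completed == request) {
                // 'buffer' now holds the received bytes; copy them out here
                // before re-queuing the request.
            }
        }
        request.close();
    }
}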

蓦然回首 2025-01-08 10:29:31


Just to clarify a few of the approaches I tried... The USB code ran in its own thread and was given max priority (no luck). I tried API calls, libUSB, native C, and other methods (no luck). I buffered, polled, and queued (no luck). Ultimately I decided Android could not handle USB data at 'high speed' (a constant 3.2MB/sec with no flow control). I built an 8MB hardware FIFO buffer into my design to make up for it. (If you think you have an answer, come up with something that feeds data in at 3.2MB/sec and see if Android can handle it without ANY hiccups. I'm pretty sure it can't.)

变身佩奇 2025-01-08 10:29:31


In Nexus Media Importer I can consistently push through about 9MB/s, so it is possible. I'm not sure if you have control of the source, but you may want to break the feed into 16K blocks with some sort of sequenced header so you can detect missing blocks and corruption.

Also, you are not checking for len < 0. I'm not sure what will happen if the underlying stack gets a NAK or NYET from the other end; I hit this often enough that I have recovery code to handle it.

I have looked long and hard for a way to offset the bulkTransfer destination buffer, but I have yet to find it. FYI: USBRequest.queue() does not respect ByteBuffer.position().

I'm kind of surprised we can do 16K on bulkTransfer anyway. According to the USB 2.0 spec, the maximum packet size for a bulk endpoint is supposed to be 512 bytes. Is Android bundling the bulkTransfers, or are we breaking the rules?
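To illustrate the two suggestions above (check the bulkTransfer return value, and put a sequenced header on each block), here is a rough sketch; the 4-byte big-endian block counter is an invented header format, and the variable names mirror the question:

// Sketch only: assumes the sender prepends a 4-byte big-endian block counter
// to each 16K payload; the header layout itself is made up for this example.
int expectedSeq = 0;
byte[] inBuff = new byte[16384];
while (!fStop)
{
    int len = conn.bulkTransfer(epIN, inBuff, inBuff.length, 0);
    if (len < 0)
    {
        // Transfer failed (e.g. NAK/stall at the USB level): don't treat this
        // as data; retry, resynchronize, or abort here instead.
        continue;
    }
    if (len >= 4)
    {
        int seq = ((inBuff[0] & 0xFF) << 24) | ((inBuff[1] & 0xFF) << 16)
                | ((inBuff[2] & 0xFF) << 8)  |  (inBuff[3] & 0xFF);
        if (seq != expectedSeq)
        {
            // Gap or repeat detected: log it and resynchronize.
        }
        expectedSeq = seq + 1;
    }
    // ... copy the payload (the bytes after the header) into bigBuff as before ...
}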

绻影浮沉 2025-01-08 10:29:31


You have to be sure that there is no other traffic on the same bus with higher priority than your traffic.
