Low-latency interrupt handling (what is the expected average time to return from the kernel to user space?)

Posted 2024-09-24 04:08:32


I have a Fibre Optic link, with a proprietary device driver.
The link goes into a PCIe card, running on RHEL 5.2 (2.6.18-128~).
I have mmap'ed the interface on the card for setup and FIFO access etc., and these reads/writes take a few µs to complete, so all good there.

But of course I cannot use this mapping for interrupts, so I have to use the kernel module provided, with its user-space lib interface.

WaitForInterrupt(); // API lib interface to kernel module
// Interrupt occurs and am returned to my code in user space
time = CurrentTime() - LatchedTime(); // time to get to here

It takes around 70 µs to return from WaitForInterrupt(). (The time the interrupt is raised is latched in the firmware; I read that register, which as noted above takes ~2 µs, and compare it against the current time in the firmware.)

What is the expected latency between an interrupt occurring and the user-space API wait call returning?

How long do network or other high-speed interfaces typically take?


Comments (3)

煞人兵器 2024-10-01 04:08:32


500 ms is many orders of magnitude larger than what a simple switch between user space and the kernel takes, but as someone mentioned in the comments, Linux is not a real-time OS, so there's no guarantee 500 ms "hiccups" won't show up now and then.

It's quite impossible to tell what the culprit is; the device driver could simply be trying to bundle up data to be more efficient.

That said, we've had endless trouble with some custom cards and their interactions with both the APIC and ACPI, requiring a delicate balance of BIOS settings, which card goes into which PCI slot, and whether a particular video card screws up everything - likely a case of a dubious driver interacting with more-or-less buggy BIOSes/video cards.

二智少女 2024-10-01 04:08:32


If you're able to reliably exceed 500 µs on a system that's not heavily loaded, I think you're looking at a bad driver implementation (or its user-space wrapper/counterpart).

In my experience, the latency to wake a user thread on interrupt should be less than 10 µs, though (as others have said) Linux provides no latency guarantees.

自找没趣 2024-10-01 04:08:32

If you have a recent kernel, you can use the perf sched tool to measure scheduling latency and see where the time is being spent. (500 µs does sound a tad on the high side, depending on your processor, how many tasks are running, and so on.)
