Implementing blocking between pthreads without condition variables

Posted 2024-12-11 20:51:26


I'm implementing a boss/worker design pattern using pthreads on Linux. I want to have a boss thread that constantly checks for work, and if there is work, then wakes up a sleeping worker to do the work. My question is: what type of IPC synchronization/mechanism should I use to achieve the least latency between my boss thread handing off to my worker, and my worker waking up?

The easy solution is to use pthread condition variables and call pthread_cond_signal in the boss thread and pthread_cond_wait in each of the worker threads, but I'm wondering: is there something faster that I can use to implement the blocking and signaling? For example, how would using pipes between the boss and worker threads fare?
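
For reference, here is a minimal sketch of the condition-variable approach described above (in C, with the work queue reduced to a single counter purely for illustration):

    #include <pthread.h>

    /* Minimal sketch: the "work queue" is just a counter. */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  work_available = PTHREAD_COND_INITIALIZER;
    static int pending_work = 0;

    /* Boss: publish one unit of work and wake one sleeping worker. */
    void boss_hand_off(void) {
        pthread_mutex_lock(&lock);
        pending_work++;
        pthread_cond_signal(&work_available);
        pthread_mutex_unlock(&lock);
    }

    /* Worker: sleep until work arrives, then claim it. */
    void worker_wait_for_work(void) {
        pthread_mutex_lock(&lock);
        while (pending_work == 0)      /* guard against spurious wakeups */
            pthread_cond_wait(&work_available, &lock);
        pending_work--;
        pthread_mutex_unlock(&lock);
        /* ... do the work outside the lock ... */
    }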

How can I measure the performance of one type of IPC versus another? For example, I see benchmarks for pipe() and fork(), but nothing for using pipe() as an inter-thread communication mechanism.
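
One way to measure this directly, rather than relying on published benchmarks: have the boss take a CLOCK_MONOTONIC timestamp immediately before it signals, have the worker take one immediately after it wakes, and look at the difference over many iterations. A sketch of hypothetical timing helpers (the function names are mine, not from any library):

    #include <stdio.h>
    #include <time.h>

    /* The boss calls stamp_signal() right before it signals; the worker
       calls report_wakeup() right after pthread_cond_wait() (or read())
       returns. The difference is the handoff latency. */
    static struct timespec t_signal;

    void stamp_signal(void) {
        clock_gettime(CLOCK_MONOTONIC, &t_signal);
    }

    void report_wakeup(void) {
        struct timespec t_wake;
        clock_gettime(CLOCK_MONOTONIC, &t_wake);
        long ns = (t_wake.tv_sec - t_signal.tv_sec) * 1000000000L
                + (t_wake.tv_nsec - t_signal.tv_nsec);
        printf("wakeup latency: %ld ns\n", ns);
    }

The same pair of stamps works unchanged whether the wakeup mechanism is a condition variable, a semaphore, or a pipe read, so you can compare mechanisms with one harness; collect many samples and compare distributions, not single runs.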

Let me know if I can clarify anything in my questions!

EDIT
As an example of how I would use pipe() to implement blocking between my worker and boss threads, the worker thread would read() from a pipe, and since it is empty, would block on that read call until the boss calls write() on it.
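
A minimal sketch of what that pipe-based handoff could look like (assuming pipe(pipefd) was called once during setup; error handling omitted):

    #include <stdint.h>
    #include <unistd.h>

    static int pipefd[2];  /* pipefd[0]: read end, pipefd[1]: write end */

    /* Worker: read() blocks while the pipe is empty. */
    void worker_wait_on_pipe(void) {
        uint8_t token;
        ssize_t n = read(pipefd[0], &token, 1);  /* sleeps until boss writes */
        (void)n;  /* a real version would check for errors and EOF */
        /* ... do the work ... */
    }

    /* Boss: writing one byte unblocks one sleeping read(). */
    void boss_signal_via_pipe(void) {
        uint8_t token = 1;
        write(pipefd[1], &token, 1);
    }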


Comments (3)

み零 2024-12-18 20:51:26


The glibc implementation of pthreads uses the low-level "futex" locks to implement pthread_cond_wait() / pthread_cond_signal(). Futexes were designed to be a fast synchronisation primitive, so these are likely to outperform pipes or similar methods (at the very least, using pipes requires copying a byte to and from kernel space that isn't needed for futexes).

If pthread_cond_wait() / pthread_cond_signal() map well onto your problem (and it sounds like they do), then the only way to outperform them is likely to be to implement something on futexes yourself (for example, you could eliminate the handling of thread cancellation if you do not use that).

It is probably worth benchmarking your application - unless your work units are very small indeed, the condition variable wakeup latency is unlikely to dominate.
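
For illustration, a rough sketch of the kind of hand-rolled futex handoff this answer alludes to (FUTEX_PRIVATE_FLAG applies because the threads share one address space; this is a simplified binary flag that coalesces multiple posts, and production futex code needs more care around lost wakeups):

    #define _GNU_SOURCE
    #include <linux/futex.h>
    #include <stdatomic.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static long futex(atomic_int *uaddr, int op, int val) {
        return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
    }

    static atomic_int work_flag = 0;

    /* Worker: sleep in the kernel only while work_flag is still 0. */
    void worker_futex_wait(void) {
        while (atomic_exchange(&work_flag, 0) == 0) {
            /* The kernel re-checks work_flag == 0 before sleeping, so a
               wakeup that races with this call is not lost. */
            futex(&work_flag, FUTEX_WAIT | FUTEX_PRIVATE_FLAG, 0);
        }
        /* ... do the work ... */
    }

    /* Boss: publish work and wake at most one sleeping worker. */
    void boss_futex_wake(void) {
        atomic_store(&work_flag, 1);
        futex(&work_flag, FUTEX_WAKE | FUTEX_PRIVATE_FLAG, 1);
    }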

任谁 2024-12-18 20:51:26


What you should do first is make sure you need something faster. Since pthread signaling is implemented using futexes, where futex stands for "fast userspace mutex", I don't think you can outperform them.

If you have waiting threads, by definition you will have to wake them up, and this round trip through the kernel will be the source of your unwanted latency.

But what you should do is really think about your problem:

  • If you constantly have work to do, then your worker thread is always busy. Work will be done when the previous work is finished, and you don't care about the latency.

  • If what matters is the latency between the boss detecting an event and the worker starting to work, then why do you use a boss -> worker pattern?

My advice would be to look for a faster thing when you really need it, at which point you will probably have a much more detailed question to ask. Maybe I am wrong, but it looks like you are trying to optimize prematurely, which as you perhaps know is the root of all evil. Of course, bad design can lead to massive rework, but here you are dealing with a very small detail of your real design decision, which is using a boss/worker pattern.

Implement your design with pthread_cond_signal(), or perhaps sem_post() / sem_wait(), and then look at where your latency really is, and whether it is really a problem.
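
For completeness, a sketch of the sem_post() / sem_wait() variant this answer suggests: a POSIX semaphore's count naturally represents the number of pending work items, so no separate mutex is needed for the handoff itself.

    #include <semaphore.h>

    static sem_t work_sem;

    void setup(void) {
        sem_init(&work_sem, 0, 0);  /* pshared = 0: threads of one process */
    }

    /* Boss: each post allows one sem_wait() to return. */
    void boss_post_work(void) {
        sem_post(&work_sem);
    }

    /* Worker: blocks until the count is nonzero, then decrements it. */
    void worker_take_work(void) {
        sem_wait(&work_sem);  /* a robust version would retry on EINTR */
        /* ... do the work ... */
    }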

絕版丫頭 2024-12-18 20:51:26


I would guess signal and wait would be the best. Most OSes recognize threads and can have them just idle until the interrupt comes. With pipes the worker would have to keep waking up and checking the pipe for output. The best efficiency test I've found is usually to use the Unix time command to get the running time from start to finish (assuming the program isn't meant to keep running in the background), set up a script to run it a few times, and compare.
