Modulus-based load balancing without mutexes?

Published 2024-11-04 21:20:07


I might be going about this all wrong, but here's my problem and proposed solution:

You have a 50+ gigabyte file with hundreds of millions of independent records that need to be processed very quickly. My current solution is getting 74 million records / hour. I'm using a blocking queue for the I/O thread, and each worker thread tries to grab chunks of data from this queue.

The above is pretty slow due to mutex contention between the I/O thread and the worker threads.

Is there a way to do this style of producer/consumer without locks?


Comments (3)

错々过的事 2024-11-11 21:20:07


Rather than using a blocking queue and having the worker threads pull from it, give each thread its own queue and have the I/O thread push batches of work into each thread's queue.

A circular queue would be very good for this, assuming you don't mind taking the extra effort to implement some way to keep track of how many more items can be pushed into each queue; you would have to be careful not to overwrite unprocessed records if the I/O thread is reading new records faster than the worker thread is processing them.

One way of ensuring that records aren't overwritten is to have each worker thread periodically update the I/O thread with a count of how many records it has processed. This approach requires no locking; just an occasional atomic update that the I/O thread reads.

Aside from this, you may also be able to get some better performance by using non-blocking I/O to read more records while you push the last batch into the queues. It also helps to know if the bottleneck is disk access or processing.

一梦浮鱼 2024-11-11 21:20:07


What about a single reader thread putting workably sized chunks into a queue that consumers access? Or have the consumers put their own IDs into a queue that the file reader draws from every time it reads another chunk. The latter would probably not block the reader often.

囚你心 2024-11-11 21:20:07


Single-Producer Single-Consumer (SPSC) lock-free queues exist. From there, you could have the producer thread dispatch the work to each of the workers in round-robin fashion (one queue per worker). Beware that some queues can get full; in that case, just skip them for this round.

Regarding I/O: can you actually split the file? If you have a cheap way to detect the end of a record, it could be simple to split the file and put the various parts on different machines. Or just buy a faster disk.
