Synchronizing 3 threads over shared buffers with NSConditionLock. It's hard.

Posted 2024-10-17 23:15:42


I have 3 threads (in addition to the main thread). The threads read, process, and write. They each do this to a number of buffers, which are cycled through and reused. The reason it's set up this way is so the program can continue to do the other tasks while one of them is running. So, for example, while the program is writing to disk, it can simultaneously be reading more data.

The problem is I need to synchronize all this so the processing thread doesn't try to process buffers that haven't been filled with new data. Otherwise, there is a chance that the processing step could process leftover data in one of the buffers.

The read thread reads data into a buffer, then marks the buffer as "new data" in an array. So, it works like this:

//set up in main thread
NSConditionLock *readlock = [[NSConditionLock alloc] initWithCondition:0];

//set up lock in thread
[readlock lockWhenCondition:buffer_new[current_buf]];

//copy data to buffer
memcpy(buffer[current_buf],source_data,data_length);

//mark buffer as new (this is reset to 0 once the data is processed)
buffer_new[current_buf] = 1;

//unlock
[readlock unlockWithCondition:0];

I use buffer_new[current_buf] as a condition variable to NSConditionLock. If the buffer isn't marked as new, then the thread in question will lock, waiting for the previous thread to write new data. That part seems to work okay.

The main problem is I need to sync this in both directions. If the read thread happens to take too long for some reason and the processing thread has already finished with processing all the buffers, the processing thread needs to wait and vice-versa.

I'm not sure NSConditionLock is the appropriate way to do this.
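For reference, this two-way handshake (reader waits for a processed slot, processor waits for a filled slot) is the classic bounded-buffer pattern. A minimal C sketch with one pthread mutex and condition variable shows the shape of the logic; the names (`run_pipeline`, `reader`, `processor`) and the integer "buffers" are stand-ins, not the asker's actual code:

```c
#include <pthread.h>

#define NBUF 3      /* three buffers, cycled through */
#define NITEMS 9

static int buffer[NBUF];
static int buffer_new[NBUF];              /* 1 = holds unprocessed data */
static long processed_sum;
static pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;

/* "read" stage: refuses to overwrite a slot until it has been processed */
static void *reader(void *arg) {
    (void)arg;
    for (int i = 0; i < NITEMS; i++) {
        int b = i % NBUF;
        pthread_mutex_lock(&mu);
        while (buffer_new[b])             /* wait: slot still has old data */
            pthread_cond_wait(&cv, &mu);
        buffer[b] = i;                    /* fill with fresh data */
        buffer_new[b] = 1;
        pthread_cond_broadcast(&cv);
        pthread_mutex_unlock(&mu);
    }
    return NULL;
}

/* "process" stage: refuses to touch a slot until it holds new data */
static void *processor(void *arg) {
    (void)arg;
    for (int i = 0; i < NITEMS; i++) {
        int b = i % NBUF;
        pthread_mutex_lock(&mu);
        while (!buffer_new[b])            /* wait: reader hasn't filled it */
            pthread_cond_wait(&cv, &mu);
        processed_sum += buffer[b];       /* stand-in for real processing */
        buffer_new[b] = 0;
        pthread_cond_broadcast(&cv);
        pthread_mutex_unlock(&mu);
    }
    return NULL;
}

long run_pipeline(void) {
    processed_sum = 0;
    pthread_t r, p;
    pthread_create(&r, NULL, reader, NULL);
    pthread_create(&p, NULL, processor, NULL);
    pthread_join(r, NULL);
    pthread_join(p, NULL);
    return processed_sum;                 /* sum of 0..8 */
}
```

The key difference from the NSConditionLock attempt is that both directions of waiting share one lock and one condition variable; each thread sleeps on the predicate it cares about and wakes the other when it flips a flag. NSCondition (not NSConditionLock) is Cocoa's direct equivalent of this wait-on-a-predicate pattern.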

Comments (2)

明媚如初 2024-10-24 23:15:42


I'd turn this on its head. As you say, threading is hard and multi-way synchronization of threads is even harder. Queue based concurrency is often much more natural.

Define three queues; a read queue, a write queue and a processing queue. Then employ a rule stating that no buffer shall be enqueued in more than one queue at a time.

That is, a buffer may be enqueued onto the read queue and, once done reading, enqueued into the processing queue, and once done processing, enqueued into the write queue.

You could use a stack of buffers if you want but, typically, the cost of allocation is pretty cheap compared to the cost of processing and, thus, enqueue-for-read could also do the allocation while dequeue-once-written could do the free.

This would be pretty straightforward to code with GCD. Note that if you really want parallelism, your various queues would really just be throttles, using semaphores -- potentially shared -- to enqueue the work to the global concurrent queues.

Note also that this design has a distinct advantage over what you are currently using in that it uses no locks. The only locks are hidden below the GCD APIs as a part of queue management, but that is effectively invisible to your code.
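The queue-passing design described above can be sketched without GCD as well (libdispatch is Apple-specific), using two small blocking queues built on pthreads. The names (`bqueue`, `run_pipeline`) and the integer payloads are illustrative only; the point is the rule that each buffer lives in at most one queue at a time, so no flags are needed:

```c
#include <pthread.h>
#include <stdlib.h>

#define QCAP 4
#define NJOBS 10

/* A minimal blocking queue of buffer pointers (NULL = end-of-input). */
typedef struct {
    int *slots[QCAP];
    int head, tail, count;
    pthread_mutex_t mu;
    pthread_cond_t cv;
} bqueue;

static void bq_init(bqueue *q) {
    q->head = q->tail = q->count = 0;
    pthread_mutex_init(&q->mu, NULL);
    pthread_cond_init(&q->cv, NULL);
}

static void bq_push(bqueue *q, int *buf) {
    pthread_mutex_lock(&q->mu);
    while (q->count == QCAP)              /* queue full: producer throttles */
        pthread_cond_wait(&q->cv, &q->mu);
    q->slots[q->tail] = buf;
    q->tail = (q->tail + 1) % QCAP;
    q->count++;
    pthread_cond_broadcast(&q->cv);
    pthread_mutex_unlock(&q->mu);
}

static int *bq_pop(bqueue *q) {
    pthread_mutex_lock(&q->mu);
    while (q->count == 0)                 /* queue empty: consumer waits */
        pthread_cond_wait(&q->cv, &q->mu);
    int *buf = q->slots[q->head];
    q->head = (q->head + 1) % QCAP;
    q->count--;
    pthread_cond_broadcast(&q->cv);
    pthread_mutex_unlock(&q->mu);
    return buf;
}

static bqueue to_process, to_write;
static long written_total;

static void *process_stage(void *arg) {
    (void)arg;
    for (;;) {
        int *buf = bq_pop(&to_process);
        if (!buf) { bq_push(&to_write, NULL); return NULL; } /* pass sentinel on */
        *buf *= 2;                        /* stand-in for real processing */
        bq_push(&to_write, buf);          /* hand the buffer to the writer */
    }
}

static void *write_stage(void *arg) {
    (void)arg;
    for (;;) {
        int *buf = bq_pop(&to_write);
        if (!buf) return NULL;
        written_total += *buf;            /* "write", then free the buffer */
        free(buf);
    }
}

long run_pipeline(void) {
    written_total = 0;
    bq_init(&to_process);
    bq_init(&to_write);
    pthread_t p, w;
    pthread_create(&p, NULL, process_stage, NULL);
    pthread_create(&w, NULL, write_stage, NULL);
    for (int i = 1; i <= NJOBS; i++) {    /* "read" stage allocates and fills */
        int *buf = malloc(sizeof *buf);
        *buf = i;
        bq_push(&to_process, buf);
    }
    bq_push(&to_process, NULL);           /* end-of-input sentinel */
    pthread_join(p, NULL);
    pthread_join(w, NULL);
    return written_total;                 /* 2*(1+...+10) */
}
```

As the answer notes, enqueue-for-read does the allocation and dequeue-once-written does the free, so ownership of each buffer simply travels down the pipeline. With GCD the hand-built `bqueue` disappears: each stage becomes a `dispatch_async` onto the next serial queue.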

甜味拾荒者 2024-10-24 23:15:42


Have you seen the Apple Concurrency Programming Guide?

It recommends several preferable methods for moving away from a threads-and-locks concurrency model. Using operation queues, for example, can not only reduce and simplify your code but also speed up your development and give you better performance.

Sometimes you do need to use threads, and you already have the correct idea. But you will need to keep adding locks, and with each one the code gets more complicated, until you can no longer understand it. Then you start adding locks in random places. Then you're screwed.

Read the concurrency guide, then follow bbum's advice.
