What is the critical section code for a shared queue accessed by two threads?
Suppose we have a shared queue (implemented using an array) that two threads can access, one for reading data from it and the other for writing data to it. Now I have a synchronization problem. I'm implementing this using the Win32 API (EnterCriticalSection etc.).

But my curiosity is: what will be the critical section code in the enqueue and dequeue operations of the queue?

Is it just because two threads are using a shared resource? The reason I can't see any problem is this: front and rear indices are both maintained, so when ReaderThread reads, it reads from the front end, and when WriterThread writes, it can easily write to the rear end.

What potential problems can occur?
1 Answer
For a single producer/consumer circular queue implementation, locks are actually not required. Simply set a condition where the producer cannot write into the queue if the queue is full, and the consumer cannot read from the queue if it is empty. Also, the producer will always write to a tail pointer that points to the first available empty slot in the queue, and the consumer will read from a head pointer that represents the first unread slot in the queue.

Your code can look like the following code example (note: I'm assuming in an initialized queue that tail == head, and that both pointers are declared volatile so that an optimizing compiler does not re-order the sequence of operations within a given thread. On x86, no memory barriers are required due to the strong memory consistency model of the architecture, but this would change on other architectures with weaker memory consistency models, where memory barriers would be required.)

The function next_slot simply increments the head or tail pointer so that it returns a pointer to the next slot in the array, accounting for any array wrap-around.

Finally, we guarantee synchronization in the single producer/consumer model because we do not increment the tail pointer until the data has been written into the slot it was pointing to, and we do not increment the head pointer until we have read the data from the slot it was pointing to. Therefore a call to dequeue will not return valid data until at least one call to enqueue has been made, and the tail pointer will never overwrite the head pointer because of the check in enqueue. Additionally, only one thread increments the tail pointer and only one thread increments the head pointer, so there are no issues with shared reads or writes to the same pointer that would create synchronization problems requiring a lock or some type of atomic operation.