What is the critical-section code for a shared queue accessed by two threads?

Published 2024-11-26 06:08:42


Suppose we have a shared queue (implemented using an array) which two threads can access, one for reading data from it and the other for writing data to it. Now I have a synchronization problem. I'm implementing this using the Win32 APIs (EnterCriticalSection etc.).

But what I'm curious about is: what should the critical-section code be in the enqueue and dequeue operations of the queue?

Is it just because two threads are using a shared resource? The reason I can't see any problem is this: separate front and rear indices are maintained, so when ReaderThread reads, it reads from the front, and when WriterThread writes, it writes to the rear.

What potential problems can occur?


Comments (1)

罗罗贝儿 2024-12-03 06:08:42


For a single-producer/single-consumer circular queue implementation, locks are actually not required. Simply set a condition where the producer cannot write into the queue if the queue is full, and the consumer cannot read from the queue if it is empty. Also, the producer always writes through a tail pointer that points to the first available empty slot in the queue, and the consumer reads through a head pointer that points to the first unread slot in the queue.

Your code can look like the following example (note: I'm assuming that in a freshly initialized queue tail == head, and that both pointers are declared volatile so that an optimizing compiler does not reorder the sequence of operations within a given thread. On x86, no memory barriers are required thanks to the architecture's strong memory consistency model, but this changes on architectures with weaker memory consistency models, where memory barriers would be required):

// Advances ptr to the next slot in the array, wrapping around at the end.
queue_type::pointer queue_type::next_slot(queue_type::pointer ptr);

bool queue_type::enqueue(const my_type& obj)
{
    if (next_slot(tail) == head)   // queue is full
        return false;

    *tail = obj;                   // write the data first...
    tail = next_slot(tail);        // ...then advance tail to publish it

    return true;
}

bool queue_type::dequeue(my_type& obj)
{
    if (head == tail)              // queue is empty
        return false;

    obj = *head;                   // read the data first...
    head = next_slot(head);        // ...then advance head to free the slot

    return true;
}

The function next_slot simply increments the head or tail pointer it is given and returns a pointer to the next slot in the array, accounting for wrap-around at the end of the array.
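To make the sketch concrete, here is a minimal self-contained version of such a queue_type. The class layout, the capacity constant, and the my_type alias are my own assumptions for illustration, not part of the answer above:

```cpp
#include <cstddef>

using my_type = int;  // element type, assumed for illustration

class queue_type {
public:
    using pointer = my_type*;

    // An empty queue starts with head == tail.
    queue_type() : head(buffer), tail(buffer) {}

    bool enqueue(const my_type& obj)
    {
        if (next_slot(tail) == head)   // full: one slot is always kept unused
            return false;
        *tail = obj;                   // write the data first...
        tail = next_slot(tail);        // ...then advance tail to publish it
        return true;
    }

    bool dequeue(my_type& obj)
    {
        if (head == tail)              // empty
            return false;
        obj = *head;                   // read the data first...
        head = next_slot(head);        // ...then advance head to free the slot
        return true;
    }

private:
    static const std::size_t capacity = 8;  // total slots (assumed size)

    // Advance to the next slot, wrapping around at the end of the array.
    pointer next_slot(pointer ptr)
    {
        ++ptr;
        return (ptr == buffer + capacity) ? buffer : ptr;
    }

    my_type buffer[capacity];
    pointer volatile head;  // advanced only by the consumer
    pointer volatile tail;  // advanced only by the producer
};
```

Note that one slot is always left unused: with capacity slots the queue holds at most capacity - 1 items, which is what lets head == tail unambiguously mean "empty".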

Finally, we guarantee synchronization in the single producer/consumer model because the tail pointer is not incremented until the producer has written the data into the slot it was pointing to, and the head pointer is not incremented until the consumer has read the data from the slot it was pointing to. Therefore a call to dequeue will not return a valid value until at least one call to enqueue has been made, and the tail pointer will never overwrite the head pointer because of the check in enqueue. Additionally, only one thread increments the tail pointer and only one thread increments the head pointer, so there are no concurrent writes to the same pointer that would create synchronization problems requiring a lock or some type of atomic operation.
