Multiprocessor programming: lock-free stack

Posted 2024-09-29 17:27:40


In preparation for my upcoming Concurrent Systems exam, I am trying to complete some questions from the textbook "The Art of Multiprocessor Programming". One question is bugging me:

Exercise 129: Does it make sense to use the same shared BackOff object for both pushes and pops in our LockFreeStack object? How else could we structure the backoff in space and time in the EliminationBackOffStack?

This question bugs me because my first thought is that the question hardly seems to matter: all a backoff object does is make a thread wait, so why not share it? The second part of the question eludes me entirely, and any help is most welcome.

The code for the LockFreeStack:

import java.util.concurrent.atomic.AtomicReference;

public class LockFreeStack<T> {

    AtomicReference<Node> top = new AtomicReference<Node>(null);

    static final int MIN_DELAY = ...;
    static final int MAX_DELAY = ...;
    Backoff backoff = new Backoff(MIN_DELAY, MAX_DELAY);

    // Try once to swing top from its current value to the new node.
    protected boolean tryPush(Node node) {
        Node oldTop = top.get();
        node.next = oldTop;
        return top.compareAndSet(oldTop, node);
    }

    // Retry until the CAS succeeds, backing off after each failure.
    public void push(T value) {
        Node node = new Node(value);
        while (true) {
            if (tryPush(node)) {
                return;
            } else {
                backoff.backoff();
            }
        }
    }

    // pop()/tryPop() and the Node and Backoff classes are omitted here.
}
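
For reference, the pop path that goes with this (not shown above) has the same shape and, as I read the exercise, uses the same shared backoff field. The following is a sketch written from memory of the book's presentation, not code I have in front of me; EmptyException and the Node value field are assumed:

    // Sketch of the omitted pop path: it retries a CAS on the same top
    // reference and calls the same shared Backoff object as push().
    protected Node tryPop() throws EmptyException {
        Node oldTop = top.get();
        if (oldTop == null) {
            throw new EmptyException();   // assumed exception type
        }
        Node newTop = oldTop.next;
        if (top.compareAndSet(oldTop, newTop)) {
            return oldTop;
        } else {
            return null;                  // lost the race; caller retries
        }
    }

    public T pop() throws EmptyException {
        while (true) {
            Node returnNode = tryPop();
            if (returnNode != null) {
                return returnNode.value;  // assumes Node exposes its value
            } else {
                backoff.backoff();        // same shared Backoff as push()
            }
        }
    }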


Comments (3)

感情洁癖 2024-10-06 17:27:40


Not sure if any of my ramblings help, but I'll press the "Post" button anyway.

The answer depends on the implementation of backoff(). Since the goal here is to avoid synchronization, there won't be any shared local storage (maybe some in a ThreadLocal). If the backoff algorithm uses a random number generator, it will need to be reentrant as well. So most likely you are able to share it between pop and push; the question is whether you want to. Since push and pop are both trying to alter the top reference, it would be good if the backoff gave successive threads vastly different delays. Is there more contention around push or pop? Do we need to back off more aggressively in one method than the other? If this is a generic stack, then you won't know.

In terms of how the backoff could be "restructured", I'm also not sure. Could you use a successful push or pop as an opportunity to throttle back the backoff times? How about random backoff waits versus delays drawn from a prime-number sequence assigned per thread via a ThreadLocal?
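
To make the sharing point concrete, here is one way a Backoff could be written so that a single instance is trivially safe to share between push and pop: all mutable state lives in a ThreadLocal and the random source is ThreadLocalRandom. This is a sketch of one possible implementation, not necessarily how the textbook's Backoff class works:

import java.util.concurrent.ThreadLocalRandom;

// A Backoff whose only mutable state (the current delay limit) is per-thread,
// so one instance can be shared by push() and pop() without synchronization.
// Assumes minDelay >= 1 so the random bound below is always positive.
public class SharedBackoff {
    private final int maxDelay;
    private final ThreadLocal<Integer> limit;

    public SharedBackoff(int minDelay, int maxDelay) {
        this.maxDelay = maxDelay;
        this.limit = ThreadLocal.withInitial(() -> minDelay);
    }

    public void backoff() throws InterruptedException {
        int current = limit.get();
        // Sleep a random amount up to the caller's current limit...
        int delayMillis = ThreadLocalRandom.current().nextInt(1, current + 1);
        // ...then double this thread's limit, capped at maxDelay.
        limit.set(Math.min(maxDelay, 2 * current));
        Thread.sleep(delayMillis);
    }
}

With something like this, the same instance can be handed to both the push and the pop retry loops, and each thread still ramps its own delay independently.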

↘人皮目录ツ 2024-10-06 17:27:40


Approaching the first question from a synchronization standpoint, I believe it would make sense to use the same BackOff object for both pushes and pops, regardless of the implementation of that class. The reason is that with a stack we must maintain a consistent state across both the removal and the addition of items. If, however, we were only doing a peek (looking at the first element, the top of the stack), then we could have more than one BackOff object involved, since reads should not need to take a lock on the data source in question. The second question is going to require the code for that class to be posted.
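
Since the EliminationBackOffStack code has not been posted, here is a rough sketch of the "space and time" idea behind it: instead of only sleeping after a failed CAS (time), a thread also backs off onto a randomly chosen slot of an elimination array (space), where a concurrent push and pop can cancel each other out without touching top at all. The sketch substitutes java.util.concurrent.Exchanger for the book's LockFreeExchanger; the names and the timeout are illustrative, not the book's code:

import java.util.concurrent.Exchanger;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Backoff "in space": a failed pusher or popper picks a random slot and waits
// there briefly. A push offers its value, a pop offers null; if the two meet
// on the same slot within the timeout, they have eliminated each other.
public class EliminationArray<T> {
    private static final long DURATION_MS = 10;  // illustrative timeout

    private final Exchanger<T>[] exchangers;

    @SuppressWarnings("unchecked")
    public EliminationArray(int capacity) {
        exchangers = new Exchanger[capacity];
        for (int i = 0; i < capacity; i++) {
            exchangers[i] = new Exchanger<T>();
        }
    }

    // Returns whatever the partner offered; throws TimeoutException if no
    // partner showed up in time, in which case the caller retries its CAS.
    public T visit(T value) throws TimeoutException, InterruptedException {
        int slot = ThreadLocalRandom.current().nextInt(exchangers.length);
        return exchangers[slot].exchange(value, DURATION_MS, TimeUnit.MILLISECONDS);
    }
}

Under this reading, "space" is the range of slots a thread may land on and "time" is how long it waits on its slot before falling back to the CAS loop; the two knobs can be tuned or adapted independently.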

寂寞笑我太脆弱 2024-10-06 17:27:40


Using a backoff is overkill in this situation.

Any real-world application should be spending more time processing the nodes than pushing things onto and off the stack. By comparison, the stack operations should be very short. It follows that two threads are highly unlikely to be accessing the stack at the same time. However, it will happen sometimes, so you need code which is correct.

public void push(T value) { 
   Node node = new Node(value); 
   while (!tryPush(node)) 
       backoff.backoff(); 
 } 

IMHO: Can be shortened to

public void push(T value) { 
   Node node = new Node(value); 
   while (!tryPush(node));
} 
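
If the bare spin bothers you, one possible middle ground on Java 9 and newer is to keep the retry loop but tell the runtime it is busy-waiting. This is just an aside of mine, not something from the book:

public void push(T value) {
    Node node = new Node(value);
    while (!tryPush(node)) {
        Thread.onSpinWait();  // hints to the runtime/CPU that this is a spin loop
    }
}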