Can competing atomic operations starve one another?

Posted on 2024-08-20 12:08:45


Imagine a program with two threads. They are running the following code (CAS refers to Compare and Swap):

// Visible to both threads
static int test;

// Run by thread A
void foo()
{
    // Check if value is 'test' and swap in 0xdeadbeef
    while(!CAS(&test, test, 0xdeadbeef)) {}
}

// Run by thread B
void bar()
{
    while(1) {
        // Perpetually atomically write rand() into the test variable
        atomic_write(&test, rand());
    }
}

Is it possible for thread B to perpetually cause thread A's CAS to fail such that 0xdeadbeef is never written to 'test'? Or does natural scheduling jitter mean that in practice this never happens? What if some work was done inside thread A's while loop?
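For concreteness, here is a minimal sketch of the same scenario using C11 atomics and POSIX threads. The names foo, bar, and test follow the pseudocode above; the main scaffolding and the explicit atomic_load are added only for illustration and are not part of the original question:

#include <stdatomic.h>
#include <stdlib.h>
#include <pthread.h>

static atomic_int test;

// Thread A: keep retrying the CAS until 0xdeadbeef sticks.
static void *foo(void *arg)
{
    (void)arg;
    for (;;) {
        // Re-read the current value each iteration and use it as the expected
        // value, mirroring CAS(&test, test, 0xdeadbeef) from the pseudocode.
        int expected = atomic_load(&test);
        if (atomic_compare_exchange_weak(&test, &expected, (int)0xdeadbeef))
            break;
    }
    return NULL;
}

// Thread B: perpetually overwrite 'test' with random values.
static void *bar(void *arg)
{
    (void)arg;
    for (;;)
        atomic_store(&test, rand());
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&b, NULL, bar, NULL);
    pthread_create(&a, NULL, foo, NULL);
    pthread_join(a, NULL);  // returns only if/when thread A's CAS succeeds
    return 0;               // process exit also terminates the bar() thread
}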

Comments (2)

梦忆晨望 2024-08-27 12:08:45


As a theoretical matter, yes. If you could somehow manage to get the two threads running in lockstep like this,

    time     thread A     thread B
    ----     --------     --------
     ||       CAS
     ||                   atomic_write
     ||       CAS
     \/                   atomic_write

then CAS would never return true.

In practice this will never happen when the threads share a CPU/core, and is unlikely to happen when they run on different CPUs or cores. Even then, it's incredibly unlikely to persist for more than a few cycles, and astronomically unlikely to persist for more than a scheduler quantum.

And that's if this code

void foo()
{
    // Check if value is 'test' and swap in 0xdeadbeef
    while(!CAS(&test, test, 0xdeadbeef)) {}
}

does what it appears to do, which is to fetch the current value of test and compare it to test to see whether it has changed. In the real world, iterations of CAS would be separated by code that did actual work. The volatile keyword would be needed to ensure that the compiler fetched test before calling CAS, rather than assuming that a copy it might still have in a register was still valid.

Or the value that you would be testing against wouldn't be the current value of test, but rather some sort of last known value.

In other words, this code example is a test of the theory, but you wouldn't use CAS like this in practice, so even if you could get it to fail, that doesn't necessarily tell you how it would fail when used in real-world algorithms.
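To illustrate the "last known value" pattern described above, here is a hedged sketch of a more typical CAS retry loop in C11. The saturating_increment operation is a hypothetical example chosen for brevity, not something taken from the question:

#include <stdatomic.h>
#include <limits.h>

// Hypothetical example: a saturating counter increment built on a CAS loop.
// 'expected' starts as a snapshot of the counter; when the CAS fails,
// atomic_compare_exchange_weak reloads 'expected' with the value it actually
// observed, so each retry compares against the last known value instead of
// re-reading the variable the way the question's foo() does.
void saturating_increment(atomic_int *counter)
{
    int expected = atomic_load(counter);
    int desired;
    do {
        // The real work between CAS attempts: compute the next value.
        desired = (expected == INT_MAX) ? INT_MAX : expected + 1;
    } while (!atomic_compare_exchange_weak(counter, &expected, desired));
}

A loop like this is lock-free: the system as a whole always makes progress, even though an individual thread can still, in principle, be starved.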

习ぎ惯性依靠 2024-08-27 12:08:45


Starvation certainly can happen in such cases. Quoting the Wikipedia page:

    It has also been shown that the widely-available atomic conditional
    primitives, CAS and LL/SC, cannot provide starvation-free implementations
    of many common data structures without memory costs growing linearly in
    the number of threads. Wait-free algorithms are therefore rare, both in
    research and in practice.

(See the link from the page in question for the mathematical proof).
