std::memory_order_relaxed and fetch_add
I'm trying to gain a deeper understanding of relaxed memory ordering. Per cppreference, there is no synchronization, but atomicity is still guaranteed. Doesn't atomicity in this case require some form of synchronization? For example, how does fetch_add() below guarantee that only one thread will update the value from y to y+1, particularly if writes can become visible to different threads out of order? Is there an implicit sync associated with fetch_add?

memory_order_relaxed — Relaxed operation: there are no synchronization or ordering constraints imposed on other reads or writes; only this operation's atomicity is guaranteed (see Relaxed ordering below).
#include <atomic>
#include <cassert>
#include <cstdint>
#include <thread>
#include <vector>
using namespace std;

static const uint64_t incr = 100000000ULL;
atomic<uint64_t> x{0};

void g()
{
    for (uint64_t i = 0; i < incr; ++i)
    {
        x.fetch_add(1, std::memory_order_relaxed);
    }
}

int main()
{
    const int Nthreads = 4;
    vector<thread> vec;
    vec.reserve(Nthreads);
    for (int idx = 0; idx < Nthreads; ++idx)
        vec.push_back(thread(g));
    for (auto &el : vec)
        el.join();
    // Does not trigger: each relaxed fetch_add is still atomic.
    assert(x.load() == incr * Nthreads);
}
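To see concretely why the atomicity of fetch_add matters, here is a hedged sketch (function names are mine, not from the question) contrasting it with a separate relaxed load followed by a relaxed store. Each individual load and store is atomic, but another thread can slip in between the two, so increments get overwritten; fetch_add performs the read-modify-write as one indivisible step and never loses an increment, even with memory_order_relaxed.

```cpp
#include <atomic>
#include <cstdint>
#include <thread>
#include <vector>

// Non-atomic read-modify-write: a thread can read the counter, be
// preempted while another thread increments, then store a stale value,
// clobbering the other thread's increment.
uint64_t lossy_count(int nthreads, uint64_t per_thread) {
    std::atomic<uint64_t> counter{0};
    std::vector<std::thread> threads;
    for (int t = 0; t < nthreads; ++t)
        threads.emplace_back([&] {
            for (uint64_t i = 0; i < per_thread; ++i) {
                uint64_t v = counter.load(std::memory_order_relaxed);
                counter.store(v + 1, std::memory_order_relaxed); // may clobber
            }
        });
    for (auto &t : threads) t.join();
    return counter.load();
}

// fetch_add is a single indivisible read-modify-write, so every
// increment is counted exactly once, even with relaxed ordering.
uint64_t exact_count(int nthreads, uint64_t per_thread) {
    std::atomic<uint64_t> counter{0};
    std::vector<std::thread> threads;
    for (int t = 0; t < nthreads; ++t)
        threads.emplace_back([&] {
            for (uint64_t i = 0; i < per_thread; ++i)
                counter.fetch_add(1, std::memory_order_relaxed);
        });
    for (auto &t : threads) t.join();
    return counter.load();
}
```

With a few contending threads, lossy_count typically returns noticeably less than nthreads * per_thread, while exact_count always returns exactly that total.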
1 Answer
"Synchronization" has a very specific meaning in C++.

It refers to the following. Suppose:

1. Thread A reads/writes memory X. (Doesn't have to be atomic.)
2. Thread A writes to atomic variable Y. (Must be a release or seq_cst write.)
3. Thread B reads variable Y and sees the value previously written by A. (Must be an acquire or seq_cst read.) At this point, operations (2) and (3) are said to synchronize with each other.
4. Thread B reads/writes memory X. (Doesn't have to be atomic.)

Normally step (4) would cause a data race with thread A (undefined behavior), but here it doesn't, because of the synchronization.

This only works with release/acquire/seq_cst operations, not with relaxed operations. That's what the quote means. A relaxed fetch_add still performs its read-modify-write as one indivisible step, so no increment is ever lost; it simply does not order or publish any other memory accesses around it.
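The four steps above can be sketched as a small message-passing example (the function name and values are illustrative, not from the answer). Thread A publishes a plain int through a release store to an atomic flag; thread B spins with acquire loads until it sees the flag, after which it may read the plain int without a data race.

```cpp
#include <atomic>
#include <thread>

// Returns the value thread B observes in the plain variable x after
// synchronizing with thread A through the atomic flag y.
int run_handoff() {
    int x = 0;                  // "memory X": not atomic
    std::atomic<bool> y{false}; // "variable Y": atomic flag
    int seen = -1;

    std::thread a([&] {
        x = 42;                                   // (1) plain write to X
        y.store(true, std::memory_order_release); // (2) release write to Y
    });
    std::thread b([&] {
        // (3) acquire read of Y: loop until we see A's write.
        while (!y.load(std::memory_order_acquire)) {}
        seen = x; // (4) plain read of X: guaranteed to see 42
    });

    a.join();
    b.join();
    return seen;
}
```

If the store in step (2) and the loads in step (3) were relaxed instead, the read of x in step (4) would be a data race: relaxed operations keep y itself atomic but establish no happens-before edge that makes the write to x visible.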