Assignment via copy-and-swap vs. assignment with two locks

Published 2024-10-18 06:16:50

Borrowing Howard Hinnant's example and modifying it to use copy-and-swap, is this op= thread-safe?

#include <mutex>
#include <vector>

struct A {
  A() = default;
  A(A const &x);  // Assume implements correct locking and copying.

  A& operator=(A x) {
    std::lock_guard<std::mutex> lock_data (_mut);
    using std::swap;
    swap(_data, x._data);
    return *this;
  }

private:
  mutable std::mutex _mut;
  std::vector<double> _data;
};

I believe this is thread-safe (remember op='s parameter is passed by value), and the only problem I can find is the one swept under the rug: the copy ctor. However, it would be a rare class that allows copy-assignment but not copy-construction, so that problem exists equally in both alternatives.

Given that self-assignment is so rare (at least for this example) that I don't mind an extra copy if it happens, I consider the potential optimization of a `this != &rhs` check to be either negligible or a pessimization. Would there be any other reason to prefer or avoid it compared to the original strategy (below)?

A& operator=(A const &rhs) {
  if (this != &rhs) {
    std::unique_lock<std::mutex> lhs_lock(    _mut, std::defer_lock);
    std::unique_lock<std::mutex> rhs_lock(rhs._mut, std::defer_lock);
    std::lock(lhs_lock, rhs_lock);
    _data = rhs._data;
  }
  return *this;
}

Incidentally, I think this succinctly handles the copy ctor, at least for this class, even if it is a bit obtuse:

A(A const &x) : _data {(std::lock_guard<std::mutex>(x._mut), x._data)} {}
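Putting the question's pieces together, here is a minimal self-contained sketch. The `data()` accessor is a hypothetical helper added for demonstration and is not part of the original example:

```cpp
#include <mutex>
#include <utility>
#include <vector>

struct A {
  A() = default;

  // Lock the source while copying its data: the comma expression holds the
  // lock_guard temporary alive for the duration of the copy of x._data.
  A(A const &x) : _data{(std::lock_guard<std::mutex>(x._mut), x._data)} {}

  // Copy-and-swap: the copy is made in the by-value parameter, before (and
  // outside) this object's lock, so only one mutex is held at a time.
  A &operator=(A x) {
    std::lock_guard<std::mutex> lock(_mut);
    using std::swap;
    swap(_data, x._data);
    return *this;
  }

  // Hypothetical accessor, added only so the behavior can be observed.
  std::vector<double> data() const {
    std::lock_guard<std::mutex> lock(_mut);
    return _data;
  }

private:
  mutable std::mutex _mut;
  std::vector<double> _data;
};
```

Note that self-assignment is harmless here: the copy constructor locks and releases the source's mutex before `operator=` locks it again, so `a = a` neither deadlocks nor corrupts `_data`.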


Comments (1)

李白 2024-10-25 06:16:50

I believe your assignment is thread safe (assuming of course no references outside the class). The performance of it relative to the const A& variant probably depends on A. I think for many A that your rewrite will be just as fast if not faster. The big counter-example I have is std::vector (and classes like it).

std::vector has a capacity that does not participate in its value. And if the lhs has sufficient capacity relative to the rhs, then reusing that capacity, instead of throwing it away to a temp, can be a performance win.

For example:

std::vector<int> v1(5);
std::vector<int> v2(4);
...
v1 = v2;

In the above example, if v1 keeps its capacity to do the assignment, then the assignment can be done with no heap allocation or deallocation. But if vector uses the swap idiom, then it does one allocation and one deallocation.
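The capacity point can be checked directly. This sketch asserts only what the standard guarantees (size and a sufficient capacity); whether the buffer is actually reused on assignment is typical implementation behavior, not a guarantee:

```cpp
#include <cassert>
#include <vector>

// Returns true if copy-assignment left v1's buffer in place (the common
// implementation behavior when the lhs already has enough capacity).
bool assign_reuses_buffer() {
  std::vector<int> v1(5);  // size 5, capacity at least 5
  std::vector<int> v2(4);  // size 4
  int *buf_before = v1.data();
  v1 = v2;  // lhs capacity >= rhs size: no reallocation is needed
  assert(v1.size() == 4);
  assert(v1.capacity() >= 4);
  return buf_before == v1.data();  // not guaranteed, but typical
}
```

A copy-and-swap `operator=`, by contrast, always builds the temporary with a fresh allocation and then frees the lhs's old buffer when the temporary is destroyed.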

I note that as far as thread safety goes, both algorithms lock/unlock two locks. Though the swap variant avoids the need to lock both of them at the same time. I believe on average the cost to lock both at the same time is small. But in heavily contested use cases it could become a concern.
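For reference, since C++17 the two-lock acquisition in the original strategy can be written with std::scoped_lock, which uses the same deadlock-avoidance algorithm as std::lock. This is a sketch, not part of the original answer; the struct name `B` and the `size()` accessor are illustrative additions:

```cpp
#include <cstddef>
#include <mutex>
#include <vector>

struct B {
  B() = default;

  // Two-lock assignment, C++17 style: scoped_lock acquires both mutexes
  // at once without risk of deadlock (same algorithm as std::lock).
  B &operator=(B const &rhs) {
    if (this != &rhs) {
      std::scoped_lock lock(_mut, rhs._mut);
      _data = rhs._data;  // may reuse lhs capacity, avoiding an allocation
    }
    return *this;
  }

  // Hypothetical accessor, added only for demonstration.
  std::size_t size() const {
    std::scoped_lock lock(_mut);
    return _data.size();
  }

private:
  mutable std::mutex _mut;
  std::vector<double> _data;
};
```

The trade-off stands as described above: this variant holds both locks simultaneously (a contention concern under heavy load) but lets the lhs vector reuse its capacity.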
