Why does std::barrier allocate memory on the heap while std::latch doesn't?

The main difference between them is that std::barrier can be reused while std::latch can't, but I can't find an explanation of why this would make the former allocate memory.
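For illustration, here is a minimal sketch of how the two are typically used (the thread count, loop bounds, and names are arbitrary); a std::latch can only be passed once, while a std::barrier resets after every phase:

#include <barrier>
#include <cstdio>
#include <latch>
#include <thread>
#include <vector>

int main() {
    constexpr int n = 4;

    std::latch start_gate(n);      // single-use: once it reaches zero it stays open
    std::barrier sync_point(n);    // reusable: every n arrivals complete a phase

    std::vector<std::jthread> workers;
    for (int i = 0; i < n; ++i) {
        workers.emplace_back([&, i] {
            start_gate.arrive_and_wait();          // can only be passed once
            for (int phase = 0; phase < 3; ++phase) {
                std::printf("thread %d finished phase %d\n", i, phase);
                sync_point.arrive_and_wait();      // resets for the next phase
            }
        });
    }
}   // std::jthread joins automatically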
1 Answer
While it is true that a naive barrier could be implemented with a constant amount of storage as part of the std::barrier object itself, real-world barrier implementations use structures that need more than O(1) storage but have better concurrency properties. The naive, constant-storage barrier can suffer from a large number of threads contending on the same counter, and such contention can lead to O(N) runtime to release the threads from the barrier.

As an example of a "better" implementation, the GCC libstdc++ implementation uses a tree barrier. This avoids the contention on a single counter/mutex shared among all threads, and a tree barrier can propagate the "barrier done, time to release" signal in logarithmic time, at the expense of needing linear space to represent the thread tree.
It's not too difficult to imagine enhanced tree-style barrier implementations that are aware of the cache/socket/memory bus hierarchy and group threads in the tree based on their physical location, keeping cross-core, cross-die, and cross-socket polling to the minimum required.
On the other hand, a latch is a much more lightweight synchronization tool. However, I'm not quite sure why a latch would be forbidden to allocate. cppreference describes std::latch as a downward counter of type std::ptrdiff_t, which would indicate that it should not allocate (i.e. it's just a counter and has no space to hold a pointer to an allocated object). On the other hand, [thread.latch] in the standard says nothing more than "A latch maintains an internal counter that is initialized when the latch is created", without forbidding it from allocating.
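For comparison, a latch really can be little more than that counter. A minimal sketch consistent with the standard's wording (again, just an illustration, not any vendor's implementation) keeps its entire state in one atomic and has no obvious reason to touch the heap:

#include <atomic>
#include <cstddef>

// Minimal latch sketch: the whole object is a single atomic counter.
class simple_latch {
    std::atomic<std::ptrdiff_t> counter_;

public:
    explicit simple_latch(std::ptrdiff_t expected) : counter_(expected) {}

    void count_down(std::ptrdiff_t n = 1) {
        // The last decrement brings the counter to zero and wakes all waiters.
        if (counter_.fetch_sub(n, std::memory_order_acq_rel) == n)
            counter_.notify_all();
    }

    void wait() const {
        // Block until the counter reaches zero; it can never go back up.
        for (std::ptrdiff_t v = counter_.load(std::memory_order_acquire);
             v != 0;
             v = counter_.load(std::memory_order_acquire))
            counter_.wait(v, std::memory_order_acquire);
    }

    void arrive_and_wait(std::ptrdiff_t n = 1) {
        count_down(n);
        wait();
    }
};

Because the counter only ever counts down and the latch is never reused, there is no reset phase and far less pressure to spread the state out the way a reusable barrier might.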