Thread-safe collections in .NET
What is the standard nowadays when one needs a thread-safe collection (e.g. a Set)?
Do I synchronize it myself, or is there an inherently thread safe collection?
4 Answers
The .NET 4.0 Framework introduces several thread-safe collections in the System.Collections.Concurrent namespace.
Other collections in the .NET Framework are not thread-safe by default and need to be locked for each operation.
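Since the question specifically asks about a set, here is a minimal sketch (my illustration, not part of the original answer) of the two usual options: System.Collections.Concurrent has no ConcurrentHashSet, so a ConcurrentDictionary keyed by the element is a common stand-in, and the alternative is a plain HashSet with every operation wrapped in a lock.

```csharp
// Sketch only: the class and member names below are illustrative, not from the answer.
using System.Collections.Concurrent;
using System.Collections.Generic;

public class ThreadSafeSetExamples
{
    // Option 1: ConcurrentDictionary used as a set (the value is a throwaway byte).
    private readonly ConcurrentDictionary<string, byte> _concurrentSet =
        new ConcurrentDictionary<string, byte>();

    public bool Add(string item) => _concurrentSet.TryAdd(item, 0);
    public bool Contains(string item) => _concurrentSet.ContainsKey(item);
    public bool Remove(string item) => _concurrentSet.TryRemove(item, out _);

    // Option 2: a plain HashSet with every operation wrapped in a lock.
    private readonly HashSet<string> _set = new HashSet<string>();
    private readonly object _gate = new object();

    public bool AddLocked(string item)
    {
        lock (_gate) { return _set.Add(item); }
    }

    public bool ContainsLocked(string item)
    {
        lock (_gate) { return _set.Contains(item); }
    }
}
```

The locked HashSet is easier to extend to compound operations (for example, check-then-add as one atomic step), whereas ConcurrentDictionary only makes each individual call atomic.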
Prior to .NET 4.0 most collections in .NET are not thread-safe. You'll have to do some work yourself to handle the synchronization (a sketch of that per-operation locking follows the list below): http://msdn.microsoft.com/en-us/library/573ths2x.aspx
Quote from article:
SyncRoot property
lock statement
In .NET 4.0 they introduced the System.Collections.Concurrent namespace, which contains:
BlockingCollection
ConcurrentBag
ConcurrentQueue
ConcurrentDictionary
OrderablePartitioner
Partitioner
Partitioner<T>
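To make the quoted pre-4.0 approach concrete, here is a minimal sketch (my example, not code from the linked article) that wraps the non-generic System.Collections.Queue and takes a lock on its SyncRoot property around every operation.

```csharp
// Sketch only: LegacySynchronizedQueue is an illustrative name, not from the article.
using System.Collections;

public class LegacySynchronizedQueue
{
    private readonly Queue _queue = new Queue(); // System.Collections.Queue is not thread-safe

    public void Enqueue(object item)
    {
        // Every operation takes the same lock, here the collection's SyncRoot.
        lock (_queue.SyncRoot)
        {
            _queue.Enqueue(item);
        }
    }

    public bool TryDequeue(out object item)
    {
        lock (_queue.SyncRoot)
        {
            if (_queue.Count > 0)
            {
                item = _queue.Dequeue();
                return true;
            }
            item = null;
            return false;
        }
    }
}
```

In practice, locking on a private object you own is usually preferred over SyncRoot, because SyncRoot is publicly reachable and unrelated code could take the same lock; the sketch uses SyncRoot only to mirror the article's terminology.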
.NET 4 provides a set of thread-safe collections under System.Collections.Concurrent.
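As a small, hedged example of how those types compose (my sketch, not from the answer), a BlockingCollection can wrap a ConcurrentQueue to form a simple producer/consumer pipeline:

```csharp
// Sketch only: a producer/consumer pipeline built from two of the concurrent collection types.
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class ProducerConsumerDemo
{
    public static void Main()
    {
        using (var items = new BlockingCollection<int>(new ConcurrentQueue<int>()))
        {
            var producer = Task.Run(() =>
            {
                for (int i = 0; i < 10; i++)
                    items.Add(i);
                items.CompleteAdding(); // tell consumers no more items are coming
            });

            var consumer = Task.Run(() =>
            {
                // Blocks while empty and completes once CompleteAdding has been called.
                foreach (var item in items.GetConsumingEnumerable())
                    Console.WriteLine(item);
            });

            Task.WaitAll(producer, consumer);
        }
    }
}
```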
In addition to the very useful classes in System.Collections.Concurrent, one standard technique in mostly-read, rarely-change scenarios (or where writes are frequent but non-concurrent) that is also applicable to .NET is called copy-on-write. It has a couple of properties that are desirable in highly concurrent programs.
Limitation: If there are concurrent writes, modifications may have to be retried, so the more concurrent writes there are, the less efficient it becomes. (That's optimistic concurrency at work.)
Edit: Scott Chamberlain's comment reminded me that there's another limitation: if your data structures are huge and modifications occur often, a copy-all-on-write might be prohibitive both in terms of memory consumption and the CPU cost of the copying involved.
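Here is a minimal copy-on-write sketch (my illustration of the technique described above, not code from the answer): readers just read the current snapshot reference, while writers copy the set, modify the copy, and publish it with a compare-and-swap, retrying if another writer got there first; that retry loop is exactly the optimistic-concurrency limitation mentioned.

```csharp
// Sketch only: CopyOnWriteSet is an illustrative type, not code from the answer.
using System.Collections.Generic;
using System.Threading;

public class CopyOnWriteSet<T>
{
    private HashSet<T> _snapshot = new HashSet<T>();

    // Readers never lock; they just read the current snapshot reference.
    public bool Contains(T item) => Volatile.Read(ref _snapshot).Contains(item);

    public bool Add(T item)
    {
        while (true)
        {
            var current = Volatile.Read(ref _snapshot);
            if (current.Contains(item))
                return false;

            // Copy the whole set and apply the change to the copy...
            var copy = new HashSet<T>(current) { item };

            // ...then publish it only if no other writer swapped in the meantime.
            if (Interlocked.CompareExchange(ref _snapshot, copy, current) == current)
                return true;

            // Another writer won the race: retry against the new snapshot.
        }
    }
}
```

In newer code the same pattern can be expressed with the System.Collections.Immutable package (for example ImmutableHashSet<T> updated via ImmutableInterlocked.Update) instead of hand-rolling the retry loop.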