Are immutable objects good practice?
Should I make my classes immutable where possible?
I once read the book "Effective Java" by Joshua Bloch, and he recommends making all business objects immutable for various reasons (for example, thread safety).
Does this apply to C# too?
Do you try to make your objects immutable, so you have less problems when working with them?
Or is it not worth the inconvenience of creating them?
5 Answers
This is going to be more of an opinion-type answer, but...
I find that the ease of understanding a program, i.e. of maintaining and debugging said application, is inversely proportional to the number of stateful transitions that occur while each component does its processing. The less state I need to cart around in my head, the more attention I can pay to the logic within the algorithms as written.
Immutable objects are the central feature of functional programming; they have their own advantages and disadvantages. (E.g. linked lists are practically impossible to make immutable, but immutable objects make parallelism a piece of cake.) So, as a comment on your post noted, the answer is "it depends".
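To illustrate, here is a minimal sketch of what such an immutable type can look like in C#; the Money type and its members are hypothetical, not taken from the question or this answer.

```csharp
// Hypothetical immutable value object: all state is assigned once in the
// constructor, and "modification" returns a new instance. Because nothing can
// change an existing instance, it can be shared across threads without locks.
public sealed class Money
{
    public decimal Amount { get; }    // get-only auto-property, set only in the constructor
    public string Currency { get; }

    public Money(decimal amount, string currency)
    {
        Amount = amount;
        Currency = currency;
    }

    // Instead of mutating this instance, hand back a new one.
    public Money Add(decimal delta) => new Money(Amount + delta, Currency);
}
```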
Off the top of my head, I can't think of a reason why immutable objects would make thread-safe code somehow "better".
If I want an object to be thread-safe, I will either put a lock around it or I will make a copy of it and update the reference once I'm done working on it. I typically wouldn't want a new object for every little change.
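As a rough sketch of that copy-then-swap-the-reference idea (the SettingsHolder type, its members, and the choice of a plain Dictionary are illustrative assumptions, not code from this answer):

```csharp
using System.Collections.Generic;

// Hypothetical "copy, modify the copy, then publish the new reference" pattern.
// Readers always see a complete snapshot; no dictionary is ever mutated while
// another thread might be reading it.
public sealed class SettingsHolder
{
    private readonly object _writeLock = new object();
    private volatile Dictionary<string, string> _current = new Dictionary<string, string>();

    public string Get(string key)
    {
        var snapshot = _current;               // lock-free read of the current snapshot
        return snapshot.TryGetValue(key, out var value) ? value : null;
    }

    public void Set(string key, string value)
    {
        lock (_writeLock)                      // serialize writers so no update is lost
        {
            var copy = new Dictionary<string, string>(_current) { [key] = value };
            _current = copy;                   // swap in the new snapshot
        }
    }
}
```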
For me, immutable strings create more headaches for threading than they help.
I actually went out of my way to make an "in-place" ToUpper using unsafe code instead of the built-in String.ToUpper(). It runs about 4 times faster and consumes about half the peak memory.
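The general shape of such an in-place upper-casing is roughly the sketch below (my reconstruction, not the answerer's actual code). It deliberately breaks the immutability contract of System.String, so it must only be used on freshly allocated strings you own, never on literals or interned/shared instances, and it requires compiling with /unsafe.

```csharp
public static class StringHacks
{
    // Reconstruction of the idea only: mutate the string's character buffer in
    // place through an unsafe pointer instead of allocating a new string.
    // Dangerous: every reference to this exact string instance changes too.
    public static unsafe void ToUpperInPlace(string s)
    {
        fixed (char* p = s)                    // pin the string's internal buffer
        {
            for (int i = 0; i < s.Length; i++)
                p[i] = char.ToUpperInvariant(p[i]);   // per-char, culture-invariant
        }
    }
}
```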
Another nice benefit of immutable structures is that you can locally cache instances of them and reuse them across multiple threads without fear of unexpected behaviors as would be the case if they were mutable.
For instance, suppose you are using an external caching service such as memcached or Velocity or some other equally simplistic distributed hashtable service. You could just use the C# client library and call it good enough. However, that is wasteful with resources in a short-lived context like a web request scenario. What you really want is to pull each object from the cache once and only once within your context.
The safest way to get this job done is to place a local hashtable in your process in front of the cache provider. On the first request for a cache key, you pull down the serialized byte stream that represents the object you wish to use and store that byte stream in your local hashtable. On subsequent requests for the same cache key, you just look up the byte stream in the local hashtable and deserialize the object to a new instance for each request. This prevents multiple redundant trips to the cache server node for the same information, which presumably has not changed over the lifetime of your context.
With immutable structures, you can deserialize the byte stream only once, on the first request, and get away with storing the deserialized instance in the hashtable instead of the byte stream, sharing that one single immutable instance of your object. This obviously cuts down on deserialization penalties, which can add up rather quickly if your consuming code is written in such a fashion that it does not care how many calls it makes to the caching provider, assuming the cache is faster than querying your underlying data store.
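A rough sketch of that per-context cache of deserialized immutable instances might look like the following; IRemoteCache, LocalObjectCache and the deserialization delegate are placeholders for whatever client library and serializer you actually use.

```csharp
using System;
using System.Collections.Concurrent;

// Placeholder for the distributed cache client (memcached, Velocity, ...).
public interface IRemoteCache
{
    byte[] GetBytes(string key);   // one network round-trip per call
}

// Per-context (e.g. per web request) front for the remote cache: each key is
// fetched and deserialized at most once, and because the cached objects are
// immutable, the single deserialized instance can safely be handed to every caller.
public sealed class LocalObjectCache
{
    private readonly IRemoteCache _remote;
    private readonly Func<byte[], object> _deserialize;
    private readonly ConcurrentDictionary<string, object> _local =
        new ConcurrentDictionary<string, object>();

    public LocalObjectCache(IRemoteCache remote, Func<byte[], object> deserialize)
    {
        _remote = remote;
        _deserialize = deserialize;
    }

    public T Get<T>(string key) where T : class =>
        (T)_local.GetOrAdd(key, k => _deserialize(_remote.GetBytes(k)));
}
```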
Perhaps this is more of a subjective answer, but it's a specific problem that can be solved uniquely by using immutable structures so I thought it was relevant to share.
The immutable Eric Lippert has written a whole series of blog posts on the topic. Part one is here.
Quoting from the earlier post that he links to: