Performance issues with AppFabric cache
I am finding that when AppFabric cache comes under heavy load it is resulting in unpredictable application behaviour.
Has anyone experienced anything similar?
Any thoughts on ideal configuration for AppFabric?
2 Answers
It seems like the limits you have applied to AppFabric are the ultimate cause of your performance problem. That said, you may also want to ensure that you have configured channelOpenTimeout and requestTimeout to suitable values - the defaults are quite high, and in many cases it would be preferable to re-read the data from your data store rather than wait for AppFabric to respond.
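As a sketch, both timeouts can be set on the client's `dataCacheClient` configuration section in app.config/web.config. The values below (in milliseconds) and the host name are purely illustrative, not recommendations:

```xml
<!-- Illustrative client config: fail fast instead of waiting on a slow cache.
     Host name and timeout values are placeholders. -->
<dataCacheClient channelOpenTimeout="3000" requestTimeout="5000">
  <hosts>
    <host name="CacheServer1" cachePort="22233" />
  </hosts>
</dataCacheClient>
```

With short timeouts like these, a cache call that stalls under load surfaces quickly as an exception, which the application can handle by falling back to the underlying data store.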
The maximum size of an object that can be cached is 8 MB (by default). You can change it through the advanced configuration properties if your production application needs to cache objects of that size.
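One place this limit can reportedly be raised is the client's transport properties; the snippet below is a sketch, and the exact attribute set may vary by AppFabric version (the 16 MB value and host name are placeholders):

```xml
<!-- Illustrative: raise the maximum serialized object size above the 8 MB default. -->
<dataCacheClient>
  <transportProperties maxBufferSize="16777216" maxBufferPoolSize="268435456" />
  <hosts>
    <host name="CacheServer1" cachePort="22233" />
  </hosts>
</dataCacheClient>
```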
Regarding the other question: what happens if we try to pump 150 MB of data into a cache of 128 MB?
1. The objects will get evicted using a best-effort LRU policy, and newer objects will replace them.
2. If the incoming rate is faster than the rate of eviction, the cache might be throttled, blocking all writes for some duration.
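The LRU behaviour in point 1 can be illustrated with a toy cache. Python is used here purely for illustration; AppFabric is a .NET product whose eviction runs in the background on a best-effort basis, so this sketch only shows the basic least-recently-used idea:

```python
from collections import OrderedDict

class ToyLruCache:
    """Toy fixed-capacity cache that evicts least-recently-used entries.

    Illustration only - not AppFabric's actual implementation.
    """

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._items = OrderedDict()  # oldest entry first

    def put(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        # Evict the least-recently-used entries once over capacity.
        while len(self._items) > self.capacity:
            self._items.popitem(last=False)

    def get(self, key):
        if key not in self._items:
            return None
        self._items.move_to_end(key)  # mark as recently used
        return self._items[key]

# Pumping more items than the cache can hold evicts the oldest ones.
cache = ToyLruCache(capacity=3)
for k in ["a", "b", "c", "d"]:
    cache.put(k, k.upper())

print(cache.get("a"))  # evicted -> None
print(cache.get("d"))  # still present -> "D"
```

Point 2 is the important caveat: eviction is asynchronous, so sustained writes faster than the eviction rate can still push the cache into throttling, where writes fail until pressure subsides.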