Estimating concurrent Azure AppFabric cache connections
I'm doing some planning for crisis situations (last time we went from 4k visitors a day to 1.3M) and I notice that the lower end of the Azure AppFabric cache has some pretty low simultaneous connection limits: the 128 MB & 256 MB caches allow 10 concurrent connections, for instance.
I'm using the cache for web role session state - but only putting things into it in a very limited set of circumstances (the live site peaked at 0.03 MB last month!). How do I figure out the maximum number of connections - is that going to be equivalent to the number of servers I have pointing at it? The number of CPUs?
I've not tried it yet, but scaling the cache looks like it might be a 24-hour operation, so it's not responsive enough for scaling up on emergency demand.
I'm just after some guidelines to help me pick an initial cache size and to scale sensibly.
Comments (1)
The number of connections you have to the cache is basically the number of DataCacheFactory instances you have. For this reason it is good practice to have as few of these instances as you can. You also need to be sure that any Azure instance that has initialised a DataCacheFactory disposes of it when it stops; this helps clean up the connections it has open to the service.
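As a rough illustration, a single shared factory can be created lazily and disposed of in the role's OnStop. This is a minimal sketch, assuming the standard Microsoft.ApplicationServer.Caching client with a "default" dataCacheClient section already defined in web.config; the helper names here (CacheClient, Shutdown) are just placeholders.

    using System;
    using Microsoft.ApplicationServer.Caching;
    using Microsoft.WindowsAzure.ServiceRuntime;

    // Hypothetical helper: one DataCacheFactory (and therefore one cache
    // connection) shared by the whole role instance.
    public static class CacheClient
    {
        private static readonly Lazy<DataCacheFactory> Factory =
            new Lazy<DataCacheFactory>(() => new DataCacheFactory());

        private static readonly Lazy<DataCache> Cache =
            new Lazy<DataCache>(() => Factory.Value.GetDefaultCache());

        public static DataCache Default
        {
            get { return Cache.Value; }
        }

        public static void Shutdown()
        {
            if (Factory.IsValueCreated)
            {
                // Disposing the factory closes the connections it holds
                // open to the cache service.
                Factory.Value.Dispose();
            }
        }
    }

    public class WebRole : RoleEntryPoint
    {
        public override void OnStop()
        {
            CacheClient.Shutdown();
            base.OnStop();
        }
    }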
However, you said that you are using the cache as a session provider. This creates its own DataCacheFactory for each instance of the role, so basically one connection per role instance. Having said that, the session provider appears to be a little lax about how it cleans up connections it doesn't use, so it's best to over-provision the number of connections.

The other thing that you need to watch is the "Transactions Per Hour" limit on the cache. If every request to a page needs to access session information, this is how many page requests you can serve in an hour.
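To make that concrete, here is a back-of-the-envelope estimate. The quota figure and the assumption of two cache transactions per page (one session read, one write) are illustrative only; substitute the limits published for your cache offering.

    using System;

    class ThroughputEstimate
    {
        static void Main()
        {
            // Illustrative numbers only - check the quota for your cache size.
            long transactionsPerHourQuota = 400000;
            int cacheOpsPerPageRequest = 2;   // e.g. one session read + one session write

            long pagesPerHour = transactionsPerHourQuota / cacheOpsPerPageRequest;
            Console.WriteLine("Max page requests per hour: {0}", pagesPerHour); // 200000
        }
    }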
Resizing your cache is actually pretty quick; it generally only takes a minute or so. But you can only change it once in every 24-hour period. So if you have increased load you can make the cache larger, but if the load is too much even for the increased cache, then you need to wait for the 24 hours to be up before you can change it again. So you're probably better off making the cache much larger the first time you resize it and shrinking it back down the next day.
EDIT:
While this information was correct at the time of writing, the November 2011 (1.6) update to the SDK introduced cache connection pooling, which is turned on by default if you're not configuring the cache through code. This makes it less crucial to have just one static DataCacheFactory, and it means that if you want to use the same connection information for both session and application data, this could all be one connection. More details can be found on MSDN.
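If you do configure the client in code, the pool size can also be capped explicitly. This sketch assumes the DataCacheFactoryConfiguration named-section constructor and its MaxConnectionsToServer property from the 1.6+ Microsoft.ApplicationServer.Caching client; verify both against the SDK version you deploy.

    using Microsoft.ApplicationServer.Caching;

    public static class PooledCache
    {
        public static DataCache Open()
        {
            // Reuse the same "default" dataCacheClient section that the session
            // state provider points at, so session and application data share
            // the pooled connection(s).
            var config = new DataCacheFactoryConfiguration("default")
            {
                MaxConnectionsToServer = 1 // a single pooled connection
            };

            return new DataCacheFactory(config).GetDefaultCache();
        }
    }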