Memcached limitations
Has anyone experienced memcached limitations in terms of:
- Number of objects in cache store - is there a point where it loses performance?
- Amount of allocated memory - what are the basic numbers to work with?

2 Answers
I can give you some metrics for our environment. We run memcached for Win32 on 12 boxes (as the cache for a very database-heavy ASP.NET web site). Each of these boxes has its own other responsibilities; we just spread the memcached nodes across all machines with memory to spare. Each node has at most 512MB allocated to memcached.
Our nodes have 500-1000 connections open on average. A typical node holds 60,000 items in cache and handles 1,000 requests per second (!). All of this runs fairly stably and requires little maintenance.
We have run into 2 kinds of limitations:
1. CPU use on the client machines. We use .NET serialization to store and retrieve objects in memcached. It works seamlessly, but CPU use can get very high under our loads. We found that some objects are better converted to strings (or HTML fragments) first and then cached.
2. We have had some problems with memcached boxes running out of TCP/IP connections. Spreading across more boxes helped.
We run memcached 1.2.6 and use the .NET client from http://www.codeplex.com/EnyimMemcached/
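The serialization trade-off in point 1 can be sketched in Python (not the .NET stack the answer uses, but the same idea applies): serializing a structured object on every cache round-trip costs CPU, while caching a pre-rendered string needs no deserialization work on read. The `Product` class and the commented-out cache call are illustrative assumptions, not from the original post.

```python
import pickle

# A structured object of the kind a database-heavy site might cache (illustrative).
class Product:
    def __init__(self, sku, name, price):
        self.sku, self.name, self.price = sku, name, price

product = Product("sku-42", "Widget", 9.99)

# Option 1: serialize the whole object on every cache write, and
# deserialize on every read. This is where client CPU time goes.
blob = pickle.dumps(product)
restored = pickle.loads(blob)
assert restored.sku == "sku-42"

# Option 2: render to a string (or HTML fragment) once, then cache
# the result. Reading it back is a plain byte copy, no CPU-heavy
# deserialization step.
html_fragment = f"<li>{product.name}: ${product.price:.2f}</li>".encode()
# cache.set("product:sku-42", html_fragment)  # hypothetical cache call
```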
I can't vouch for the accuracy of this claim, but at a Linux/developer meetup a few months ago an engineer talked about how his company scaled memcached back to using 2GB chunks, 3-4 per memcached box. They found that throughput was fine, but with very large memcached daemons they were getting 4% more misses. He said they couldn't figure out why there was a difference, but decided to just go with what worked.
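Running several smaller daemons per box, as described above, only works because the client maps each key deterministically to one node. A minimal sketch of that key-to-node mapping, assuming a hypothetical node list (real clients such as the Enyim one typically use consistent hashing, which remaps fewer keys when nodes are added or removed):

```python
import hashlib

# Hypothetical node list: several small daemons instead of one big one.
NODES = ["10.0.0.1:11211", "10.0.0.1:11212", "10.0.0.2:11211"]

def node_for(key: str) -> str:
    """Deterministically map a cache key to one memcached node."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

# The same key always lands on the same node, so a get finds the
# value that an earlier set stored there.
assert node_for("user:123") == node_for("user:123")
```

The simple hash-mod scheme shown here is the easiest to follow, but note that changing the length of `NODES` remaps most keys, which is exactly the kind of miss-rate disturbance consistent hashing is designed to avoid.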