Preventing too many connections to memcached (Enyim client)

Asked 2024-11-04 09:03:23

I'm looking for suggestions for an efficient solution for dealing with opening memcached connections given the FAQ quote:

Remember nothing is stopping you from accidentally connecting many times. If you instantiate a memcached client object as part of the object you're trying to store, don't be surprised when 1,000 objects in one request create 1,000 parallel connections. Look carefully for bugs like this before hopping on the list.

See also: Initializing a Memcached Client and Managing Connection Objects.

I considered using a singleton in our caching assembly to provide the memcached client, though I'm sure there must be better methods as the locks would introduce (unneeded?) overhead.
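
For concreteness, the kind of singleton I have in mind is sketched below, assuming Enyim's MemcachedClient is safe to share across threads; with Lazy<T> doing the one-time construction, no lock sits on the hot path at all:

    using System;
    using Enyim.Caching;

    public static class CacheClient
    {
        // Lazy<T> gives thread-safe one-time construction; after that,
        // reads of Instance take no lock.
        private static readonly Lazy<MemcachedClient> client =
            new Lazy<MemcachedClient>(() => new MemcachedClient());

        public static MemcachedClient Instance
        {
            get { return client.Value; }
        }
    }

Callers would then use CacheClient.Instance.Store(StoreMode.Set, key, value) and so on, sharing one client (and its socket pool) per process.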

I am clear on the patterns for use of the client; what I'm not clear on is how to use it efficiently with regard to scalability and performance. How do other people deal with using memcached clients?

There's a bounty of 50 in it for you.


1 Answer

为你鎻心 - answered 2024-11-11 09:03:23

We had a similar scenario with a redis client, and originally our solution was to have a common single instance that we synchronised access to via lock. This was fine, but to avoid the latency and blocking we eventually wrote a thread-safe pipelined client, which allows concurrent use without any blocking. I don't know as much about the memcached protocol, but I wonder if something similar could apply here. I'm actually tempted to try investigating to see if I could add this to BookSleeve (our custom OSS redis client) if you can wait a little while.

But we were generally able to keep up just using a synchronised shared instance (pretty much the same thing as a singleton, depending on how purist you are).


Glancing at the FAQ, pipelining is indeed a possibility; and I'm entirely open to the option of writing an async/pipelined memcached client inside booksleeve. Most of the raw IO / multiplexing would be pretty common with redis. Another trick you can consider is using get_multi etc. rather than separate gets where possible - I don't know whether your current client supports this, though (I haven't looked).
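
To sketch the get_multi idea (assuming the Enyim client exposes a Get overload taking a collection of keys; check your version):

    using System;
    using System.Collections.Generic;
    using Enyim.Caching;

    static class MultiGetExample
    {
        // One round trip for N keys instead of N separate gets; only keys
        // actually present in the cache come back in the dictionary.
        static void PrintUsers(MemcachedClient client)
        {
            var keys = new[] { "user:1", "user:2", "user:3" };
            IDictionary<string, object> hits = client.Get(keys);

            foreach (var pair in hits)
                Console.WriteLine(pair.Key + " => " + pair.Value);
        }
    }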

But: I don't know how memcached contrasts with redis here, but in our case, switching to a pipelined/multiplexed API meant we didn't need to use much pooling (many connections) - a single connection (properly pipelined) is capable of supporting lots of concurrent usage from a single node.
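
If it helps to visualize, the core of the pipelined/multiplexed pattern is roughly the toy sketch below - not BookSleeve's actual code. It relies on the server answering in request order on a given connection, and WriteToSocket is a hypothetical stand-in for the raw socket write:

    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    public class PipelinedConnection
    {
        // Requests awaiting a reply, in the order they hit the wire.
        private readonly ConcurrentQueue<TaskCompletionSource<string>> pending =
            new ConcurrentQueue<TaskCompletionSource<string>>();
        private readonly object writeLock = new object();

        public Task<string> SendAsync(string request)
        {
            var tcs = new TaskCompletionSource<string>();
            lock (writeLock) // held only for the write, never the round trip
            {
                pending.Enqueue(tcs);   // queue order must match wire order
                WriteToSocket(request); // hypothetical raw socket write
            }
            return tcs.Task;            // caller awaits; no thread blocks
        }

        // A single reader loop calls this for each parsed response; FIFO
        // dequeue pairs it with the oldest outstanding request.
        private void OnResponse(string response)
        {
            TaskCompletionSource<string> tcs;
            if (pending.TryDequeue(out tcs))
                tcs.TrySetResult(response);
        }

        private void WriteToSocket(string request)
        {
            // elided: serialize the request bytes onto the shared socket
        }
    }

The interesting property is that SendAsync returns a task immediately, so hundreds of logical callers can be in flight on one socket at once.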
