How many calls to HINCRBY per event are reasonable?
I'm trying to reinvent the wheel and store some stats in Redis.
I'm thinking about aggregating eagerly, and incrementing all related counters right after every new event (it can happen several times per second).
It will require calling HINCRBY around 5-50 times per event, and I'm aiming at 5-100 events per second at first.
Is it too much for Redis?
If it is, should I aim for some lower limits (10 times per event? only one?)?
If it's not, can it scale in any of these parameters (I'm more interested in scaling to 1000 events per second, maybe 10000)?
I'll obviously also have to collect garbage. I'm planning to do it by calling EXPIRE on every hash involved in each event (no more than 2-5 calls, since some of the counters live in the same hash). Is that OK?
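To make the numbers concrete, here is a minimal sketch of the per-event write pattern described above. The key scheme (`stats:<metric>:<period>`), the field names, and the event shape are all hypothetical; the point is that one event fans out into a handful of HINCRBY calls plus one EXPIRE per hash touched, and the whole batch can be sent in a single pipeline round-trip.

```python
import time

def event_commands(event, ttl=86400):
    """Build the Redis commands for one event: one HINCRBY per
    (metric, period) pair, plus one EXPIRE per hash touched.
    Key scheme stats:<metric>:<period> is a made-up example."""
    now = time.gmtime(event["ts"])
    periods = {
        "hour": time.strftime("%Y%m%d%H", now),
        "day": time.strftime("%Y%m%d", now),
    }
    cmds = []
    hashes = set()
    for metric in event["metrics"]:  # e.g. "views", "clicks"
        for period in periods.values():
            key = f"stats:{metric}:{period}"
            cmds.append(("HINCRBY", key, event["item_id"], 1))
            hashes.add(key)
    # One EXPIRE per distinct hash, not per HINCRBY call.
    for key in sorted(hashes):
        cmds.append(("EXPIRE", key, ttl))
    return cmds

cmds = event_commands(
    {"ts": 0, "item_id": "page:42", "metrics": ["views", "clicks"]}
)
# 2 metrics x 2 periods -> 4 HINCRBY and 4 EXPIRE commands
```

With redis-py you would send these as one non-transactional pipeline (`pipe = r.pipeline(transaction=False)`, then `pipe.execute_command(*cmd)` for each and a single `pipe.execute()`), so 5-50 increments per event cost one network round-trip, not 50.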
Go nuts. If the hardware is up for it, Redis will be able to handle the load. Obviously you should prototype and try it out as soon as possible, but this is definitely something that Redis should be able to handle.
I suggest you think about scaling already, though. It's much easier to solve the scaling problem up front than waiting for when it becomes an issue. Redis does not (yet) have any clustering solution, and you are limited by RAM (and one CPU), so eventually you will need some way of scaling out to more servers.
The way to do it is client side sharding, i.e. for each operation you hash your key and see which server it lives on, then talk to that server (this obviously makes operations that use more than one key very hard to perform, so you may have to design around that). The Ruby client has some support out of the box, but it isn't hard (although inconvenient) to do yourself if you're using another driver (and Salvatore has a guide, too).
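The hash-and-route step can be sketched in a few lines. The server list here is hypothetical; the only requirements are that the hash function is deterministic and that every command for a given key goes to the same server.

```python
import zlib

# Hypothetical shard list -- in practice, connection objects or URLs.
SERVERS = [
    "redis://10.0.0.1:6379",
    "redis://10.0.0.2:6379",
    "redis://10.0.0.3:6379",
    "redis://10.0.0.4:6379",
]

def server_for(key: str) -> str:
    """Map a key to a shard. crc32 is deterministic across runs and
    processes, so every client routes the same key the same way."""
    return SERVERS[zlib.crc32(key.encode()) % len(SERVERS)]
```

Note the caveat from the answer: a multi-key operation only works if all its keys land on the same shard, so you either design your key scheme so related counters share a key prefix you hash on, or avoid multi-key commands entirely.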
I suggest starting with two or four Redis instances running on the same machine (one per CPU, or something like that), plus another machine running slaves for redundancy and failover (you can also run two masters and two slaves on each server). This way it's not too much work moving instances to other servers if you need to grow. If you have four instances you will be able to move to four machines without much trouble, since all you have to do is set up a slave on the new machine, wait for it to sync, and then use that as the master. If you don't have four instances to start with, moving to a new machine means manually moving keys, which can be a lot of work.