Will Redis's sorted sets scale?
This may be more of a theoretical question but I'm looking for a pragmatic answer.
I plan to use Redis's Sorted Sets to store the ranking of a model in my database based on a calculated value. Currently my data set is small (250 members in the set). I'm wondering if the sorted sets would scale to say, 5,000 members or larger. Redis claims a 1GB maximum value and my values are the ID of my model so I'm not really concerned about the scalability of the value of the sorted set.
ZRANGE has a time complexity of O(log(N)+M). If I'm most frequently trying to get the top 5 ranked items from the set, log(N) of N set items might be a concern.
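For concreteness, the access pattern I have in mind looks roughly like this sketch (using redis-py in Python; the key name model:rankings and the sample IDs/scores are placeholders, not my real schema):

```python
import redis

r = redis.Redis()

# Each member is a model ID, scored by the calculated ranking value.
r.zadd("model:rankings", {"41": 12.5, "87": 99.0, "12": 47.3})

# Top 5 by score, highest first - O(log(N) + M) with M fixed at 5.
top_five = r.zrevrange("model:rankings", 0, 4, withscores=True)
```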
I also plan to use ZINTERSTORE, which has a time complexity of O(N*K)+O(M*log(M)). I plan to use ZINTERSTORE frequently and retrieve the results with ZRANGE 0 -1.
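Roughly this, as a sketch with placeholder key names (the filter set models:active and the destination key are made up for illustration):

```python
import redis

r = redis.Redis()

# Intersect the ranking set with another set of model IDs and store the result.
# O(N*K) + O(M*log(M)); the destination key is overwritten each time.
r.zinterstore("model:rankings:active", ["model:rankings", "models:active"])

# Then pull everything back, as described above.
ranked = r.zrange("model:rankings:active", 0, -1, withscores=True)
```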
I guess my question is twofold:
- Will Redis sorted sets scale to 5,000 members without issues? 10,000? 50,000?
- Will ZRANGE and ZINTERSTORE (in conjunction with ZRANGE) begin to show performance issues when applied to a large set?
2 Answers
I have had no issues with hundreds of thousands of keys in sorted sets. Sure, getting the entire set back takes longer the larger the set is, but that is expected - even from just an I/O standpoint.
One such instance was on a server with several DBs in use and several sorted sets with 50k to >150k keys in them. High write volume was the norm, as these sets took a lot of ZINCRBY commands coming by way of real-time web server log analysis, peaking at over 150M records per day. And I'd store a week at a time.
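As a rough sketch of that kind of write path (the key layout and names here are illustrative, not the exact ones I used):

```python
import redis

r = redis.Redis()

def record_hit(day, path):
    """Bump a URL's hit count in that day's sorted set as log lines stream in."""
    key = "hits:%s" % day          # one sorted set per day, e.g. "hits:2013-02-14"
    r.zincrby(key, 1, path)        # O(log(N)) per increment
    r.expire(key, 8 * 24 * 3600)   # keep roughly a week of daily sets around

record_hit("2013-02-14", "/index.html")
```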
Given my experience, I'd say go for it and see; it will likely be fine unless your server hardware is really low end.
In Redis, sorted sets have scaling limitations. A sorted set cannot be partitioned. As a result, if the size of a sorted set exceeds the size of a partition, there is nothing you can do (without modifying Redis).
Quote from the article:
"The partitioning granularity is the key, so it is not possible to shard a dataset with a single huge key like a very big sorted set."
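A sketch of why, assuming client-side partitioning with made-up node names: the partition is chosen from the key name, so one big sorted set lives entirely on a single node.

```python
import zlib

NODES = ["redis-a:6379", "redis-b:6379", "redis-c:6379"]  # hypothetical nodes

def node_for(key):
    """Client-side partitioning hashes the whole key name to pick a node."""
    return NODES[zlib.crc32(key.encode()) % len(NODES)]

# The entire sorted set lives on whichever node its key hashes to,
# so it can never be split across partitions.
print(node_for("model:rankings"))
```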
Reference:
[1] http://redis.io/topics/partitioning