Do Redis sorted sets scale?

Posted 2024-11-28 09:03:49

This may be more of a theoretical question but I'm looking for a pragmatic answer.

I plan to use Redis's Sorted Sets to store the ranking of a model in my database based on a calculated value. Currently my data set is small (250 members in the set). I'm wondering if the sorted sets would scale to, say, 5,000 members or larger. Redis claims a 1GB maximum value, and my values are the IDs of my models, so I'm not really concerned about the scalability of the values in the sorted set.

ZRANGE has a time complexity of O(log(N)+M). If I'm most frequently trying to get the top 5 ranked items from the set, the log(N) term for N set items might be a concern.
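
For concreteness, here's a minimal sketch of that read pattern using the redis-py client; the key and member names are made up for illustration:

    import redis

    r = redis.Redis()  # assumes a local Redis instance on the default port

    # Hypothetical ranking: model IDs scored by the calculated value.
    r.zadd("model_rankings", {"model:1": 42.0, "model:2": 17.5, "model:3": 99.1})

    # Top 5 by score, highest first: O(log(N)+5), effectively O(log(N)).
    top_five = r.zrange("model_rankings", 0, 4, desc=True, withscores=True)
    print(top_five)  # [(b'model:3', 99.1), (b'model:1', 42.0), (b'model:2', 17.5)]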

I also plan to use ZINTERSTORE, which has a time complexity of O(N*K)+O(M*log(M)). I plan to use ZINTERSTORE frequently and retrieve the results using ZRANGE 0 -1.
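
Sketched the same way (same assumed client, made-up key names), the intersect-then-read pattern would be:

    import redis

    r = redis.Redis()

    # Intersect two rankings into a destination set: O(N*K) for the
    # intersection plus O(M*log(M)) to sort the M results.
    r.zinterstore("combined_rank", ["model_rankings", "featured_models"], aggregate="SUM")

    # Read the whole result back, lowest score first (ZRANGE 0 -1).
    everything = r.zrange("combined_rank", 0, -1, withscores=True)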

I guess my question is twofold.

  1. Will Redis sorted sets scale to 5,000 members without issues? 10,000? 50,000?
  2. Will ZRANGE and ZINTERSTORE (in conjunction with ZRANGE) begin to show performance issues when applied to a large set?

Comments (2)

大海や 2024-12-05 09:03:49

I have had no issues with hundreds of thousands of keys in sorted sets. Sure, getting the entire set takes longer the larger the set is, but that is expected, even from just an I/O standpoint.

One such instance was on a server with several DBs in use and several sorted sets holding 50k to more than 150k keys each. High write volume was the norm, as these were fed by a lot of ZINCRBY commands coming from realtime webserver log analysis, peaking at over 150M records per day. And I'd store a week at a time.
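
As a rough sketch of that write pattern (redis-py again; the key scheme and week bucket are illustrative, not the original setup):

    import redis

    r = redis.Redis()

    def record_hit(url: str, week: str) -> None:
        # One ZINCRBY per parsed log line; O(log(N)) per call, which
        # holds up fine under a heavy write volume.
        r.zincrby(f"hits:{week}", 1, url)

    record_hit("/index.html", "2024-W48")
    record_hit("/index.html", "2024-W48")
    record_hit("/about", "2024-W48")

    # Most-requested URLs for the week, highest count first.
    print(r.zrange("hits:2024-W48", 0, 9, desc=True, withscores=True))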

Given my experience, I'd say go for it and see; it will likely be fine unless your server hardware is really low end.

戈亓 2024-12-05 09:03:49

In Redis, sorted sets have a scaling limitation: a sorted set cannot be partitioned. As a result, if the size of a sorted set exceeds the size of a partition, there is nothing you can do (without modifying Redis).

Quote from article:

The partitioning granularity is the key, so it is not possible to shard a dataset with a single huge key like a very big sorted set[1].

Reference:

[1] http://redis.io/topics/partitioning
