What if 2^32 is not enough?

Posted 2024-07-15 22:31:52

What if you have so many entries in a table that 2^32 is not enough for your auto_increment ID within a given period (day, week, month, ...)?
What if the largest datatype MySQL provides is not enough?

I'm wondering how I should handle a situation where so many entries requiring a unique ID are added to my table that I fill up the datatype within one period.

How could I, natively within MySQL (or any other system), achieve an unlimited number of unique IDs, or at least grow the range exponentially?

Ideally, I would expect something like

> SELECT * FROM table;

+---+------+
| a |  b   |
+---+------+
| 1 |  1   |
| 1 |  2   |
| 1 |  3   |
|...| .... |
|...| .... |
| 1 | 2^32 |
| 2 |  1   |
| 2 |  2   |
+---+------+

This would grow the number of possible IDs exponentially: each additional column multiplies the range by another 2^32.

How do you cope with such situations?
Remember: the requirement is a unique ID for every entry.
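
To illustrate, a minimal sketch of that layout (table and column names are placeholders): the pair (a, b) is the unique ID, b counts through its full range, and a is bumped whenever b wraps around.

    CREATE TABLE t (
        a INT UNSIGNED NOT NULL,  -- outer counter, incremented when b wraps
        b INT UNSIGNED NOT NULL,  -- inner counter, runs through the full 2^32 range
        PRIMARY KEY (a, b)        -- the pair (a, b) is the unique ID
    );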

Comments (10)

掐死时间 2024-07-22 22:31:52

Don't you think a BIGINT UNSIGNED would be sufficient? That's a range of 0 to 18,446,744,073,709,551,615, or one year with 50,539,024,859,478,223 entries per day (365 d/y), 2,105,792,702,478,259 entries per hour, 35,096,545,041,304 entries per minute, or 584,942,417,355 per second.

Assuming 600 writes per second (without any reads), you could write entries at full speed for 974,904,028 years. That should be enough.
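
To make that concrete, a minimal sketch of such a column (table and column names are hypothetical):

    -- 2^64 - 1 = 18,446,744,073,709,551,615 possible IDs
    CREATE TABLE events (
        id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        payload VARCHAR(255),
        PRIMARY KEY (id)
    );

At 600 inserts per second, this counter lasts on the order of a billion years before it wraps.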

忆依然 2024-07-22 22:31:52

You could use BIGINT for the primary key; it is a 64-bit integer.

Edit #2: Apparently what I said before about varying the BIGINT byte length was incorrect. BIGINT is fixed at 8 bytes.

君勿笑 2024-07-22 22:31:52

Just use 128-bit keys. There is no need for an unlimited number of keys: as the key grows you very quickly allow more rows than there are atoms in the universe (somewhere around 256 bits).

回忆追雨的时光 2024-07-22 22:31:52

If you have so much data that you run into this problem, then choosing a primary key is probably the least of your concerns.

If you're using the InnoDB engine, it may help performance to choose a primary key that you'll frequently be searching on (especially where searches return many rows), since InnoDB clusters the table on the primary key, which makes range scans better.
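
For example, a sketch of that idea (the schema is hypothetical): clustering on (sensor_id, taken_at) keeps each sensor's rows physically adjacent, so a per-sensor time-range scan touches contiguous pages.

    CREATE TABLE readings (
        sensor_id INT UNSIGNED NOT NULL,
        taken_at  DATETIME NOT NULL,
        value     DOUBLE,
        PRIMARY KEY (sensor_id, taken_at)  -- InnoDB stores rows in this order
    ) ENGINE=InnoDB;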

话少情深 2024-07-22 22:31:52

I'd start by moving to BIGINT for 2^64. GUIDs would be another option, but you need to store them yourself in "some form".
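
One possible "some form", sketched under the assumption of MySQL 8.0+ (where UUID_TO_BIN() and BIN_TO_UUID() are available), keeping the key at a compact 16 bytes:

    CREATE TABLE items (
        id BINARY(16) NOT NULL,  -- 128-bit UUID stored as raw bytes
        PRIMARY KEY (id)
    );

    INSERT INTO items (id) VALUES (UUID_TO_BIN(UUID()));

    SELECT BIN_TO_UUID(id) FROM items;  -- back to the familiar text form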

恬淡成诗 2024-07-22 22:31:52

不要使用自动增量主键 - 使用 GUID 或类似内容 - 来自维基百科文章:

虽然每个生成的 GUID 不是
保证是唯一的,总数
唯一键的数量(2^128 或
3.4×10^38) 如此之大以至于相同数字的概率
生成两次是无限小的
小的。 例如,考虑
可观测宇宙,其中包含
约5×1022颗星; 每个明星都可以
则有 6.8×1015 普遍唯一
GUID。

Don't use an autoincrementing primary key - use a GUID or similar - from the Wikipedia article:

> While each generated GUID is not guaranteed to be unique, the total number of unique keys (2^128 or 3.4×10^38) is so large that the probability of the same number being generated twice is infinitesimally small. For example, consider the observable universe, which contains about 5×10^22 stars; every star could then have 6.8×10^15 universally unique GUIDs.

傾旎 2024-07-22 22:31:52

When you add another column to your key, you effectively double the number of index scans you need to perform (albeit on a much smaller index for the second column).

As stated earlier, your best bet for VAST data sets is either a GUID (if your RDBMS supports it natively) or a varchar(16).

The nice part about using a varchar / varbinary is that you can expand the column in the future if needed. The bad part is that a varchar / varbinary key performs worse than an integer key.
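
A sketch of that expansion idea (names hypothetical): a variable-width key column can be widened in place later, which a fixed-width integer cannot.

    CREATE TABLE t2 (
        id VARBINARY(16) NOT NULL,   -- room for a 128-bit key
        PRIMARY KEY (id)
    );

    -- Later, if 128 bits ever run out, widen the column:
    ALTER TABLE t2 MODIFY id VARBINARY(32) NOT NULL;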

ζ澈沫 2024-07-22 22:31:52

I'm not sure how to generate them automatically in MySQL, and they wouldn't necessarily be sequential, but I'm pretty sure you could use GUIDs and not have to worry about them filling up.
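
One way to generate them automatically, sketched with a BEFORE INSERT trigger (table and trigger names are hypothetical; MySQL 8.0.13+ also accepts an expression default such as DEFAULT (UUID())):

    CREATE TABLE docs (
        id   CHAR(36) NOT NULL,  -- UUID in its 36-character text form
        body TEXT,
        PRIMARY KEY (id)
    );

    DELIMITER $$
    CREATE TRIGGER docs_uuid BEFORE INSERT ON docs
    FOR EACH ROW
    BEGIN
        -- Fill in a UUID when the caller didn't supply one.
        IF NEW.id IS NULL OR NEW.id = '' THEN
            SET NEW.id = UUID();
        END IF;
    END$$
    DELIMITER ;

As the answer notes, UUID() values are not sequential, which can scatter inserts across the primary-key index.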

三生池水覆流年 2024-07-22 22:31:52

You could also use chars/varchars for your key columns and use GUIDs for your keys. I don't know whether that incurs a performance penalty compared to integer primary keys, though.

裸钻 2024-07-22 22:31:52

If BIGINT is not sufficient for you, use it in your table anyway, and when the number of entries reaches the BIGINT limit, create another table and start again from 0. You will then have two tables storing the same type of data.
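
If you go that route, reads have to span both tables, and the pair (table number, id) effectively becomes the unique key, which is the same idea as the (a, b) scheme in the question. A sketch (table names hypothetical):

    SELECT 1 AS generation, id, payload FROM entries_1
    UNION ALL
    SELECT 2 AS generation, id, payload FROM entries_2;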
