When to fix auto-increment gaps in MySQL

Posted 2024-08-15 13:54:10

The database I am working on right now has records being added and removed by the tens of thousands, and because of this there are gaps in the auto-incremented key hundreds of thousands wide, with auto-increment values well into the billions.

These numbers are never stored to reference an individual record, but they are used to reference records when doing on-the-fly calculations.

Is there any reason to remove these gaps and reset the auto-increment counter, or is this inconsequential?

The id field is an unsigned INT; should I increase it to an unsigned BIGINT? From what I understand, right now if it hits 4,294,967,295 it will break.
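One quick way to check how much headroom is left before the UNSIGNED INT ceiling is to read the table's next AUTO_INCREMENT value from information_schema; this is only a sketch, and the schema and table names in it are placeholders:

    -- Check remaining headroom before the UNSIGNED INT ceiling (4,294,967,295).
    -- `mydb` and `records` are hypothetical names; substitute your own.
    SELECT AUTO_INCREMENT,
           4294967295 - AUTO_INCREMENT AS headroom
    FROM information_schema.TABLES
    WHERE TABLE_SCHEMA = 'mydb'
      AND TABLE_NAME   = 'records';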

Comments (2)

错々过的事 2024-08-22 13:54:10

The only reason I'd worry about it is if you find yourself close to that 2^32 limit. If you're not using the column as a row id, then don't even worry about it.

EDIT: If you are using this column for any kind of identifying information, then I'd switch the column over to a GUID or something, because you're gonna get overflow, and then you'll get duplicate values. And that's no bueno.
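If you do go the GUID route, a minimal sketch in MySQL might look like the following; the table and column names are hypothetical, and a real migration would need more care (indexes, backfill strategy, client code):

    -- Rough sketch of switching to a UUID key; `records` and `payload` are
    -- hypothetical names.
    ALTER TABLE records
        ADD COLUMN uuid BINARY(16) NOT NULL;

    -- Backfill existing rows with generated UUIDs.
    UPDATE records
        SET uuid = UNHEX(REPLACE(UUID(), '-', ''));

    -- New rows supply their own UUID at insert time.
    INSERT INTO records (uuid, payload)
    VALUES (UNHEX(REPLACE(UUID(), '-', '')), 'example');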

榆西 2024-08-22 13:54:10

I don't know what the growth rate of your autoincrement field is, but it should be simple math for you to estimate when you will hit the 4294967295 limit.
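For instance, if the counter currently sits around 3 billion and advances by roughly 100,000 per day, that leaves about (4,294,967,295 − 3,000,000,000) / 100,000 ≈ 12,950 days of headroom (the figures here are purely illustrative).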

If you still feel that you need to do something, you have the following options:

  1. Reset the current count to 1. Ideally do this by dropping the column and recreating it. Since you are not using this field for referential integrity, it should be a quick and simple fix, until the next time... (a rough sketch follows this list)
  2. Change the datatype to an unsigned BIGINT. Now you can go up to 18,446,744,073,709,551,615. But you will need more space in the heap to store the increased amount of data, and you have only postponed your problem. (also sketched below)
  3. Change from an autoincrement (INT / BIGINT) to a UUID. Then you can stop worrying about numbers and the nature of infinity, but you will most likely have to change all of your client code.
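As a rough sketch of options 1 and 2 in MySQL (the table and column names `records` / `id` are hypothetical, and any real migration should be tested first):

    -- Option 1: rebuild the auto-increment column to restart the counter at 1.
    -- Dropping the column also drops the index built on it; this assumes no
    -- foreign keys reference it, as stated in the question.
    ALTER TABLE records DROP COLUMN id;
    ALTER TABLE records
        ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;

    -- Option 2: widen the column to BIGINT UNSIGNED
    -- (new ceiling: 18,446,744,073,709,551,615).
    ALTER TABLE records
        MODIFY COLUMN id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;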

On a separate note, I sense a poor decision or two somewhere earlier up the line here.
