Easy way to calculate how close an auto_increment value is to its maximum?

Posted 2024-08-31 03:03:32

So yesterday we had a table that has an auto_increment PK for a smallint that reached its maximum. We had to alter the table on an emergency basis, which is definitely not how we like to roll.

Is there an easy way to report on how close each auto_increment field that we use is to its maximum? The best way I can think of is to do a SHOW CREATE TABLE statement, parse out the size of the auto-incremented column, then compare that to the AUTO_INCREMENT value for the table.

On the other hand, given that the schema doesn't change very often, should I store information about the columns' maximum values and get the current AUTO_INCREMENT with SHOW TABLE STATUS?
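
For one known table the comparison itself is easy enough by hand. For instance, against a hypothetical orders table whose key is smallint unsigned (ceiling 65535), something like this already tells me how much headroom is left:

SELECT AUTO_INCREMENT,
       AUTO_INCREMENT / 65535 AS fraction_used   -- 65535 = smallint unsigned max
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'orders';                     -- hypothetical table name

What I'm after is a way to do that across every auto_increment column without hard-coding each maximum.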

Comments (4)

奢望 2024-09-07 03:03:32

Your question seems perfectly reasonable to me. You should be able to get the current auto-increment values for each table from information_schema. I don't think the max values for the various int types are available as constants in MySQL, but Roland Bouman demonstrated a simple way to generate them in MySQL:

In SQL how do I get the maximum value for an integer?

If you put that data into a table, then you can write a single SQL query to get the current auto-increment status of all of your tables so you can see how close you are to running out of values.

Here's a quick-and-dirty example to get you started:

create temporary table max_int_values
(
  int_type  varchar(10) not null,
  extra     varchar(8) not null default '',
  max_value bigint unsigned not null,
  primary key (int_type, max_value),
  key int_type (int_type),
  key max_value (max_value)
);

-- ~0 is a bigint unsigned with all 64 bits set; shifting it right leaves the
-- maximum value for each type (signed types shift one extra bit for the sign).
insert into max_int_values (int_type, extra, max_value) values
  ('tinyint',   '',         ~0 >> 57),
  ('tinyint',   'unsigned', ~0 >> 56),
  ('smallint',  '',         ~0 >> 49),
  ('smallint',  'unsigned', ~0 >> 48),
  ('mediumint', '',         ~0 >> 41),
  ('mediumint', 'unsigned', ~0 >> 40),
  ('int',       '',         ~0 >> 33),
  ('int',       'unsigned', ~0 >> 32),
  ('bigint',    '',         ~0 >> 1),
  ('bigint',    'unsigned', ~0);

-- Match each auto_increment column to its row in max_int_values by the
-- type-name prefix of column_type and by signedness.
select t.table_schema, t.table_name, c.column_name, c.column_type,
  t.auto_increment, m.max_value,
  round((t.auto_increment / m.max_value) * 100, 2) as pct_of_values_used,
  m.max_value - t.auto_increment as values_left
from information_schema.tables t
  inner join information_schema.columns c
    on c.table_schema = t.table_schema and c.table_name = t.table_name
  inner join max_int_values m
    on m.int_type = substr(c.column_type, 1, length(m.int_type))
    and ((m.extra like '%unsigned') = (c.column_type like '%unsigned'))
where c.extra = 'auto_increment'
order by pct_of_values_used;
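
If you only care about the tables that are actually getting close, you can filter on a threshold instead of scanning the whole list. A rough variation on the query above, assuming the max_int_values temporary table from this answer still exists in the session and using an arbitrary 80% cut-off:

select t.table_schema, t.table_name, c.column_name, c.column_type,
  round((t.auto_increment / m.max_value) * 100, 2) as pct_of_values_used
from information_schema.tables t
  inner join information_schema.columns c
    on c.table_schema = t.table_schema and c.table_name = t.table_name
  inner join max_int_values m
    on m.int_type = substr(c.column_type, 1, length(m.int_type))
    and ((m.extra like '%unsigned') = (c.column_type like '%unsigned'))
where c.extra = 'auto_increment'
  and (t.auto_increment / m.max_value) > 0.80   -- arbitrary threshold, tune to taste
order by pct_of_values_used desc;
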
一萌ing 2024-09-07 03:03:32

From openark, here is a single query for checking auto-increment capacity:

SELECT
  TABLE_SCHEMA,
  TABLE_NAME,
  COLUMN_NAME,
  DATA_TYPE,
  COLUMN_TYPE,
  IF(
    LOCATE('unsigned', COLUMN_TYPE) > 0,
    1,
    0
  ) AS IS_UNSIGNED,
  (
    CASE DATA_TYPE
      WHEN 'tinyint' THEN 255
      WHEN 'smallint' THEN 65535
      WHEN 'mediumint' THEN 16777215
      WHEN 'int' THEN 4294967295
      WHEN 'bigint' THEN 18446744073709551615
    END >> IF(LOCATE('unsigned', COLUMN_TYPE) > 0, 0, 1)
  ) AS MAX_VALUE,
  AUTO_INCREMENT,
  AUTO_INCREMENT / (
    CASE DATA_TYPE
      WHEN 'tinyint' THEN 255
      WHEN 'smallint' THEN 65535
      WHEN 'mediumint' THEN 16777215
      WHEN 'int' THEN 4294967295
      WHEN 'bigint' THEN 18446744073709551615
    END >> IF(LOCATE('unsigned', COLUMN_TYPE) > 0, 0, 1)
  ) AS AUTO_INCREMENT_RATIO
FROM
  INFORMATION_SCHEMA.COLUMNS
  INNER JOIN INFORMATION_SCHEMA.TABLES USING (TABLE_SCHEMA, TABLE_NAME)
WHERE
  TABLE_SCHEMA NOT IN ('mysql', 'INFORMATION_SCHEMA', 'performance_schema')
  AND EXTRA='auto_increment'
;

And of course you can add an ORDER BY AUTO_INCREMENT_RATIO DESC with perhaps a LIMIT to easily pick out the ones closest to their limit.
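
For example, a trimmed-down version of that query with the ORDER BY and LIMIT added (the cut-off of 10 is an arbitrary choice) might look like this:

SELECT
  TABLE_SCHEMA,
  TABLE_NAME,
  COLUMN_NAME,
  COLUMN_TYPE,
  AUTO_INCREMENT,
  AUTO_INCREMENT / (
    CASE DATA_TYPE
      WHEN 'tinyint' THEN 255
      WHEN 'smallint' THEN 65535
      WHEN 'mediumint' THEN 16777215
      WHEN 'int' THEN 4294967295
      WHEN 'bigint' THEN 18446744073709551615
    END >> IF(LOCATE('unsigned', COLUMN_TYPE) > 0, 0, 1)
  ) AS AUTO_INCREMENT_RATIO
FROM
  INFORMATION_SCHEMA.COLUMNS
  INNER JOIN INFORMATION_SCHEMA.TABLES USING (TABLE_SCHEMA, TABLE_NAME)
WHERE
  TABLE_SCHEMA NOT IN ('mysql', 'INFORMATION_SCHEMA', 'performance_schema')
  AND EXTRA = 'auto_increment'
ORDER BY AUTO_INCREMENT_RATIO DESC  -- fullest columns first
LIMIT 10;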

红颜悴 2024-09-07 03:03:32

Just make it int unsigned and forget about its value forever.
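
If you do go that route, it's a single (table-rewriting, possibly slow) ALTER. A minimal sketch with hypothetical table and column names; int unsigned raises the ceiling from 65,535 to 4,294,967,295:

-- hypothetical names; the column keeps its primary key and auto_increment attributes
ALTER TABLE orders
  MODIFY COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT;

Bear in mind that any foreign key columns referencing this ID need the same type change.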

逆流 2024-09-07 03:03:32

One solution I was able to find was using an advisor.

LINK

The advisor monitors your database for changes to the schema, memory usage, performance, etc.

But it also allows you to build custom rules tailored to the needs of your particular MySQL environment. In essence you could create your own monitoring advisor, which I have noticed someone doing, though they did not explain how. That way you could send a warning when your limit is about to be reached.

A solution to reaching the limit could be to create another column with a second ID, making the PK a composite primary key, effectively extending the number of IDs you have without having to change the data type.

For example:

Add a second PK ID field called 'Whatever'.

Let's say your limit is 1024 rows added under auto increment: the second column increments once the first one has reached its limit, resetting the first one back to 1 and having the second jump to 2:

1023 1
1024 1
1    2
2    2
etc.

References would need to be made in the other tables, but it doesn't require you to mess with the data type. This is by far not the best solution to the problem, and you might have thought of something better already, but I figure I might add it just in case it makes you think of another solution.
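
For illustration only, here is a sketch of what such a table might look like, with hypothetical names. MySQL will not cycle the first column and bump the second one on its own, so the wrap-around would have to be handled by the application (or a trigger):

CREATE TABLE whatever (
  id     SMALLINT UNSIGNED NOT NULL,  -- cycles 1..65535, managed by the application
  id_set SMALLINT UNSIGNED NOT NULL,  -- bumped each time id wraps back to 1
  PRIMARY KEY (id, id_set)
);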

Either way, however you decide to fix or monitor the limit being reached, another database should be planned out and created. A monitor, another field, or however you managed to patch the database in a hurry is only a band-aid on the real problem at hand, because if you have other fields like that with small limits, you don't want to be stuck on the day more than one field decides to reach its limit at the same time.

Hope this helps somewhat.
