Maximum table size for a MySQL database

Posted on 2024-07-04 11:28:34

What is the maximum size for a MySQL table? Is it 2 million rows at 50GB? 5 million rows at 80GB?

At the higher end of the size scale, do I need to think about compressing the data? Or perhaps splitting the table if it grew too big?

Comments (2)

一紙繁鸢 2024-07-11 11:28:34

About your first question: the effective maximum size is usually determined by the operating system, specifically by the largest file size MySQL Server is able to create, not by MySQL Server itself. Those limits play a big role in table size limits, and MyISAM handles this differently from InnoDB, so any table will be subject to those limits.
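
To see how close an existing table is to any such file-size ceiling, the on-disk footprint can be read from information_schema. A minimal sketch, assuming a placeholder schema name:

    -- Approximate on-disk size per table, in GB ('your_database' is a placeholder).
    SELECT table_name,
           engine,
           table_rows,   -- for InnoDB this is only an estimate
           ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
    FROM   information_schema.tables
    WHERE  table_schema = 'your_database'
    ORDER  BY (data_length + index_length) DESC;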

If you use InnoDB you will have more options for managing table sizes; resizing the tablespace is an option in this case, so if you plan to resize it, that is the way to go. Take a look at the "The table is full" error page.
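
For reference, a hedged sketch of how you might check the relevant settings (the values shown in the comments are examples, not recommendations): with innodb_file_per_table enabled each table lives in its own auto-extending .ibd file, and the shared system tablespace can be configured to auto-extend as well.

    -- Is each InnoDB table stored in its own auto-extending .ibd file?
    SHOW VARIABLES LIKE 'innodb_file_per_table';

    -- How is the shared system tablespace defined? The 'autoextend' attribute
    -- on the last file lets it grow, e.g. in my.cnf:
    --   innodb_data_file_path = ibdata1:1G:autoextend
    SHOW VARIABLES LIKE 'innodb_data_file_path';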

I am not sure what the real maximum record count per table would be without all the necessary information (OS, table type, columns, the data type and size of each, and so on), and I am not sure that figure is easy to calculate, but in a couple of cases I have seen simple tables with around 1 billion records, and MySQL didn't give up.

微凉徒眸意 2024-07-11 11:28:34

I once worked with a very large (Terabyte+) MySQL database. The largest table we had was literally over a billion rows.

It worked. MySQL processed the data correctly most of the time. It was extremely unwieldy though.

Just backing up and storing the data was a challenge. It would take days to restore the table if we needed to.

We had numerous tables in the 10-100 million row range. Any significant joins to the tables were too time consuming and would take forever. So we wrote stored procedures to 'walk' the tables and process joins against ranges of 'id's. In this way we'd process the data 10-100,000 rows at a time (Join against id's 1-100,000 then 100,001-200,000, etc). This was significantly faster than joining against the entire table.
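
A minimal sketch of that walking approach (the table, column, and procedure names below are hypothetical, and the 100,000-row batch size is just an example):

    DELIMITER //

    -- Hypothetical procedure: process a join one id range at a time
    -- instead of joining the entire table in a single statement.
    CREATE PROCEDURE walk_join_in_batches()
    BEGIN
        DECLARE start_id BIGINT DEFAULT 1;
        DECLARE max_id   BIGINT DEFAULT 0;
        DECLARE batch    BIGINT DEFAULT 100000;

        SELECT MAX(id) INTO max_id FROM big_table;

        WHILE start_id <= max_id DO
            -- Join only the current slice of ids, then move the window forward.
            INSERT INTO results_table (id, value)
            SELECT b.id, o.value
            FROM   big_table b
            JOIN   other_table o ON o.big_id = b.id
            WHERE  b.id BETWEEN start_id AND start_id + batch - 1;

            SET start_id = start_id + batch;
        END WHILE;
    END //

    DELIMITER ;

Slicing on the primary key keeps each batch a simple range scan, which is why this stays fast even as the table grows.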

Using indexes on very large tables that aren't based on the primary key is also much more difficult. MySQL stores indexes in two pieces: it stores indexes (other than the primary index) as indexes to the primary key values. So indexed lookups are done in two parts: first MySQL goes to an index and pulls from it the primary key values that it needs to find, then it does a second lookup on the primary key index to find where those values are.
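
A small illustration of that two-step lookup (the table and index names are made up for this example): the secondary index maps user_id to primary key values, so a query that needs other columns does a second lookup in the clustered index, while a covering index that contains every referenced column avoids that second step.

    -- Hypothetical table: the PRIMARY KEY is InnoDB's clustered index.
    CREATE TABLE events (
        id         BIGINT       NOT NULL AUTO_INCREMENT,
        user_id    BIGINT       NOT NULL,
        created_at DATETIME     NOT NULL,
        payload    VARCHAR(255),
        PRIMARY KEY (id),
        KEY idx_user (user_id)          -- secondary index: (user_id) -> id
    );

    -- Step 1: idx_user finds the matching id values.
    -- Step 2: each id is looked up in the clustered index to fetch payload.
    SELECT payload FROM events WHERE user_id = 42;

    -- A covering index holds everything this query touches, so step 2 disappears.
    ALTER TABLE events ADD KEY idx_user_created (user_id, created_at);
    SELECT created_at FROM events WHERE user_id = 42;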

The net of this is that for very large tables (1-200 million plus rows) indexing against tables is more restrictive. You need fewer, simpler indexes. And doing even simple select statements that are not directly on an index may never come back. WHERE clauses must hit indexes or forget about it.
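
A quick way to act on that advice (continuing with the hypothetical events table from the sketch above): run EXPLAIN before letting a query loose on a huge table and confirm it is not a full scan.

    -- type 'ref' with key = idx_user means the WHERE clause hits the index.
    EXPLAIN SELECT payload FROM events WHERE user_id = 42;

    -- A predicate on an unindexed column ('payload' here) forces type 'ALL',
    -- a full table scan, which on a 100M+ row table may effectively never return.
    EXPLAIN SELECT payload FROM events WHERE payload LIKE '%error%';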

But all that being said, things did actually work. We were able to use MySQL with these very large tables and do calculations and get answers that were correct.
