Converting a large MyISAM table to InnoDB
I'm trying to convert a 10-million-row MySQL MyISAM table to InnoDB.
I tried ALTER TABLE, but that made my server hang, so I killed MySQL manually. What is the recommended way to do this?
Options I've thought about:
1. Creating a new InnoDB table and inserting part of the data at a time.
2. Dumping the table to a text file and then loading it with LOAD DATA INFILE (a rough sketch of this follows the list).
3. Trying again and leaving the server unresponsive until it finishes (I tried for 2 hours; the server is a production server, so I would rather keep it running).
4. Duplicating the table, removing its indexes, converting it, and then re-adding the indexes.
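For concreteness, a minimal sketch of what option 2 could look like, assuming a hypothetical table named mytable; the table names and file path are illustrative only and not part of the original question:

-- Illustrative sketch of option 2: dump the MyISAM table to a text file,
-- then reload it into an InnoDB copy. Names and path are hypothetical.
CREATE TABLE mytable_innodb LIKE mytable;
ALTER TABLE mytable_innodb ENGINE=InnoDB;

-- Export the data (requires the FILE privilege; the path may be
-- restricted by secure_file_priv).
SELECT * FROM mytable
INTO OUTFILE '/tmp/mytable.txt';

-- Reload the data into the InnoDB table.
LOAD DATA INFILE '/tmp/mytable.txt'
INTO TABLE mytable_innodb;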
2 Answers
Changing the engine of a table requires rewriting the table, which is why it is unavailable for so long. Removing the indexes, converting, and then re-adding the indexes may speed up the initial conversion, but adding an index creates a read lock on the table, so the end result is the same.

Making a new table and transferring the data is the way to go. Usually this is done in two parts: first copy the records, then replay any changes that were made while the records were being copied. If you can afford to disable inserts/updates on the table while still allowing reads, this is not a problem. If not, there are several possible solutions.

One of them is to use Facebook's online schema change tool. Another option is to have the application write to both tables while the records are being migrated, then switch over to the new table only. This depends on the application code, and the crucial part is handling unique keys/duplicates, since in the old table you may update a record while in the new table you need to insert it. (Transaction isolation level may also play a crucial role here; lower it as much as you can.) The "classic" way is to use replication which, as far as I know, is also done in two parts: start replication while recording the master position, import a dump of the database on the second server, then start it as a slave so it catches up with the changes.
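A minimal sketch of the "copy the records, then switch" part of this, assuming a hypothetical table mytable with an integer primary key id; names and chunk sizes are illustrative, and the change-replay step is left out:

-- Create an empty InnoDB copy of the table (names are placeholders).
CREATE TABLE mytable_new LIKE mytable;
ALTER TABLE mytable_new ENGINE=InnoDB;

-- Copy the existing records in chunks, repeating with increasing id ranges
-- (usually driven from a script) so the source table is never locked for long.
INSERT INTO mytable_new
SELECT * FROM mytable
WHERE id >= 0 AND id < 500000;

-- After the copy has caught up and pending changes have been replayed,
-- swap the tables in a single atomic rename.
RENAME TABLE mytable TO mytable_old, mytable_new TO mytable;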
Have you tried ordering your data by the primary key (PK) first? That should speed up the conversion; a sketch of what this could look like follows.
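Presumably this means physically sorting the MyISAM table by its primary key before converting it. A minimal sketch, assuming a table mytable with primary key id (both names are placeholders):

-- Physically sort the rows by the primary key first (MyISAM only).
ALTER TABLE mytable ORDER BY id;

-- Then convert the engine; InnoDB clusters data by the primary key,
-- so inserting rows that are already in key order is faster.
ALTER TABLE mytable ENGINE=InnoDB;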