Dumping a MySQL table to CSV (stdout), then tunnelling the output to another server

Posted 2024-10-17 19:10:23

I'm trying to move a database table to another server; the complication is that the machine currently hosting the table has little to no disk space left, so I'm looking for a solution that can work over the network.

I have tried mysqldumping the table on the src machine and piping it straight into mysql on the dest, but the table has 48 million rows, and even with autocommit turned off and innodb_flush_log_at_trx_commit set to 2, I am getting some dog-slow times.

mysqldump -uuser -ppass --opt dbname dbtable  | mysql -h remote.server  -uuser -ppass dbname
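
(To spell out what I mean by those settings, here is a rough sketch; SET GLOBAL needs a privileged account, remote.server and the credentials are placeholders, and whether relaxing durability is acceptable obviously depends on your setup.)

# Relax durability on the destination for the duration of the import.
mysql -h remote.server -uroot -p -e "SET GLOBAL innodb_flush_log_at_trx_commit = 2;"

# The piped transfer; --no-autocommit wraps each table's INSERTs in a single
# transaction instead of committing per statement.
mysqldump -uuser -ppass --opt --no-autocommit dbname dbtable \
  | mysql -h remote.server -uuser -ppass dbname

# Put the default back once the import has finished.
mysql -h remote.server -uroot -p -e "SET GLOBAL innodb_flush_log_at_trx_commit = 1;"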

I then tried to mysqldump the rows a million at a time, scp them to the dest machine and do a mysql < file.sql, but this seemed to get progressively slower. I reached the 7th file (7,000,000 rows), and the next million rows took 240 minutes to import.

I did a little bit of reading around, and the MySQL docs suggest that CSV-style LOAD DATA INFILE imports are roughly 20x faster than INSERTs. So now I'm stuck.

I can work out how to export as CSV using the standard SQL syntax:

SELECT *
INTO OUTFILE '/tmp/tmpfile'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
ESCAPED BY '\\'
LINES TERMINATED BY '\n'
FROM table;

but this obviously doesn't work, as it would quickly chew up my already low disk space. So I was looking for a switch that lets mysqldump dump CSVs to stdout. From what I have read, it doesn't appear to be possible. The only way I can think of doing it is creating a FIFO and pointing mysql to dump there, then writing a script that reads the FIFO at the same time and sends it to the dest server. I'm not really sure of the syntax for syncing to the other server though, which brings me to my next problem.

Assuming I can get mysql to dump CSVs to stdout rather than a file; how do I then pipe that output to the dest server? I'm happy if I can simply get a single csv file on the dest server as it has more space; because then I can simply use mysqlimport from the file.
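
The closest thing I've found to "CSV to stdout" so far is the mysql client's batch mode, which writes tab-separated rows to stdout and can be streamed straight over ssh. A sketch (untested on the full 48m rows; the host, path and credentials are placeholders):

# -B gives tab-separated batch output, -N drops the header row, and --quick
# streams rows instead of buffering the whole result set in memory.
mysql -uuser -ppass -B -N --quick dbname -e "SELECT * FROM dbtable" \
  | gzip \
  | ssh user@remote.server 'gunzip > /data/dbtable.tsv'

That gives tab-separated rather than comma-separated output, so a LOAD DATA on the destination would use FIELDS TERMINATED BY '\t' (which is also its default).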

Which brings me to my next point... I would love to be able to do this:

mysqldump -uuser -ppass --opt dbname --tab /dev/stdout dbtable  | mysqlimport -h remote.server  -uuser -ppass dbname 

But it looks like mysqlimport doesn't support piping to it; you have to pass it a file.
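
Since mysqlimport is essentially a wrapper that issues LOAD DATA INFILE for you, one workaround would be to run the statement myself on the dest box once the file is there (a sketch, assuming the /data/dbtable.tsv path from above):

# Run on the destination box; LOCAL makes the client read the file, which
# avoids needing the FILE privilege, but local_infile has to be enabled.
mysql -uuser -ppass dbname -e "
  LOAD DATA LOCAL INFILE '/data/dbtable.tsv'
  INTO TABLE dbtable
  FIELDS TERMINATED BY '\t'
  LINES TERMINATED BY '\n';"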

Just had a thought while typing this:

Would it be possible to use the FIFO method listed above, then get mysqlimport to read from the FIFO and insert into the dest server? I guess the only problem there would be that mysql can dump faster than the dest server can import, subsequently filling up the src server.
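
Something along these lines is what I have in mind; completely untested, and I don't know whether every MySQL version is happy reading from a FIFO (the paths and credentials are made up). A FIFO also blocks the writer when the reader falls behind, so in theory the src disk never fills up:

# mysqlimport derives the table name from the file name, so the FIFO has to
# be named after the table.
mkfifo /tmp/dbtable.txt

# Start the reader first so it blocks waiting for data; --local makes the
# client on the src box read the FIFO and ship the rows to the dest server.
mysqlimport --local -h remote.server -uuser -ppass dbname /tmp/dbtable.txt &

# Writer: stream the rows into the FIFO with the client instead of
# SELECT ... INTO OUTFILE, so nothing is written to the src disk.
mysql -uuser -ppass -B -N --quick dbname -e "SELECT * FROM dbtable" > /tmp/dbtable.txt

wait
rm /tmp/dbtable.txt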

I'm a bit lost on how to do a mysql CSV dump to stdout and transfer it over the net to a dest server (preferably importing at the same time, but happy to just dump as a file on the dest).

Any help would be greatly appreciated!

Cheers,
Ben


UPDATE: I'm using InnoDB tables, and I can't shut the src box down for longer than 10 minutes.


UPDATE: I am now using sshfs to mount a dir from the dest onto the src and getting mysql to dump a csv into that folder - seems to work perfectly. Then it's just a matter of using mysqlimport to load it into the database at the dest.
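
For anyone wanting to copy this, it's roughly the following (paths are placeholders; the mysqld process needs write access to the mount, so sshfs may need -o allow_other, and SELECT ... INTO OUTFILE refuses to overwrite an existing file):

# On the src box: mount a directory from the dest over sshfs.
sshfs -o allow_other user@remote.server:/data/import /mnt/dest

# Dump straight into the mounted directory, so nothing lands on the src disk.
mysql -uuser -ppass dbname -e "
  SELECT * INTO OUTFILE '/mnt/dest/dbtable.csv'
  FIELDS TERMINATED BY ',' ENCLOSED BY '\"'
  LINES TERMINATED BY '\n'
  FROM dbtable;"

# On the dest box: load the file with matching field options.
mysqlimport --local --fields-terminated-by=',' --fields-enclosed-by='"' \
  -uuser -ppass dbname /data/import/dbtable.csv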


UPDATE: So now I have managed to get the data onto the dest box - the import is still as slow as if it were done with INSERTs. 9 million rows imported in 12 hours. Something isn't right here. Any ideas?


UPDATE: For those interested... This doesn't work either: http://forums.mysql.com/read.php?22,154964

Comments (2)

二智少女猫性小仙女 2024-10-24 19:10:23

You would need mysqlhotcopy, which supports scp; of course, the physical distance between the servers is going to cause traffic problems.

早茶月光 2024-10-24 19:10:23

Turns out the problem was with the host I was inserting into. Not enough RAM + slow machine caused the queries to back up.
