Database migration via PHP

Published 2024-09-08 19:34:43 · 246 words · 1 view · 0 comments

I have a MySQL database that I am trying to migrate into another database. They have different schemas, and I have written a PHP script for each table of the old database in order to populate its data into the new one. The script works just fine, but the problem is that it does not move all the data: for example, all of a table's rows are selected and then inserted into the new table, but only half of them make it. The way I am doing it is opening a database connection, doing SELECT * and putting the result in an associative array. Then I close the connection, connect to the other database, go through each element of the array, and insert it into the new table. Is there a limit to how big an array can be? What is wrong here?


Comments (8)

一身骄傲 2024-09-15 19:34:43

You should read the rows from the first database in chunks (of 1000 rows, for example), write those rows to the second database, clear the array (with unset() or by assigning an empty array), and repeat the process until you have read all the rows.
This overcomes the memory limitation.

Another problem might be that the script runs for too long (if the table is very large), so try the function set_time_limit(). It resets the timeout after which a script is terminated; I suggest calling it after processing each chunk.
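A minimal sketch of this chunked approach using mysqli. The connection details, table names (`old_table`, `new_table`), and columns (`id`, `col_a`, `col_b`) are placeholders; adjust them to your actual schema:

```php
<?php
// Placeholder credentials -- replace with your own.
$src = new mysqli('localhost', 'user', 'pass', 'old_db');
$dst = new mysqli('localhost', 'user', 'pass', 'new_db');

$chunkSize = 1000;
$offset    = 0;

// Prepared statement for the target table (placeholder columns).
$insert = $dst->prepare('INSERT INTO new_table (col_a, col_b) VALUES (?, ?)');

do {
    // Read one chunk; ORDER BY a key column keeps the chunks stable.
    $result = $src->query(
        "SELECT col_a, col_b FROM old_table ORDER BY id LIMIT $offset, $chunkSize"
    );
    $rows = $result->fetch_all(MYSQLI_ASSOC);
    $result->free();                  // release this chunk's memory

    foreach ($rows as $row) {
        $insert->bind_param('ss', $row['col_a'], $row['col_b']);
        $insert->execute();
    }

    set_time_limit(60);               // reset the execution timer per chunk
    $offset += $chunkSize;
} while (count($rows) === $chunkSize); // a short chunk means we are done
```

Because each chunk is freed before the next is fetched, peak memory stays around one chunk's worth of rows regardless of the table size.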

小草泠泠 2024-09-15 19:34:43

First of all, I don't see the point in writing a script to do this. Why not just get an SQL dump from phpMyAdmin and edit it so that it fits the other database? Or are the schemas really that different?

But to answer your question: my first thought, like others have already said, is that the problem is the time limit. Before you try to do something about it, check the value of max_execution_time in php.ini (usually about 30 seconds) and how long the script actually runs. If it terminates after roughly 30 seconds (or whatever max_execution_time is set to), then that is most likely the problem, although PHP should throw an error (or at least a warning).

I don't think there is a limit on the size of an array in PHP. However, there is a directive in php.ini, namely memory_limit, that defines how much memory a script may use.

If you have access to your php.ini file, I suggest setting both max_execution_time and memory_limit to higher values. If you don't have access to php.ini, you won't be able to change the memory_limit directive; you will have to work around it, for example by using LIMIT in your SQL. Be sure to unset the variables you have used, or you could run into the same problem.
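For example, the two php.ini directives mentioned above might be raised like this (the values are illustrative; pick what your server can afford):

```ini
; php.ini -- illustrative values
max_execution_time = 300   ; seconds before a script is killed (0 = no limit for CLI)
memory_limit = 512M        ; per-script memory cap
```

If you cannot edit php.ini, `ini_set('memory_limit', '512M')` and `set_time_limit(300)` may still work at runtime, depending on how the host has locked things down.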

凡尘雨 2024-09-15 19:34:43

You may have constraints in the target database that are rejecting some of your attempted inserts.
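One way to see whether this is happening is to check each insert's result instead of ignoring it. A sketch with mysqli, where `$dst` and `$insertSql` are placeholders for your target connection and insert statement:

```php
<?php
// $dst is an open mysqli connection to the target database (placeholder).
// $insertSql is one of the INSERT statements your script builds (placeholder).
if (!$dst->query($insertSql)) {
    // A failed constraint (duplicate key, NOT NULL, foreign key, ...)
    // shows up here instead of silently dropping the row.
    error_log("Insert failed: {$dst->error} -- SQL: $insertSql");
}
```

If the log fills up with duplicate-key or NOT NULL violations, the missing half of the data is being rejected by the new schema rather than lost by PHP.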

囍孤女 2024-09-15 19:34:43

Why not do this via SQL scripts?

If you prefer to do it via PHP, you could open connections to both databases and insert into the target as you read from the source. That way you avoid using too much memory.
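A sketch of that streaming approach: keep both connections open and use an unbuffered result, so rows flow from source to target one at a time instead of being collected into a PHP array first (credentials, table, and column names are placeholders):

```php
<?php
// Placeholder credentials -- replace with your own.
$src = new mysqli('localhost', 'user', 'pass', 'old_db');
$dst = new mysqli('localhost', 'user', 'pass', 'new_db');

$insert = $dst->prepare('INSERT INTO new_table (col_a, col_b) VALUES (?, ?)');

// MYSQLI_USE_RESULT streams rows from the server instead of
// buffering the whole result set in PHP memory.
$result = $src->query('SELECT col_a, col_b FROM old_table', MYSQLI_USE_RESULT);
while ($row = $result->fetch_assoc()) {
    $insert->bind_param('ss', $row['col_a'], $row['col_b']);
    $insert->execute();
}
$result->free();
```

Note that with an unbuffered result the source connection is busy until the result is freed, which is why the inserts must go through the second connection.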

時窥 2024-09-15 19:34:43

Using PHP for the transform/convert logic is a possibility. I would do it if you are doing complex transformations and your PHP skills are much better than your MySQL skills.

If you need more memory and time in your PHP script, use:

memory_limit = 2048M
max_execution_time = 3600

This gives you 2 GB of possible space for the array and about an hour of processing time. But if your database really is that big, it would be much (really, a lot) faster to:

1. Use mysqldump to make a dump of your source server. Check it here: http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html

2. Upload the dump file and import it. There are plenty of examples on the MySQL documentation page (look in the comments, too).

After this you can transform your database through CREATE/SELECT statements:

CREATE TABLE one SELECT * FROM two;

As an alternative you can use UPDATE statements. What is best depends heavily on the kind of job you are doing.
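For instance, a schema-changing copy can rename and combine columns in one CREATE/SELECT statement (the table and column names here are made up for illustration):

```sql
-- Copy old_db.customers into new_db.clients with a different column layout.
CREATE TABLE new_db.clients
SELECT id,
       CONCAT(first_name, ' ', last_name) AS full_name,
       email
FROM old_db.customers;
```

The new table's columns take their names and types from the SELECT list, so most simple renames and merges need no PHP at all.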

Good luck!

寒冷纷飞旳雪 2024-09-15 19:34:43

It would be preferable to do a MySQL dump at the command line:

mysqldump -a -u USER_NAME -p SOURCE_DATABASE_NAME > DATA.mysql

You can also gzip the file to make it smaller for transfer to another server:

gzip DATA.mysql

After transfer, unzip the file:

gunzip -f DATA.mysql.gz

And import it:

mysql -u USER_NAME -p TARGET_DATABASE_NAME < DATA.mysql

小猫一只 2024-09-15 19:34:43

Your server (as all servers do) will have a memory limit for PHP; if you use more than the assigned limit, the script will fail.

Is it possible to just dump the current MySQL database into text files, perform find-and-replace or regexp-based replacements to change the schemas within the text files, and then reload the amended text files into MySQL to complete the change? If this is a one-off migration, it may be a better way to do it.

关于从前 2024-09-15 19:34:43

You may be running into PHP's execution time or memory limits. Make sure the appropriate settings in php.ini are high enough to allow the script to finish executing.
