PHP/passthru/mysqldump seems to time out

Posted 2024-08-16 11:22:23

I've got a PHP script that I call to run MySQL database backups to .sql files, TAR/GZip them and e-mail them to me. One of the databases is hosted by a different provider than the one providing the web server. Everything is hosted on Linux/Unix. When I run this command:

$results = exec("mysqldump -h $dbhost -u $dbuser -p$dbpass $dbname > $backupfile", $output, $retval);

(FYI, I've also tried this with system(), passthru() and shell_exec().)

My browser loads the page for 15-20 seconds and then stops without processing. When I look at the server with an FTP client, I can see the resulting file show up a few seconds later and then the file size builds until the database is backed up. So, the backup file is created but the script stops working before the file can be compressed and sent to me.

I've checked the max_execution_time variable in PHP and it's set to 30 seconds (longer than it takes for the page to stop working), and I have set the set_time_limit value to as much as 200 seconds.

Anyone have any idea what's going on here?
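Since the shell redirect sends mysqldump's output to the file and exec() only returns the last line of stdout, a failure inside mysqldump can be invisible to the script. A minimal sketch of making it visible by logging stderr and echoing the exit status; the `true` command is a stand-in for the real mysqldump invocation, which isn't run here:

```shell
# Log stderr and report the exit status so a dump that dies mid-run
# leaves a trace. "true" is a stand-in for the real command, e.g.:
#   mysqldump -h "$dbhost" -u "$dbuser" -p"$dbpass" "$dbname"
DUMP_CMD="true"
$DUMP_CMD > /tmp/backup.sql 2> /tmp/dump.log
echo "exit=$?"   # prints exit=0 when the stand-in succeeds
```

In the PHP version, the same information is available through exec()'s third argument ($retval) plus a `2>/tmp/dump.log` appended to the command string.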

Comments (5)

殊姿 2024-08-23 11:22:23

Are you on shared hosting or are these your own servers? If the former, your hosting provider may have set the max execution time to 15-20 seconds and made it so it cannot be overridden (I had this problem with 1&1 and these types of scripts).

懵少女 2024-08-23 11:22:23

Re-check the execution-time-related parameters with a phpinfo() call... maybe it's all about what Paolo writes.

宛菡 2024-08-23 11:22:23

It could also be a (reverse) proxy that gives up after a certain period of inactivity. Granted, it's a long shot, but try:

// test A
$start = time();
sleep(20);
$stop = time();
echo $start, ' ', $stop;

and

// test B
for($i=0; $i<20; $i++) {
  sleep(1);
  echo time(), "\n";
}

If the first one times out and the second doesn't, I'd call that not proof, but evidence.

虚拟世界 2024-08-23 11:22:23

Maybe the provider has set another resource limit beyond the php.ini setting.
Try

<?php passthru('ulimit -a');

If the command is available it should print a list of resources and their limits, e.g.

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 4095
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4095
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

You may find some settings on your shared server that are more restrictive than these.
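As a narrower check than the full list, the limits most likely to cut off a long-running dump can be queried individually; ulimit is a shell builtin, so this also works through passthru():

```shell
# Show only the limits most relevant to a long mysqldump run:
ulimit -t   # CPU time (seconds)
ulimit -f   # max file size (512-byte blocks)
ulimit -u   # max user processes
```

A low `-t` or `-f` value here would explain a dump that starts fine and dies once it has consumed enough CPU time or written a large enough file.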

南城追梦 2024-08-23 11:22:23
  1. Do a manual dump and diff it against the broken one. This may tell you at which point mysqldump stops/crashes
  2. Consider logging mysqldump output, as in mysqldump ... 2>/tmp/dump.log
  3. Consider executing mysqldump detached so that control is returned to PHP before the dump is finished

On a side note, it is almost always a good idea to use mysqldump -Q (--quote-names).
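Point 3 can be sketched at the shell level: redirect stdout and stderr, close stdin, and background the command with nohup so that the exec() call in PHP returns immediately instead of blocking for the whole dump. Here `sleep 1` stands in for the real mysqldump line:

```shell
# Detached dump sketch: with stdout/stderr redirected, stdin closed and
# the job backgrounded, the calling process does not wait for it.
# "sleep 1" is a stand-in for the real mysqldump invocation.
nohup sleep 1 > /tmp/backup.sql 2> /tmp/dump.log < /dev/null &
echo "dump started, pid $!"
```

The catch is that PHP then no longer knows when the dump has finished, so the compress-and-mail step has to be deferred, for instance to a cron job that picks the file up once it stops growing.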
