How can I increase the time before a Read Timeout error occurs?

Posted 2024-08-26 18:08:07


I've written a PHP script that takes a long time to execute [image processing for thousands of pictures]. It's a matter of hours, maybe 5.

After 15 minutes of processing, I get the error:


ERROR
The requested URL could not be retrieved

The following error was encountered while trying to retrieve the URL: The URL which I clicked

Read Timeout

The system returned: [No Error]

A Timeout occurred while waiting to read data from the network. The network or server may be down or congested. Please retry your request.

Your cache administrator is webmaster.


What I need is to enable that script to run for much longer.

Now, here is all the technical info:
I'm writing in PHP and using the Zend Framework. I'm using Firefox. The long-running script is triggered by clicking a link. Obviously, since the script hasn't finished, I still see the web page the link was on, and the browser shows "Waiting for ...".
After 15 minutes the error occurs.

I tried making changes in Firefox through about:config, but without any success. I don't know; the changes might be needed somewhere else.

So, any ideas?

Thanks ahead.


Comments (5)

左秋 2024-09-02 18:08:07


set_time_limit(0) will only affect the server-side running of the script. The error you're receiving is purely browser-side. You have to send SOMETHING to keep the browser from deciding the connection is dead; even a single character of output (followed by a flush() to make sure it actually gets sent out over the wire) will do. Maybe once per processed image, or on a fixed time interval (if the last character was sent more than 5 minutes ago, output another one).

If you don't want any intermediate output, you could do ignore_user_abort(TRUE), which will allow the script to keep running even if the connection gets shut down from the client side.
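A minimal sketch of the keep-alive idea above, assuming a hypothetical process_image() helper and an $imagePaths array; the structure, not the names, is the point:

```php
<?php
// Sketch only: process_image() and $imagePaths are hypothetical placeholders.

set_time_limit(0);        // lift PHP's own execution limit
ignore_user_abort(true);  // keep running even if the client goes away

// Disable output buffering so each small write actually hits the network.
while (ob_get_level() > 0) {
    ob_end_flush();
}

foreach ($imagePaths as $path) {
    process_image($path); // the long-running work for one image

    echo '.';             // one character per image keeps the connection alive
    flush();              // push it over the wire instead of buffering it
}

echo "done\n";
```

Depending on the server setup, you may also need to disable output compression or server-side buffering for the flushed bytes to actually reach the browser.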

别在捏我脸啦 2024-09-02 18:08:07


If the process runs for hours, then you should probably look into batch processing. So instead of starting the image processing directly, you just store a request for it (in a file, a database, or whatever works for you). That request is then picked up by a scheduled (cron) process running on the server, which does the actual processing (this can be a PHP script that calls set_time_limit(0)). When processing is finished you can signal the user (by mail or any other way that works for you) that the processing is done.
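As a rough illustration of that flow, split into two files; the spool directory, process_job(), and the mail text are assumptions, not part of the original answer:

```php
<?php
// enqueue.php - the web-facing side: record the request instead of doing it.
// The spool directory is an assumed location; any file or DB store works.

$job = ['images_dir' => '/uploads/batch42', 'requested_at' => time()];
file_put_contents(
    '/var/spool/imagejobs/' . uniqid('job_', true) . '.json',
    json_encode($job)
);
echo 'Your images are queued; you will be mailed when processing is done.';
```

```php
<?php
// worker.php - run from cron, e.g.:  */5 * * * * php /path/to/worker.php
// process_job() is a hypothetical placeholder for the real image processing.

set_time_limit(0);

foreach (glob('/var/spool/imagejobs/*.json') as $file) {
    $job = json_decode(file_get_contents($file), true);
    process_job($job);             // the actual long-running work
    mail('user@example.com', 'Processing finished',
         'Your image processing job is done.');
    unlink($file);                 // drop the completed request
}
```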

他不在意 2024-09-02 18:08:07


If you can split your work into batches, then after processing X images, display a page with some javascript (or a META redirect) on it that opens the link http://server/controller/action/nextbatch/next_batch_id.

Rinse and repeat.
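A sketch of what such a self-chaining batch page could look like; the batch size, the route, and the fetch_batch()/process_image() helpers are illustrative assumptions:

```php
<?php
// Sketch only: fetch_batch() and process_image() are hypothetical helpers.

set_time_limit(0);
$batchId = (int) ($_GET['batch'] ?? 0);
$images  = fetch_batch($batchId, 50);   // up to 50 images per request

foreach ($images as $image) {
    process_image($image);
}

if (count($images) === 50) {
    // A full batch means there is probably more: redirect to the next one.
    $next = $batchId + 1;
    echo '<meta http-equiv="refresh" content="0;url=/controller/action/nextbatch/' . $next . '">';
    echo "Finished batch {$batchId}, continuing...";
} else {
    echo 'All batches processed.';
}
```

Each request then finishes well inside the proxy's timeout, and the browser drives the loop.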

骄兵必败 2024-09-02 18:08:07


Batching the entire process also has the added benefit that once something goes wrong, you don't have to start the entire thing anew.

If you're running on your own server and can get out of safe_mode, then you could also fork background processes to do the actual heavy lifting, independent of the browser's view of things. If you're in a multicore or multiprocessor environment, you can even schedule more than one running process at any time.

We've done something like that for large computation scripts; synchronization of the processes happened over a shared database, but luckily the processes were so independent that the only thing we needed to see was their completion or termination.
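A minimal sketch of the fork-to-background idea, assuming a hypothetical worker.php and an environment where exec() is permitted:

```php
<?php
// Sketch only: detach a worker from the web request so the browser returns
// immediately. Requires exec() to be allowed (no safe_mode / disable_functions).

$jobId = uniqid('job_', true);

// '> /dev/null 2>&1 &' detaches the worker from this request; on a multicore
// machine you can launch several workers this way.
exec('php /path/to/worker.php ' . escapeshellarg($jobId) . ' > /dev/null 2>&1 &');

echo "Started background job {$jobId}.";
```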
