A faster alternative to file_get_contents()

Published 2024-08-29 11:21:37


Currently I'm using file_get_contents() to submit GET data to an array of sites, but upon execution of the page I get this error:

Fatal error: Maximum execution time of 30 seconds exceeded

All I really want the script to do is start loading the webpage, and then leave. Each webpage may take up to 5 minutes to load fully, and I don't need it to load fully.

Here is what I currently have:

        foreach($sites as $s) //Create one line to read from a wide array
        {
                file_get_contents($s['url']); // Send to the shells
        }

EDIT: To clear up any confusion, this script is being used to start scripts on other servers that return no data.

EDIT: I'm now attempting to use cURL to do the trick, by setting a timeout of one second to make it send the data and then stop. Here is my code:

        $ch = curl_init($s['url']); //load the urls
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 1); //Only send the data, don't wait.
        curl_exec($ch); //Execute
        curl_close($ch); //Close it off.

Perhaps I've set the wrong option. I'm looking through some manuals as we speak. Just giving you an update. Thanks to all of you who are helping me thus far.

EDIT: Ah, found the problem. I was using CURLOPT_CONNECTTIMEOUT instead of CURLOPT_TIMEOUT. Whoops.

However, now the scripts aren't triggering. They each use ignore_user_abort(TRUE), so I can't understand the problem.

Hah, scratch that. It works now. Thanks a lot, everyone.
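
For reference, here is a minimal sketch of the corrected fire-and-forget loop described in the edits above. The shape of the $sites array is assumed from the earlier snippet, and the one-second value is the timeout mentioned in the question.

<?php
// Corrected loop: CURLOPT_TIMEOUT caps the total request time,
// whereas CURLOPT_CONNECTTIMEOUT only caps the connection phase.
// Assumes $sites is an array of ['url' => ...] entries as above.
foreach ($sites as $s) {
    $ch = curl_init($s['url']);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // capture (and ignore) the response instead of echoing it
    curl_setopt($ch, CURLOPT_TIMEOUT, 1);           // bail out after ~1 second; the remote script keeps running
    curl_exec($ch);
    curl_close($ch);
}
?>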

Comments (6)

与往事干杯 2024-09-05 11:21:37

There are many ways to solve this.

You could use cURL with its curl_multi_* functions to execute the requests asynchronously (sketched below). Or use cURL the usual way but with a timeout limit of 1 second, so your call returns with a timeout while the request still gets executed on the remote end.

If you don't have cURL installed, you could keep using file_get_contents but fork processes (not so cool, but it works) using something like ZendX_Console_Process_Unix, so you avoid waiting between each request.
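
A minimal sketch of the curl_multi_* approach mentioned above, assuming the same $sites array of ['url' => ...] entries from the question:

<?php
// Rough sketch: all requests run concurrently, so the total wall time is
// roughly that of the slowest request rather than the sum of all of them.
$mh = curl_multi_init();
$handles = array();

foreach ($sites as $s) {
    $ch = curl_init($s['url']);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // capture output instead of echoing it
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);           // don't hang forever on a slow remote script
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}

// Drive all the handles until every transfer has finished or timed out.
$running = null;
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh); // wait for activity instead of busy-looping
} while ($running > 0);

foreach ($handles as $ch) {
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);
?>

Since the goal here is only to trigger the remote scripts, the responses are simply discarded.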

请你别敷衍 2024-09-05 11:21:37

As Franco mentioned (and I'm not sure it was picked up on), you specifically want to use the curl_multi functions, not the regular curl ones. They pack multiple curl handles into a single curl_multi handle and execute them simultaneously, returning (or not, in your case) the responses as they arrive.

Example at http://php.net/curl_multi_init

﹏半生如梦愿梦如真 2024-09-05 11:21:37

Re your update that you only need to trigger the operation:

You could try using file_get_contents with a timeout. This would lead to the remote script being called, but the connection being terminated after n seconds (e.g. 1).

If the remote script is configured so it continues to run even if the connection is aborted (in PHP that would be ignore_user_abort), it should work.

Try it out. If it doesn't work, you won't get around increasing your time_limit or using an external executable. But from what you're saying - you just need to make the request - this should work. You could even try setting the timeout to 0, but I wouldn't trust that.

From here:

<?php
// Give up on the HTTP response after 1 second; the remote script keeps running.
$ctx = stream_context_create(array(
    'http' => array(
        'timeout' => 1
    )
));
file_get_contents("http://example.com/", false, $ctx);
?>

To be fair, Chris's answer already includes this possibility: curl also has a timeout switch.
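
For this to work, the answer above assumes the remote endpoints keep running after the caller disconnects. If they are PHP scripts, as the question implies, their first lines might look roughly like this (a hypothetical sketch, not the asker's actual code):

<?php
// First lines of the *remote* script being triggered:
// keep working even after the caller drops the connection after ~1 second.
ignore_user_abort(true); // don't stop when the client aborts
set_time_limit(0);       // allow the long-running work (up to 5 minutes) to finish

// ... heavy processing continues here ...
?>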

梦幻之岛 2024-09-05 11:21:37

It is not file_get_contents() that consumes that much time, but the network connection itself.
Consider not submitting GET data to an array of sites; instead, create an RSS feed and let them fetch the RSS data.

水晶透心 2024-09-05 11:21:37

I don't fully understand the meaning behind your script.
But here is what you can do:

  1. To quickly avoid the fatal error, you can just add set_time_limit(120) at the beginning of the file. This will allow the script to run for 2 minutes. Of course, you can use any number you want, and 0 for infinite.
  2. If you just need to call the URL and you don't "care" about the result, you should use cURL in asynchronous mode. In that case, any call to the URL will not wait until it has finished, and you can fire them all off very quickly.

BR.

厌倦 2024-09-05 11:21:37

If the remote pages take up to 5 minutes to load, your file_get_contents will sit and wait for those 5 minutes. Is there any way you could modify the remote scripts to fork into a background process and do the heavy processing there? That way your initial hit will return almost immediately, and not have to wait for the startup period.

Another possibility is to investigate if a HEAD request would do the trick. HEAD does not return any data, just headers, so it may be enough to trigger the remote jobs and not wait for the full output.
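
One possible way to issue such a HEAD request with cURL (a sketch, again assuming the $sites array from the question; whether a HEAD request actually triggers the remote scripts depends on how they are set up):

<?php
// Fire HEAD requests so no response body is transferred.
foreach ($sites as $s) {
    $ch = curl_init($s['url']);
    curl_setopt($ch, CURLOPT_NOBODY, true);         // send HEAD instead of GET
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // don't echo the headers
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);           // cap the wait per site
    curl_exec($ch);
    curl_close($ch);
}
?>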
