How do I run multiple curl requests, processed sequentially?
Assuming I'm a big Unix rookie:
I'm running a curl request through cron every 15 minutes.
Curl is basically used to load a web page (PHP) that, given some arguments, acts as a script, like:
curl http://example.com/?update_=1
What I would like to achieve is to run another "script" using this curl technique,
- every time the other script is run
- before the other script is run
I have read that curl accepts multiple URLs in one command, but I'm unsure if this would process the URLs sequentially or in "parallel".
8 Answers
并行执行:It would most likely process them sequentially (why not just test it). But you can also do this:
make a file called
curlrequests.sh
put it in a file like thus:
save the file and make it executable with
chmod
:run your file:
or
As a side note, you can chain requests with
&&
, like this:And execute in parallel using
&
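The original code blocks did not survive the page scrape; here is a minimal sketch of the steps described above. The example.com URLs are taken from the question, and curlrequests.sh is the filename from this answer:

```shell
# Create curlrequests.sh: each curl starts only after the previous one
# has finished, so the requests run strictly in order.
cat > curlrequests.sh <<'EOF'
#!/bin/sh
curl "http://example.com/?update_=1"
curl "http://example.com/?update_=3"
EOF

# Make it executable, then run it (or use the full path):
chmod +x curlrequests.sh
#   ./curlrequests.sh

# Chaining with && runs each request only if the previous one succeeded:
#   curl "http://example.com/?update_=1" && curl "http://example.com/?update_=3"

# A trailing & backgrounds each request, so they run in parallel:
#   curl "http://example.com/?update_=1" & curl "http://example.com/?update_=3" &
```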
According to the curl man page, curl will try to re-use connections for multiple transfers, so that getting many files from the same server does not require repeated connects and handshakes.

So the simplest and most efficient approach (curl will send them all down a single TCP connection, for those to the same origin) would be to put them all on a single invocation of curl, e.g.:
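The example command was lost from this answer; here is a sketch of a single multi-URL invocation, using the URL pattern from the question. The file:// variant below is only there so the demo runs without network access:

```shell
# Real usage: both URLs in one invocation; curl fetches them in order and
# re-uses the TCP connection for URLs on the same origin:
#   curl "http://example.com/?update_=1" "http://example.com/?update_=3"

# Offline demonstration of the same multi-URL form:
printf 'one\n' > /tmp/multi_a.txt
printf 'two\n' > /tmp/multi_b.txt
curl -s "file:///tmp/multi_a.txt" "file:///tmp/multi_b.txt"
```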
Another crucial method not mentioned here is using the same TCP connection for multiple HTTP requests, with exactly one curl command for this.

This is very useful for saving network bandwidth and client and server resources, and it avoids the need for multiple curl commands, since curl by default closes the connection when the end of the command is reached. Keeping the connection open and re-using it is very common for standard clients running a web app.

Starting with curl version 7.36.0, the --next (or -:) command-line option lets you chain multiple requests in a single invocation, and it is usable both on the command line and in scripts. Per the curl manpage, --next tells curl to use a separate operation for the following URL and its associated options. For example:
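The example was stripped from this answer; here is a sketch of --next with hypothetical endpoints. Note that --next starts a fresh set of options, so per-URL options (like -s, or -d for a POST) must be repeated after it:

```shell
# Real usage (hypothetical endpoints): POST to the first URL, then a plain
# GET to the second, in one command:
#   curl -d "update=1" "http://example.com/first" --next "http://example.com/second"

# Offline demonstration: -s is given again after --next because --next
# resets the options for the following URL.
printf 'alpha\n' > /tmp/next_a.txt
printf 'beta\n'  > /tmp/next_b.txt
curl -s "file:///tmp/next_a.txt" --next -s "file:///tmp/next_b.txt"
```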
I'm 13 years late to the party, but I have something new to add compared to all the other answers here.

I noticed you have a number at the end of the URL. I recently faced the same issue, and the number was a running number from 0 to 13. Here is how I solved it in one single line, using curl's URL globbing (documented in the manpage's section on URL ranges); if you, for example, only want the even numbers, you can specify the step as well:
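The one-liners were stripped from this answer; here is a sketch using curl's range globbing. The example.com URL is the one from the question; the file:// variant only exists so the sketch runs offline:

```shell
# Real usage: fetch ?update_=0 through ?update_=13, in order:
#   curl "http://example.com/?update_=[0-13]"
# Only the even numbers, by adding a step of 2 to the range:
#   curl "http://example.com/?update_=[0-13:2]"

# Offline demonstration of the same [start-stop] and [start-stop:step]
# globs (the URL is quoted so the shell does not touch the brackets):
for i in 0 1 2 3; do printf '%s' "$i" > "/tmp/glob_$i.txt"; done
curl -s "file:///tmp/glob_[0-3].txt"
curl -s "file:///tmp/glob_[0-3:2].txt"
```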
I think this uses more of curl's native capabilities.
Write a script with the two curl requests in the desired order and run it via cron, like:
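A minimal sketch of such a script (the script name is hypothetical; the URLs come from the question), plus the matching crontab line:

```shell
# Create the script; the second request starts only after the first finishes.
cat > sequential-updates.sh <<'EOF'
#!/bin/sh
curl "http://example.com/?update_=1"
curl "http://example.com/?update_=3"
EOF
chmod +x sequential-updates.sh

# Crontab entry to run it every 15 minutes (adjust the path):
#   */15 * * * * /path/to/sequential-updates.sh
```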
This will do what you want: it uses an input file and is super fast. Put one entry per line in your input file; the requests will follow the order of the input file. Save it as whatever.sh and make it executable.
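The script itself is missing from this answer, so this is a guess at its shape: a plain while read loop that fetches each line of the input file in order (the original may have used a different construct). The file:// URLs in the sample input are only there so the sketch runs without network access:

```shell
# Sample input file; in practice each line would be one of your own URLs.
printf 'file:///tmp/seq_1.txt\nfile:///tmp/seq_2.txt\n' > urls.txt
printf 'first\n'  > /tmp/seq_1.txt
printf 'second\n' > /tmp/seq_2.txt

# whatever.sh, in spirit: fetch the URLs strictly in file order.
while IFS= read -r url; do
  curl -s "$url"
done < urls.txt
```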
You can also use curly brackets {}. Let's say you want to curl:

a. wildfly_datasources_jdbc_total
b. wildfly_datasources_jdbc_currently
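The command was lost from this answer; here is a sketch of curl's {} globbing, with the metric names from the answer and a hypothetical host and path. The URL is quoted so the shell does not expand the braces itself:

```shell
# Real usage (hypothetical host and path), one request per brace alternative:
#   curl "http://myhost:9990/metrics/wildfly_datasources_jdbc_{total,currently}"

# Offline demonstration of the same {a,b} globbing:
printf 'T' > /tmp/wildfly_datasources_jdbc_total
printf 'C' > /tmp/wildfly_datasources_jdbc_currently
curl -s "file:///tmp/wildfly_datasources_jdbc_{total,currently}"
```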