I'm having a problem with curl_multi_*. I want to create a class/function that receives, let's say, 1000 URLs and processes them 5 at a time, so that when a URL finishes downloading, the now-available slot is allocated to a new URL that hasn't been processed yet.
I've seen some implementations of curl_multi, but none of them allows me to do what I want. I believe the solution lies somewhere in the use of curl_multi_select, but the documentation isn't very clear and the user notes don't help much.
Can anyone please provide me with some examples of how I can implement such a feature?
Here's one way to do it. This script will fetch any number of URLs at a time, adding a new one as each finishes (so it's always fetching $maxConcurrent pages).
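The answer's original script isn't reproduced here, so below is a minimal sketch of that rolling pattern, assuming PHP with the curl extension. The function name fetchUrls and the specific curl options (timeout, redirect handling) are illustrative choices, not from the answer; $maxConcurrent matches the variable the answer mentions.

```php
<?php
// Rolling curl_multi pool: keep at most $maxConcurrent transfers running,
// and start a new one from the queue each time another finishes.
function fetchUrls(array $urls, int $maxConcurrent = 5): array
{
    $results  = [];
    $queue    = $urls;   // URLs not started yet
    $inFlight = 0;       // transfers currently running
    $mh       = curl_multi_init();

    $startNext = function () use (&$queue, &$inFlight, $mh): void {
        $url = array_shift($queue);
        $ch  = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 30);
        curl_setopt($ch, CURLOPT_PRIVATE, $url); // remember which URL this handle fetches
        curl_multi_add_handle($mh, $ch);
        $inFlight++;
    };

    // Prime the pool with the first $maxConcurrent transfers.
    while ($inFlight < $maxConcurrent && $queue) {
        $startNext();
    }

    do {
        curl_multi_exec($mh, $running);
        if ($running && curl_multi_select($mh) === -1) {
            usleep(10000); // select failed; back off briefly instead of busy-looping
        }
        // Harvest every finished transfer and refill its slot from the queue.
        while ($info = curl_multi_info_read($mh)) {
            $ch  = $info['handle'];
            $url = curl_getinfo($ch, CURLINFO_PRIVATE);
            // ($info['result'] !== CURLE_OK here would indicate a failed transfer)
            $results[$url] = curl_multi_getcontent($ch);
            curl_multi_remove_handle($mh, $ch);
            curl_close($ch);
            $inFlight--;
            if ($queue) {
                $startNext();
            }
        }
    } while ($inFlight > 0);

    curl_multi_close($mh);
    return $results;
}

// Example usage (URLs are placeholders):
// $pages = fetchUrls(['https://example.com/a', 'https://example.com/b'], 5);
```

The two pieces that make this work are CURLOPT_PRIVATE, which maps each finished handle back to its URL when the results are collected, and curl_multi_select, which blocks until some handle has activity so the loop doesn't spin at 100% CPU while downloads are in progress.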