Multi-request pycurl running forever (infinite loop)

Posted 2024-11-03 20:18:18

I want to perform a multi-request using PycURL. The code is:

m.add_handle(handle)
requests.append((handle, response))

# Perform multi-request.
SELECT_TIMEOUT = 1.0
num_handles = len(requests)
while num_handles:
    ret = m.select(SELECT_TIMEOUT)
    if ret == -1: continue
    while 1:
        ret, num_handles = m.perform()
        print "In while loop of multicurl"
        if ret != pycurl.E_CALL_MULTI_PERFORM: break

The thing is, this loop runs forever; it never terminates.
Can anyone tell me what it does and what the possible problems might be?

2 Answers

╰つ倒转 2024-11-10 20:18:18

Have you gone through the official PycURL example code? The following script implements the multi interface; I tried executing it and was able to crawl around 10,000 URLs in parallel in about 300 seconds. I think this is exactly what you want to achieve? Correct me if I'm wrong.

#! /usr/bin/env python
# -*- coding: iso-8859-1 -*-
# vi:ts=4:et
# $Id: retriever-multi.py,v 1.29 2005/07/28 11:04:13 mfx Exp $

#
# Usage: python retriever-multi.py <file with URLs to fetch> [<# of
#          concurrent connections>]
#

import sys
import pycurl

# We should ignore SIGPIPE when using pycurl.NOSIGNAL - see
# the libcurl tutorial for more info.
try:
    import signal
    from signal import SIGPIPE, SIG_IGN
    signal.signal(signal.SIGPIPE, signal.SIG_IGN)
except ImportError:
    pass


# Get args
num_conn = 10
try:
    if sys.argv[1] == "-":
        urls = sys.stdin.readlines()
    else:
        urls = open(sys.argv[1]).readlines()
    if len(sys.argv) >= 3:
        num_conn = int(sys.argv[2])
except:
    print "Usage: %s <file with URLs to fetch> [<# of concurrent connections>]" % sys.argv[0]
    raise SystemExit


# Make a queue with (url, filename) tuples
queue = []
for url in urls:
    url = url.strip()
    if not url or url[0] == "#":
        continue
    filename = "doc_%03d.dat" % (len(queue) + 1)
    queue.append((url, filename))


# Check args
assert queue, "no URLs given"
num_urls = len(queue)
num_conn = min(num_conn, num_urls)
assert 1 <= num_conn <= 10000, "invalid number of concurrent connections"
print "PycURL %s (compiled against 0x%x)" % (pycurl.version, pycurl.COMPILE_LIBCURL_VERSION_NUM)
print "----- Getting", num_urls, "URLs using", num_conn, "connections -----"


# Pre-allocate a list of curl objects
m = pycurl.CurlMulti()
m.handles = []
for i in range(num_conn):
    c = pycurl.Curl()
    c.fp = None
    c.setopt(pycurl.FOLLOWLOCATION, 1)
    c.setopt(pycurl.MAXREDIRS, 5)
    c.setopt(pycurl.CONNECTTIMEOUT, 30)
    c.setopt(pycurl.TIMEOUT, 300)
    c.setopt(pycurl.NOSIGNAL, 1)
    m.handles.append(c)


# Main loop
freelist = m.handles[:]
num_processed = 0
while num_processed < num_urls:
    # If there is an url to process and a free curl object, add to multi stack
    while queue and freelist:
        url, filename = queue.pop(0)
        c = freelist.pop()
        c.fp = open(filename, "wb")
        c.setopt(pycurl.URL, url)
        c.setopt(pycurl.WRITEDATA, c.fp)
        m.add_handle(c)
        # store some info
        c.filename = filename
        c.url = url
    # Run the internal curl state machine for the multi stack
    while 1:
        ret, num_handles = m.perform()
        if ret != pycurl.E_CALL_MULTI_PERFORM:
            break
    # Check for curl objects which have terminated, and add them to the freelist
    while 1:
        num_q, ok_list, err_list = m.info_read()
        for c in ok_list:
            c.fp.close()
            c.fp = None
            m.remove_handle(c)
            print "Success:", c.filename, c.url, c.getinfo(pycurl.EFFECTIVE_URL)
            freelist.append(c)
        for c, errno, errmsg in err_list:
            c.fp.close()
            c.fp = None
            m.remove_handle(c)
            print "Failed: ", c.filename, c.url, errno, errmsg
            freelist.append(c)
        num_processed = num_processed + len(ok_list) + len(err_list)
        if num_q == 0:
            break
    # Currently no more I/O is pending, could do something in the meantime
    # (display a progress bar, etc.).
    # We just call select() to sleep until some more data is available.
    m.select(1.0)


# Cleanup
for c in m.handles:
    if c.fp is not None:
        c.fp.close()
        c.fp = None
    c.close()
m.close()
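
To try the script as-is, save it (for example as retriever-multi.py), put one URL per line into a text file, and run something like python retriever-multi.py urls.txt 20 (the file name and connection count here are just examples); each fetched page is written to doc_NNN.dat in the current directory.

The part that matters most for your loop is how completion is detected: the example sets CONNECTTIMEOUT and TIMEOUT on every handle so a hung server cannot stall a transfer forever, and it counts finished transfers via m.info_read() instead of relying only on the handle count returned by m.perform(). Below is a minimal sketch (not a drop-in fix) of that pattern applied to a fixed set of handles such as your requests list; it assumes m is your pycurl.CurlMulti object and that every handle has already been added with m.add_handle():

import pycurl

# Sketch: drive already-added handles to completion and stop once
# info_read() has reported every transfer as finished or failed.
SELECT_TIMEOUT = 1.0
num_processed = 0
total = len(requests)   # requests = [(handle, response), ...] from the question

while num_processed < total:
    # Let libcurl do whatever work it can do right now.
    while 1:
        ret, num_handles = m.perform()
        if ret != pycurl.E_CALL_MULTI_PERFORM:
            break
    # Reap transfers that have completed, successfully or with an error.
    while 1:
        num_q, ok_list, err_list = m.info_read()
        for handle in ok_list:
            m.remove_handle(handle)
        for handle, errno, errmsg in err_list:
            m.remove_handle(handle)
        num_processed += len(ok_list) + len(err_list)
        if num_q == 0:
            break
    # Wait (up to SELECT_TIMEOUT seconds) until more I/O is possible.
    if num_processed < total:
        m.select(SELECT_TIMEOUT)
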
戏蝶舞 2024-11-10 20:18:18

I think it is because you only break out of the inner while loop:

# Perform multi-request.
SELECT_TIMEOUT = 1.0
num_handles = len(requests)
while num_handles:                           #  while nr.1
    ret = m.select(SELECT_TIMEOUT)
    if ret == -1: continue
    while 1:                                 #  while nr.2
        ret, num_handles = m.perform()
        print "In while loop of multicurl"
        if ret != pycurl.E_CALL_MULTI_PERFORM: break
    '**'

So what happens when you use 'break' is that you break out of the current while loop (you are in the second while loop when you hit the break).
The next step for the program is the line marked '**' above; since that is the last line of the outer loop body, execution jumps back to the top of the while num_handles loop,
and three lines further down it runs into 'while 1:' again, and so on. That's how you get the infinite loop.
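
A tiny stand-alone illustration of that control flow (plain Python, no pycurl involved; the counter is only there to show how many times the outer loop body runs):

# Stand-alone sketch of the nested-break behaviour described above.
passes = 0
while passes < 3:          # while nr.1 (stands in for "while num_handles:")
    passes += 1
    while 1:               # while nr.2
        break              # leaves ONLY this inner loop...
    # ...execution resumes here (the line marked '**' above),
    # then jumps back up to re-test the outer condition.
print("the outer loop body ran %d times" % passes)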

So the fix would be:

# Perform multi-request.
SELECT_TIMEOUT = 1.0
num_handles    = len(requests)
while num_handles:                           #  while nr.1
    ret = m.select(SELECT_TIMEOUT)
    if ret == -1: continue
    while 1:                                 #  while nr.2
        ret, num_handles = m.perform()
        print "In while loop of multicurl"
        if ret != pycurl.E_CALL_MULTI_PERFORM: 
            break
    break

So what happens here is that as soon as it breaks out of the nested while loop, it automatically breaks out of the first loop too
(it would never reach that line otherwise, because of the outer while condition and the continue used before it).
