Python urlretrieve: rate limiting and resuming partial downloads

Posted 2024-10-15 23:42:34


I'm using the code from this thread to limit my download rate.

How do I incorporate resuming of partial downloads into the rate-limiting code? The examples I've found use urlopen instead of urlretrieve, and the RateLimit class depends on urlretrieve.

I'd like an external function that handles the partial downloading, without having to change the RateLimit class:

import urllib

from throttle import TokenBucket, RateLimit

def retrieve_limit_rate(url, filename, rate_limit):
    """Fetch the contents of url, throttled to rate_limit kB/s."""
    bucket = TokenBucket(10*rate_limit, rate_limit)

    print "rate limit = %.1f kB/s" % (rate_limit,)

    print 'Downloading %s...' % filename
    # RateLimit is passed to urlretrieve as its reporthook callback.
    rate_limiter = RateLimit(bucket, filename)
    #
    # What do I put here to allow resuming files?
    #
    return urllib.urlretrieve(url, filename, rate_limiter)
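
For reference, here is a rough, untested sketch of the kind of approach I've been considering. It assumes TokenBucket exposes a consume(n) method (n in kB, matching the kB/s rate above) and drops down to urllib2 to send a Range header, since urlretrieve can't send one; that also means it bypasses RateLimit entirely, which is exactly what I'd like to avoid:

import os
import time
import urllib2

from throttle import TokenBucket

def resume_limit_rate(url, filename, rate_limit, chunk_size=8192):
    """Sketch: resume a partial download, throttled by the token bucket.

    Assumes TokenBucket.consume(n) returns True once n tokens (kB here)
    are available; rate_limit is in kB/s as in retrieve_limit_rate.
    """
    bucket = TokenBucket(10*rate_limit, rate_limit)

    offset = os.path.getsize(filename) if os.path.exists(filename) else 0
    request = urllib2.Request(url)
    if offset:
        # Ask the server to send only the bytes we don't have yet.
        request.add_header('Range', 'bytes=%d-' % offset)

    response = urllib2.urlopen(request)
    # 206 Partial Content means the Range was honoured; otherwise the
    # server is sending the whole file and we have to start over.
    mode = 'ab' if response.getcode() == 206 else 'wb'

    out = open(filename, mode)
    try:
        while True:
            # Block until the bucket has enough tokens for one chunk.
            while not bucket.consume(chunk_size / 1024.0):
                time.sleep(0.05)
            chunk = response.read(chunk_size)
            if not chunk:
                break
            out.write(chunk)
    finally:
        out.close()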


1 Answer

蓝天 2024-10-22 23:42:34


You may be able to use PyCurl instead:

import os
import pycurl

def curl_progress(total, existing, upload_t, upload_d):
    # Progress callback: report how much of the download has completed.
    try:
        frac = 100.0 * float(existing) / float(total)
    except ZeroDivisionError:
        frac = 0
    print "Downloaded %d/%d (%0.2f%%)" % (existing, total, frac)

def curl_limit_rate(url, filename, rate_limit):
    """Rate limit in bytes per second."""
    c = pycurl.Curl()
    c.setopt(c.URL, url)
    c.setopt(c.MAX_RECV_SPEED_LARGE, rate_limit)
    if os.path.exists(filename):
        # Append to the partial file and tell the server to resume
        # from the bytes we already have.
        file_id = open(filename, "ab")
        c.setopt(c.RESUME_FROM, os.path.getsize(filename))
    else:
        file_id = open(filename, "wb")

    c.setopt(c.WRITEDATA, file_id)
    c.setopt(c.NOPROGRESS, 0)
    c.setopt(c.PROGRESSFUNCTION, curl_progress)
    c.perform()
    c.close()
    file_id.close()
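
A call might look like this (hypothetical URL and filename; the limit is given in bytes per second, so 10*1024 caps the transfer at roughly 10 kB/s). Running the same call again after an interrupted transfer should append to the existing partial file:

# Hypothetical usage: cap the transfer at ~10 kB/s; a second run with the
# same filename resumes from the end of the existing partial file.
curl_limit_rate("http://example.com/big_file.iso", "big_file.iso", 10*1024)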